over the last two decades , the evidence of power laws in frequency - size distributions of several natural hazards such as earthquakes , volcanic eruptions , forest fires and landslides has suggested a relationship between these complex phenomena and self - organized criticality ( soc ) .the idea of soc , applied to many media exhibiting avalanche dynamics , refers to the tendency of natural systems to self - organize into a critical state where the distribution of event sizes is represented by a power law with an exponent , which is universal in the sense that it is robust to minor changes in the system .generally , the nature of a _ critical _ state is evidenced by the fact that the size of a disturbance to the system is a poor predictor of the system response .let us consider storms as perturbations for natural slopes .large storms can produce large avalanches , but also small storms sometimes can do it . on the other hand , small stormsusually do not produce any avalanche , but also large storms may not cause any avalanching phenomena .moreover , avalanches triggered by small storms can be larger than those triggered by large storms. the unpredictability of the sizes of such system responses to incremental perturbations and the observed power - law statistics could be the exhibition of self - organized critical behavior in most natural avalanches .however , the idea of understanding power - law distributions within the framework of soc is not the only one .recently , in order to reproduce the range of the power - law exponents observed for landslides , some authors have introduced a two - threshold cellular automaton , which relates landslide dynamics to the vicinity of a breakdown point rather than to self - organization . in this paper , we report an accurate investigation of a cellular automaton model , which we have recently proposed to describe landslide events and , specifically , their frequency - size distributions . in particular , we discuss the role of a finite driving rate and the anisotropy effects in our non - conservative system .it has been pointed out by several authors that the driving rate is a parameter that has to be fine tuned to zero in order to observe criticality .we notice that the limit of zero driving rate is only attainable in an ideal chain reaction , therefore finite rates of external drives are essential ingredients in the analysis of the dynamics of real avalanche processes .we show that increasing the driving rate the frequency - size distribution of landslide events evolves continuously from a power - law ( small driving rates ) to an exponential ( gaussian ) function ( large driving rates ) .interestingly , a crossover regime characterized by a maximum of the distribution at small sizes and a power - law decay at medium and large sizes is found in the intermediate range of values of the driving rate for a wide range of level of conservation .power - law behaviors are robust even though their exponents depend on the system parameters ( e.g. 
, driving rate and level of conservation , see below ) .although the critical nature of landslides is not fully assessed and many authors believe that deviations from power law appear to be systematic for small landslides data , results from several regional landslide inventories show robust power - law distributions of medium and large events with a critical exponent .the variation in the exponents of landslide size distributions is larger than in the other natural hazards that exhibit scale - invariant size statistics .whether this variation of is caused by scatter in the data or because different exponents are associated with different geology , is an important open question , which we may contribute to address .the model we analyze describes the evolution of a space and time dependent factor of safety field .the factor of safety ( ) is defined as the ratio between resisting forces and driving forces .it is a complicate function of many dynamical variables ( pore water pressure , lithostatic stress , cohesion coefficients , etc . )whose rate of change is crucial in the characterization of landslide events .a landslide event may include a single landslide or many thousands .we investigate frequency - size distributions of landslide events by varying the driving rate of the factor of safety .although our probability density distributions are lacking of a direct comparison with frequency - size distributions of real landslides they reproduce power - law scaling with an exponent very close to the observed values .moreover , they allow us to get insight into the difficult problem of the determination of possible precursors of future events .the paper is organized as follows . in the next section ,we present the model and briefly discuss the differences between our approach and previous cellular automata models that have been recently introduced to characterize landslide frequency - size distributions . in section iii ,we report numerical results obtained by a systematic investigation of the effects of a finite driving rate on the frequency - size distribution .the values of the exponent of the power - law decay are given as a function of the driving rate and the level of conservation .an accurate analysis of the spatial distribution of the values of the factor of safety by varying the driving rate provides useful information for quantifying hazard scenarios of possible avalanche events . in section iv, we analyze the role of anisotropic transfer coefficients , which control the propagation of the instability .we summarize our results in a phase diagram that shows the location of power - law and non power - law scaling regions in the anisotropy parameter space .conclusions are summarized in section v.the instability in clays often starts from a small region , destabilizes the neighborhood and then propagates . such a progressive slope failure recalls the spreading of avalanches in the fundamental models of soc .the term self - organized criticality ( soc ) was coined by bak , tang and wiesenfeld to describe the phenomenon observed in a particular cellular automaton model , nowadays known as the sandpile model . 
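To make the avalanche dynamics referred to here concrete, the following is a minimal sketch (in Python, used for all code sketches in this text) of a Bak-Tang-Wiesenfeld-type sandpile on a square grid, whose rules are summarized in the next paragraph. It is meant only as an illustration of the generic SOC setup; it is not the landslide automaton studied in this paper, whose factor-of-safety variable, finite driving rate and anisotropic transfer coefficients are introduced below. Grid size, threshold and the number of added grains are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def btw_avalanche_sizes(L=32, n_grains=20000, z_crit=4):
    """Drop grains one by one on an LxL lattice; topple any site whose height
    reaches z_crit, sending one grain to each of its four neighbours (grains
    leaving the lattice are lost).  Returns the size (number of topplings)
    of the avalanche triggered by each added grain."""
    z = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1
        size = 0
        unstable = [(i, j)] if z[i, j] >= z_crit else []
        while unstable:
            a, b = unstable.pop()
            if z[a, b] < z_crit:
                continue
            z[a, b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < L and 0 <= nb < L:
                    z[na, nb] += 1
                    if z[na, nb] >= z_crit:
                        unstable.append((na, nb))
        sizes.append(size)
    return np.array(sizes)

sizes = btw_avalanche_sizes()
s = sizes[sizes > 0]
# crude log-binned histogram of avalanche sizes
bins = np.unique(np.logspace(0, np.log10(s.max() + 1), 20).astype(int))
hist, _ = np.histogram(s, bins=bins)
print(list(zip(bins[:-1], hist)))
```

The log-binned histogram produced in this way decays roughly as a power law over an intermediate range of sizes, which is the kind of frequency-size statistics discussed throughout this paper.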
in the original sandpile model, the system is perturbed externally by a random addition of sand grains .once the slope between two contiguous cells has reached a threshold value , a fixed amount of sand is transferred to its neighbors generating a chain reaction or avalanche .the non - cumulative number of avalanches with area satisfies a power - law distribution with a critical exponent , which is much smaller than the values of the power - law exponents observed for landslides .few years later the paper of bak et al . , olami , feder and christensen ( ofc ) recognized the dynamics of earthquakes as a physical realization of self - organized criticality and introduced a cellular automaton that gives a good prediction of the gutenberg - richter law . such a model , whose physical background belongs to the burridge - knopoff spring - block model , is based on a continuous dynamical variable which increases uniformly through time till reaches a given threshold and relaxes .this means that the dynamical variable decreases , while a part of the loss is transferred to the nearest neighbors .if this transfer causes one of the neighbors to reach the threshold value , it relaxes too , resulting in a chain reaction .ofc recognized that the model still exhibits power - law scaling in the non - conservative regime , even if the power - law exponent strongly depends on the level of conservation . in this paper, we investigate the role of a finite driving rate and of anisotropy in a non - conservative cellular automaton modeling landslides .in such a model , we sketch a natural slope by using a square grid where each site is characterized by a local value of the safety factor . in slope stability analysis ,the factor of safety , , against slip is defined in terms of the ratio of the maximum shear strength to the disturbing shear stress the limited amount of stress that a site can support is given by the empirical mohr - coulomb failure criterion : , where is the total normal stress , is the pore - fluid pressure , is the angle of internal friction of the soil and is the cohesional ( non - frictional ) component of the soil strength .if , resisting forces exceed driving forces and the slope remains stable .slope failure starts when . since a natural slope is a complex non - homogeneous system characterized by the presence of composite diffusion , dissipative and driving mechanisms acting in the soil ( such as those on the water content ) , we consider time and site dependent safety factor and treat the local inverse factor of safety as the non - conserved dynamical variable of our cellular automata model .the long - term driving of the ofc model is , here , replaced by a dynamical rule which causes the increases of through the time with a finite driving rate : .such a rule allows us to simulate the effect on the factor of safety of different complex processes which can change the state of stress of a cell .the model is driven as long as on all sites .then , when a site , say , becomes unstable ( i.e. , exceeds the threshold , ) it relaxes with its neighbors according to the rule : where denotes the nearest neighbors of site and is the fraction of toppling on .this relaxation rule is considered to be instantaneous compared to the time scale of the overall drive and lasts until all sites remain below the threshold . when reaches the threshold value and relaxes , the fraction of moving from the site to its `` downward '' ( resp . `` upward '' ) neighbor on the square grid is ( resp . 
) , as is the fraction to each of its `` left '' and `` right '' neighbors .the transfer parameters are chosen in order to individuate a privileged transfer direction : we assume and .we notice that the model reproduces features of the ofc model for earthquakes in the limit case and . a detailed analysis of the model on varying the transfer coefficients reported in sec .iv . since many complex dissipative phenomena ( such as evaporation mechanism , volume contractions , etc . ) contribute to a dissipative stress transfer in gravity - driven failures , we study the model in the non - conservative case , which makes our approach different from previous ones within the framework of soc . the conservation level , , and the anisotropy factors , which we consider here to be uniform , are actually related to local soil properties ( e.g. , lithostatic , frictional and cohesional properties ) , as well as to the local geometry of the slope ( e.g. , its morphology ) .the rate of change of the inverse factor of safety , , induced by the external drive ( e.g. , rainfall ) , in turn related to soil and slope properties , quantifies how the triggering mechanisms affect the time derivative of the fs field .recently , in order to reproduce the range of the power - law exponents observed for landslides , several authors have used two - threshold cellular automata , which relate landslide dynamics to self - organization or to the vicinity of a breakdown point . in the first approach , a time - dependent criterion for stability , with a not easy interpretation in terms of governing physics ,provides a power - law exponent close to 2 without any tuning .therefore , this approach does not explain the observed variability of . in ref . , the range of is found by tuning the maximum value of the ratio between the thresholds of two failure modes , the shear failure and the slab failure .however , the frequency - size distribution of avalanches is obtained by counting only clusters where shear failures have occurred , considering conservative transfer processes between adjacent cells with a different number of nearest neighbors . in this paper , the investigation of our non - conservative cellular automaton is mainly devoted to the characterization of landslide event dynamics on varying the driving rate in order to analyze different hazard scenarios .grid corresponding to four values of the driving rate .the logarithm of the normalized number of model events , ] in the whole range ] . grid for and ( triangles ) and ( squares ) .the inset shows the distribution curves in a log - linear scale for the range of values where commensurate peaks are observed .the vertical lines mark multiples of the system size .the results are obtained for and .,width=302 ] grid for and ( open dots ) and ( squares ) .the results are obtained for and .,width=302 ] we notice that an analysis of the anisotropic case of the ofc model is made in ref. where the authors introduce only two transfer coefficients and and control the degree of anisotropy by changing the ratio , while keeping the level of conservation constant .they find that the anisotropy has almost no effect on the power - law exponent while the scaling exponent , expressing how the finite - size cutoff scales with the system size , changes continuously from a two - dimensional to a one dimensional scaling of the avalanches .varying the anisotropic ratio in the range $ ] is equivalent to consider the straight line in the phase diagram of fig.[phd ] . as in ref . 
, we find that on moving along the line , the changes in the power - law exponent are negligible .however , differently ref . , we find a crossover in the frequency - size distribution behavior from power - law to non power - law ( see fig .[ comparison ] ) .we attribute such a different result to the finite driving rate . in conclusion, our analysis shows that only a finite range of values of the anisotropic transfer coefficients can supply power - law distributions .this characterization provides insight into the difficult determination of the complex and non - linear transfer processes that occur in a landslide event .explanation of the power - law statistics for landslides is a major challenge , both from a theoretical point of view as well as for hazard assessment . in order to characterize frequency - size distributions of landslide events, we have investigated a continuously driven anisotropic cellular automaton based on a dissipative factor of safety field .we have found that the value of the driving rate , which describes the variation rate of the factor of safety due to external perturbations , has a crucial role in determining landslide statistics . in particular , we have shown that , as the driving rate increases , the frequency - size distribution continuously changes from power - law to gaussian shapes , offering the possibility to explain the observed rollover of the data for small landslides .the values of the calculated power - law exponents are in good agreement with the observed values .moreover , the analysis of the model on varying the driving rate suggests the determination of correlated spatial domains of the factor of safety as a useful tool to quantify the severity of future landslide events . as concerns the effects of anisotropic transfer coefficients , which control the non - conservative redistribution of the load of failing cells, we have found that the power - law behavior of the frequency - size distribution is a feature of the model only in a limited region of the anisotropy parameter space .e. piegari wishes to thank a. avella for stimulating discussions and a very friendly collaboration .this work was supported by miur - prin 2002/firb 2002 , sam , crdc - amra , infm - pci , eu mrtn - ct-2003 - 504712 .
Abstract:
In order to characterize landslide frequency-size distributions and to identify hazard scenarios and their possible precursors, we investigate a cellular automaton in which the effects of a finite driving rate and of anisotropy are taken into account. The model is able to reproduce observed features of landslide events, such as the experimentally reported power-law distributions. We analyze the key role of the driving rate and show that, as it increases, a crossover from power-law to non-power-law behavior occurs. Finally, a systematic investigation of the model under varying anisotropy factors is performed, and the full diagram of its dynamical behaviors is presented.
donsker type functional limit theorems represent one of the key developments in probability theory .they express invariance principles for rescaled random walks of the form \,.\ ] ] many extension of the original invariance principle exist , most notably allowing dependence between the steps , or showing , like skorohod did , that nongaussian limits are possible if the steps have infinite variance . for a survey of invariance principles in the case of dependent variables in the domain of attraction of the gaussian law , we refer to , see also for a thorough survey of mixing conditions . in the case of a nongaussian limit ,the limit of the processes } ] under one of the skorohod topologies .the topology denoted by is the most widely used ( often implicitely ) and suitable for steps , but over the years many theorems involving dependent steps have been shown using other skorohod topologies . even in the case of a simple linear process from a regularly varying distribution, it is known that the limiting theorem can not be shown in the standard topology , see avram and taqqu . moreover , there are examples of such processes for which none of the skorohod topologies work , see . however , as we found out , for all those processes and many other stochastic models relevant in applications , random walks do converge , but their limit exists in an entirely different space . to describe the elements of such a space we use the concept of decorated functions anddenote it ) ] .our main analytical tool is the limit theory for point processes in certain nonlocally compact space which is designed to preserve the order of the observations as we rescale time to interval ] such that where denotes vague convergence on .in particular , for all , according to , the regular variation of the stationary sequence is equivalent to the existence of the tail process which satisfies for and , as , where denotes convergence of finite - dimensional distributions . moreover , the so - called spectral tail process , defined by , , turns out to be independent of and satisfies as . throughout the paper, we will assume in addition that the following condition , often referred to as the anticlustering or finite mean cluster length condition , holds .[ hypo : ac ] there exists a sequence of integers such that and for every , there are many time series satisfying conditions above including several nonlinear models like stochastic volatility or garch ( see ( * ? ? ?* section 4.4 ) ) but traditionally best studied example in the family are regularly varying linear processes described below . 
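As a concrete companion to the linear-process example introduced next, the sketch below simulates a finite-order moving average driven by symmetric Pareto-type innovations and computes the simple blocks estimator of the extremal index, i.e. the ratio of the number of blocks containing an exceedance of a high threshold to the total number of exceedances, in the spirit of the candidate extremal index appearing later in the text. The tail index, coefficients, sample size, threshold and block length are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rv_innovations(n, alpha=1.5):
    """Symmetric innovations with regularly varying tails: P(|Z| > x) = x^(-alpha), x >= 1."""
    return rng.uniform(size=n) ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=n)

def moving_average(z, coeffs):
    """Finite moving average X_t = sum_j c_j Z_{t-j}."""
    return np.convolve(z, coeffs, mode="valid")

def blocks_extremal_index(x, block_len, threshold):
    """theta_hat = (# blocks whose maximum of |X| exceeds u) / (# exceedances of u)."""
    n = (len(x) // block_len) * block_len
    blocks = np.abs(x[:n]).reshape(-1, block_len)
    n_exc = np.sum(np.abs(x[:n]) > threshold)
    n_blk = np.sum(blocks.max(axis=1) > threshold)
    return n_blk / n_exc if n_exc else np.nan

alpha, coeffs = 1.5, np.array([1.0, 1.0])      # MA(1) with equal weights
x = moving_average(rv_innovations(200_000, alpha), coeffs)
u = np.quantile(np.abs(x), 0.999)
# for a regularly varying linear process the extremal index equals
# max_j |c_j|^alpha / sum_j |c_j|^alpha, i.e. 0.5 for these coefficients
print(round(blocks_extremal_index(x, block_len=100, threshold=u), 3))
```

The estimate is sensitive to the choice of threshold and block length, which is exactly the kind of finite-sample effect that the asymptotic conditions used below are designed to control.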
[ex : mainf1 ] consider an infinite order moving average process with respect to a sequence of random variables with regularly varying distribution with index .that is , where is a sequence of real numbers and there exists such that under condition ( [ eq : condition - linear ] ) , the sequence is strictly stationary and regularly varying with tail index ; see .moreover , using standard truncation techniques , it can be proved that holds under the same conditions .corresponding tail process was computed in .it can be described as follows : there exists an integer valued random variable , independent of such that be the space of double - sided real sequences converging to zero at both ends , .on consider the uniform norm which makes into a separable banach space .indeed , is the closure of all finite double - sided rational sequences in the banach space of all bounded double - sided real sequences .define the shift operator on by and introduce an equivalence relation on by letting if for some . in the sequel, we consider the quotient space and define a function by for all , and all .the proof of the following result can be found in .[ lem : lo - complete ] the function is a metric which makes a separable and complete metric space . one can naturally embed the set into by mapping to its equivalence class and an arbitrary finite sequence to the equivalence class of the sequence which adds zeros in front and after it .let be a sequence of random variables distributed as conditionally on the event .more precisely , observe that by ( [ eq : ac ] ) , with probability 1 as , see proposition 4.2 in .therefore , in can be viewed as a random element in and in a natural way . in particular, the random variable is a.s .finite and clearly , since , not smaller than 1 . due to regular variation and ( [ eq : ac ] )one can show ( see ) that for one can also define a new sequence in as the equivalence class of consider now a block of observations , conditionally on the event that .it turns out that the law of such a block has a limiting distribution and that and are independent .[ lem : conv - cluster ] under , for every , as in .moreover , and in are independent random elements with values in and respectively . 1 . we write , and , . by the portmanteau theorem (* theorem 2.1 ) , it suffices to prove that & = { \mathbb{e}}[g({\{y_i , i\in{\mathbb{z}}\ } } ) \mid m_{-\infty,-1}^y \leq 1 ] \ ; , \label{eq : portmanteau } \end{aligned}\ ] ] for every nonnegative , bounded and uniformly continuous function on .+ define the truncation at level of by putting all the coordinates of which are smaller in absolute value than at zero , that is is the equivalence class of , where is a representantive of .note that by definition , .+ for a function on , define by .if is uniformly continuous , then for each , there exists such that if , that is , . thus it is sufficent to prove ( [ eq : portmanteau ] ) for for all sufficiently small .+ one can now follow the steps of the proof of ( * ? ? ?* theorem 4.3 ) .decompose the event according to the smallest such that we have =\sum_{j=1}^{r_n } { \mathbb{e}}\left[g_\zeta({\boldsymbol{x}}_n(1,r_n ) ) ; m_{1,j-1}\leq a_nu<|x_j| \right].\ ] ] fix a positive integer and let be large enough so that by the definition of , for all we have that next define . under assumption[ hypo : ac ] , it is known that there exists ] as . 
on the other hand , by ( [ eq : def - candidate - extremalindex ] ) and regular variation of , the second term tends to .in this section we prove our main result on the point process asymptotics for the sequence .prior to that , we introduce the topology of -convergence . to study convergence in distribution of point processes on a non locally - compact space introduce the notion of -convergence and refer to ( * ? ? ?* section a2.6 . ) and ( * ? ? ?* section 11.1 . ) for details .let be a complete and separable metric space and let denote the space of boundedly finite nonnegative borel measures on , i.e. such that for all bounded borel sets .the subset of of all point measures is denoted by a sequence in is said to converge to in the -topology , noted by , if for every bounded and continuous function with bounded support .equivalently , refers to for every bounded borel set with we note that -convergence coincides with vague convergence when is locally compact , we refer to or for details on vague convergence .the notion of -convergence is metrizable in such a way that is polish .denote by the corresponding borel sigma - field .it is known , see ( * ? ? ?* theorem 11.1.vii ) , that a sequence of random elements in , converges in distribution to , denoted by , if and only if for all and all bounded borel sets in such that a.s . for all .[ rem : laplacesmallerfamily ] as shown in ( * ? ? ?* proposition 11.1.viii ) , this is equivalent to the pointwise convergence of the laplace functionals , that is , ={\mathbb{e}}[e^{-n(f)}] ] by we will study the convergence of in the space of measures on \times { \tilde{l}_0}\setminus\{\tilde{\boldsymbol{0}}\} ] to do that , we embed into a new space where theory described above is applicable ( see ( * ? ? ?* section 2 ) , cf .* section 4.1 ) ) .denote by the unit sphere in and consider the space \times \mathbb{s} ] so by ( [ eq : a ] ) , ( [ eq : def - candidate - extremalindex ] ) and we get \\ & = \lim_{n\to\infty }n { \mathbb{p}}(|x_0| > \epsilon a_n ) \frac{{\mathbb{p}}(m_{1,r_n}>\epsilon a_n ) } { r_n{\mathbb{p}}(|x_0| > \epsilon a_n ) } { \mathbb{e}}[f({\boldsymbol{x}}_{n,1 } ) \mid m_{1,r_n}>\epsilon a_n ] \\ & = \epsilon^{-\alpha}\theta{\mathbb{e}}[f(\epsilon { \boldsymbol{z } } ) ] \ ; . \end{aligned}\ ] ] applying , the last expression is equal to \alpha y^{-\alpha-1}dy=\theta\int_\epsilon^\infty{\mathbb{e}}[f(y{\boldsymbol{q}})]\alpha y^{-\alpha-1 } dy \ ; .\ ] ] finally , since a.s . and if we have that \alpha y^{-\alpha-1}dy=\nu(f ) \ ; , \ ] ] by definition of .the previous lemma is closely related to the large deviations result obtained in mikosch and wintenberger ( * ? ? ?* theorem 3.1 ) . 
for a class of functions called cluster functionals , which can be directly linked to the functions we used in the proof of , they showed that }{r_n{\mathbb{p}}(|x_0| > a_n)}=\int_0^\infty { \mathbb{e}}[f(y\{\theta_t , t\geq 0\})-f(y\{\theta_t , t\geq 1\ } ) ] \alpha y^{-\alpha-1 } dy \;.\ ] ] however , for an arbitrary function with support bounded away from which is a.e .continuous with respect to , yields }{r_n{\mathbb{p}}(|x_0| > a_n)}=\lim_{n\to\infty } \nu_n(f)=\theta\int_0^\infty{\mathbb{e}}[f(y{\boldsymbol{q}})]\alpha y^{-\alpha-1}dy\,,\ ] ] using the continuous mapping argument and the fact that which gives an alternative and arguably more interpretable expression for the limit in .analogously to we define \times{\tilde{l}_0}) ] which are finite on the sets bounded away from the line \times \{\tilde{\boldsymbol{0}}\}, ] the subspace of \times{\tilde{l}_0}) ] we will need to assume that , intuitively speaking , one can break the dependent series into asymptotically independent blocks . it can be shown that the class of lipschitz continuous functions which depend only on coordinates greater than some , denoted by , is convergence determining ( in the sense of ) .[ hypo : aprimecluster ] there exists a sequence of integers such that and - \prod_{i=1}^{k_{n } } { \mathbb{e}}\biggl [ \exp \biggl\ { - f \biggl(\frac{i}{k_n},{\boldsymbol{x}}_{n , i } \biggr ) \biggr\ } \biggr ] \right ) = 0 \ ; , \end{aligned}\ ] ] for all .we prove in that -mixing implies .[ thm : ppconvinlo ] let be a stationary regularly varying sequence with tail index , satisfying for the same sequence . then in \times{\tilde{l}_0}) ] with intensity measure ; b. is a sequence of elements in , independent of and with common distribution equal to the distribution of in .let , for every , be independent copies of and define by the weak limits of and must coincide .by the previous lemma , in and now the convergence of to in \times{\tilde{l}_0}) ] is not locally compact .however , since the proof only involves laplace functionals of the underlying point processes , it remains valid if one changes from vague convergence used therein to -convergence as described above .see also ( * ? ? ?* theorem 2.4 ) or ( * ? ? ?* proposition 2.13 ) .the representation of follows easily by standard poisson point process transformation ( see ( * ? ? ?* section 3.3.2 . ) ) .in order to study the convergence of the partial sum process in cases where it fails to hold in the usual space , we first introduce an enlarged space . to establish convergence of the partial sum process of a dependent sequence we need to consider a function space larger than ,{\mathbb{r}}). ] of decorated cdlg functions . intuitively speaking ,elements in are built from cdlg functions to which at all jump points we attach a closed interval ( i.e. decoration ) .more precisely , the elements of have the form where ,{\mathbb{r}}) ] with and , for each , is a closed bounded interval in with at least two points such that for all moreover , we assume that for each there are at most finitely many times for which the length of the interval is greater than by ( * ? ? ?* theorem 15.4.2 . 
)this ensures that the graphs of elements in defined below , are compact subsets of paralleling the construction of the topology on ( see ) , this allows one to impose a metric on by using the hausdorff metric on the space of graphs of elements in .we associate with every triple a set - valued function and for every such we define its graph by \times{\mathbb{r}}:z\in x'(t)\}.\ ] ] as we can construct the triple the set - valued function and the graph starting from any one of the three , these are equivalent representations of the elements in . in the sequel , we will usually denote the elements of by .let denote the hausdorff metric on the space of compact subsets of i.e. , for compact subsets where and we then define a metric on , denoted also by without confusion , by we call the topology induced by on the topology . as shown in , the metric space is a separable , but not complete . also , we define the uniform metric on by where is again the hausdorff metric , but here applied to the space of compact subsets of obviously, is a stronger metric than i.e. for any we will often use the following elementary fact : for and it holds that ,[c , d])\leq |c - a|\vee |d - b|.\ ] ] by a slight abuse of notation , we identify every with an element in represented by : t\in disc(x ) \ } ) \ ; , \ ] ] where for any two real numbers by ] .consequently , we identify the space with the subset of given by \ } \ ; .\ ] ] as observed in ( * ? ? ?* theorem 15.4.6 . ) , for an element we have where is the completed graph of .since the topology on corresponds to the hausdorff metric on the space of the completed graphs the next result is immediate .[ lem : homeomorphismd ] the space endowed with the topology is homeomorphic to the subset in with the topology .[ rem : additionine ] because two elements in can have intervals at the same time point , the addition in is in general not well behaved .however , problems disappear if one of the summands is a continuous function . in such a case ,the sum is naturally defined as follows : consider an element in and a continuous function on , ] one can reconstruct [ thm : charofconve ] for elements the following are equivalent : * in i.e. * for all in a countable dense subset of , ] defined by ,\ ] ] and define also as usual , when , an additional condition is needed to deal with the small jumps .[ hypo : ansj ] for all , it is known from that marginal distributions of converge to those of an lvy process . this result is strengthened in to convergence in the topology if for all i.e. if all extremes within one cluster have the same sign . 
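The graph construction behind this metric can be illustrated numerically. The sketch below approximates the completed graph of a piecewise-constant function on [0,1] by a finite point cloud (the horizontal pieces plus the vertical segments at the jump times) and evaluates the Hausdorff distance between two such graphs. It shows, for instance, that one jump of size 2 and two back-to-back jumps of size 1 at nearby times are close in this graph metric even though the two functions are far apart in the uniform metric, which is the phenomenon the decorated-function framework is built to handle. The discretization steps and the example functions are our own illustrative choices.

```python
import numpy as np

def step_graph(times, values, dt=0.005, dy=0.005):
    """Point-cloud approximation of the completed graph of the cadlag step
    function equal to values[k] on [times[k], times[k+1])."""
    pts = []
    for k, v in enumerate(values):
        t0, t1 = times[k], times[k + 1]
        ts = np.arange(t0, t1, dt)
        pts.append(np.column_stack([ts, np.full(len(ts), float(v))]))
        if k + 1 < len(values):                       # vertical segment at the jump time t1
            lo, hi = sorted((values[k], values[k + 1]))
            ys = np.arange(lo, hi + dy, dy)
            pts.append(np.column_stack([np.full(len(ys), t1), ys]))
    return np.vstack(pts)

def hausdorff(A, B):
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# one jump of size 2 at t = 0.5 ...
g1 = step_graph([0.0, 0.5, 1.0], [0.0, 2.0])
# ... versus two jumps of size 1 at t = 0.5 and t = 0.51
g2 = step_graph([0.0, 0.5, 0.51, 1.0], [0.0, 1.0, 2.0])
print(hausdorff(g1, g2))   # about 0.01, while the uniform distance between the functions is 1
```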
in the next theorem, we remove any assumption of this kind and establish the convergence of the process in the space .moreover , we give the description of the limit in terms of the point process from .[ thm : partialsumconvine ] let be a stationary regularly varying sequence with tail index , satisfying for the same sequence if , assume moreover that holds and that < \infty \ ; .\ ] ] then with respect to topology on ,{\mathbb{r}}) ] given by + [ eq : levylimit ] + where the last limit holds almost surely uniformly on ] and therefore ( [ eq : sumoftheqjs ] ) always holds in this case , since the function is concave on for , the summability of this series must be assumed .it can be readily checked that for , ( [ eq : sumoftheqjs ] ) is implied by [ hypo : ac ] and the following intra - block negligibility condition [ rem : useful - w ] if ( [ eq : sumoftheqjs ] ) holds , the sums are almost surely well - defined and is a sequence of random variables with < \infty ] with intensity measure \alpha y^{-\alpha -1}dy. ] by setting , for } \;,\{t_i : \|{\tilde{\boldsymbol{x}}}^i\|_\infty > \epsilon \ } , \{i(t_i):\|{\tilde{\boldsymbol{x}}}^i\|_\infty > \epsilon\ } \right ) , \end{aligned}\ ] ] where .\ ] ] since belongs \times{\tilde{l}_0}), ] by we claim that is continuous on the set assume that . by an adaptation of ( * ? ? ?* proposition 3.13 ) , this convergence implies that the finitely many points of in every suitable set bounded away from converge pointwise to the finitely many points of in .in particular , this holds for and it follows that for all in ] and includes and , an application of gives that in endowed with the topology .+ recall the point process from . since the mean measure of does not have atoms , it is clear that a.s . therefore , by the same theorem and the continuous mapping argument where and 2 .[ item : step2 ] recall that and is a poisson point process on ] ( see ) . since , one can sum up the points i.e. therefore , defining , we obtain that the process \ ; , \ ] ] is almost surely a well - defined element in and moreover , it is an -stable lvy process .further , we define an element in ,{\mathbb{r}}) ] and , ] and , ] and , ] and the interval it holds that as hence , ( [ eq : asympeqe1 ] ) holds since and this finishes the proof in the case .2 . assume now that .as shown in [ item : step1 ] in the proof of [ item:01 ] , it holds that in where and .+ for every define a process by setting , for , ] from ( [ eq : conv - mu ] ) we have , for any , ] , by lemma [ lem : continuityofadditionine ] and ( [ eq : epsilonconv ] ) it follows that in where is given by ( see ) let be the cdlg part of i.e. by , converges uniformly a.s . to the process defined in ( [ eq : levylimit - alphageq1 ] ) as next , as in [ item : step2 ] in the proof of [ item:01 ] we can define a element in by where ,\ ] ] also , one can argue similarly as in the proof of ( [ eq : mbound1 ] ) to conclude that where the uniform metric on was defined in ( [ e : unmetric1 ] ) .now it follows from ( [ e : unmetric2 ] ) , ( [ eq : negligibilitycond1 ] ) and a.s .uniform convergence of to that in ,{\mathbb{r}}). 
] by define analogously using infimum instead of supremum .note that and need not be right - continuous at the jump times however , their partial supremum or infimum are functions .[ thm : suppartsumconv ] under the same conditions as in , it holds that } { \stackrel{d}{\longrightarrow}}\left(\sup_{s\leq t } v^+ ( s)\right)_{t\in [ 0,1]}\ ] ] and } { \stackrel{d}{\longrightarrow}}\left(\inf_{s\leq t } v^- ( s)\right)_{t\in [ 0,1]}\ ] ] in ,{\mathbb{r}}) ] by note that is non - decreasing and since for every there are at most finitely many times for which the is greater than , by ( * ? ? ?* theorem 15.4.1 . )it follows easily that this mapping is well - defined , i.e. that is indeed an element in also , by construction , }\ ] ] and }.\ ] ] define the subset of by and assume that in where by theorem [ thm : charofconve ] it follows that for all in a dense subset of ( 0,1 ] , including .also , the convergence trivially holds for since for all since is non - decreasing for all we can apply ( * ? ? ?* corollary 12.5.1 ) and conclude that in endowed with topology . since almost surely , by theorem [ thm : partialsumconvine ] and continuous mapping argument it follows that } { \stackrel{d}{\longrightarrow}}\left(\sup_{s\leq t } v^+ ( s)\right)_{t\in [ 0,1]}\ ] ] in endowed with topology .note that when a.s ., the limit in theorem [ thm : suppartsumconv ] is simply a so called frchet extremal process . for an illustration of the general limiting behavior of running maxima in the case of a linear processes ,consider again the moving average of order 1 from example [ ex : mainf4 ] .figure [ figma1 ] shows a path ( dashed line ) of the running maxima of the ma process .we can now characterize the convergence of the partial sum process in the topology in ) ] endowed with the topology .since the supremum functional is continuous with respect to the topology , this result implies that the limit of the running supremum the partial sum process is the running supremum of the limiting -stable lvy process as in the case of random variables .consider again the linear process of example [ ex : mainf4 ] . as we argued therein, the corresponding sequence is equal to a positive multiple of the sequence .this implies that condition ( [ eq : convm2 ] ) for convergence of the partial sum process can be expressed as this is exactly ( * ? ? ?* condition 3.2 ) .note that ( [ eq : condition - bk ] ) implies that this section we study record times in a stationary sequence under the assumptions of theorem [ thm : ppconvinlo ] . however , because record times remain unaltered after strictly increasing transformation , the main result below holds for stationary sequences with a general marginal distribution as long as they can be monotonically transformed into a regularly varying sequence . moreover , notethat the point process convergence in theorem [ thm : ppconvinlo ] directly extends from the space \times{\tilde{l}_0}) ] such that , hence & = \sum_{t_{i } \in ( a , b ] } r^{{\boldsymbol{x}}_i } ( m^{m}(t_i- ) ) = \sum_{j=1}^{k } r^{{\boldsymbol{x}}_i } ( m^{m}(t_{i_j}- ) ) \ ; .\end{aligned}\ ] ] for all large enough there also exist exactly ( depending on and ) time instances ] , , which can be enlarged slightly to a set ] implies ] with intensity measure .note that there are infinitely many points of in any interval \times[0,\epsilon] ] . 
since the exponential of a negative function is less than 1 , by definition of the total variation distance , the bound ( [ eq : bound - tv ] ) yields - { \mathbb{e}}\left [ \rme^{-\tilde{n}^*_n(f)}\right]\right| \leq \mathrm{d}_{tv}(\mathcal{l}(\tilde{\mathbb{x}}_n),\mathcal{l}(\tilde{\mathbb{x}}_n^ * ) ) = o(1 ) \ ; .\label{eq : bound - laplace } \end{aligned}\ ] ] we must now check that the same limit holds with the full blocks instead of the truncated blocks . under , we know by ( * ? ? ?* proposition 4.2 ) that for every and every sequence such that , then , applying ( [ eq : bigo ] ) yields , assume now that depends only on the components greater than in absolute value .then unless at least one component at the end of one block is greater than .this yields - { \mathbb{e}}[\rme^{-\tilde{n}_n''(f ) } ] \right| \leq { \mathbb{p}}\left ( \max_{1\leq j\leq k_n }\max_{1\leq i \leq \ell_n } |x_{jr_n - i+1}|>\epsilon a_n\right ) = o(1 ) \ ; .\end{aligned}\ ] ] the same relation also holds for the independent blocks . therefore , holds .the next lemma gives sufficient conditions for continuity of addition in the space ,{\mathbb{r}}) ] and an element in such that in also that is a sequence in ,{\mathbb{r}}) ] then the sequence converges in to an element defined by recall that , for compact subsets in where and by whitt ( * ? ? ?* theorem 15.5.1 . ) to show that in it suffices to prove that take an arbitrary note that is uniformly continuous so by the conditions of the lemma there exists and such that * * for all and * for all . ] take and a point i.e. .\ ] ] since there exists ] defined in and assume that <\infty ] and that is a poisson point process on ] in particular , for every there a.s .exists at most finitely many points such that . for , define \alpha y^{-\alpha -1}dy \ ; .\end{aligned}\ ] ] note that is well defined and that by bounded convergence , . since for every there a.s .exists at most finitely many points such that , for every we can define the process in ] .note first that the finite dimensional distributions of converge to those of an -stable lvy process . since is a poisson point process ,the process has independent increments with respect to , that is for every , is independent of .moreover , since is a poisson integral , we have \alpha y^{-\alpha-1 } \rmd y \\ & \leq \theta { \mathbb{e}}\left [ w^2 \int_0^{\delta'/w } \alpha y^{-\alpha+1 } \rmd y \right ] = \frac{\theta\alpha ( \delta')^{2-\alpha}}{(2-\alpha ) } { \mathbb{e}}[w^\alpha ] \ ; .\end{aligned}\ ] ] therefore , arguing exactly as in the proof of ( * ? ? 
?* proposition 5.7 , property 2 ) there exists a cdlg version of the limiting -stable lvy process such that converges almost surely uniformly to .there only remains to prove that for all , write with \alpha y^{-\alpha -1 } \rmd y \ ; , \\ r_\delta ( t ) & = \sum_{t_i\leq t } p_i \sum_{j\in{\mathbb{z } } } q_{i , j } { \mathbbm{1}_{\{p_i|q_{i , j}| > 1\ } } } { \mathbbm{1}_{\{p_i w_i \leq \delta\ } } } \ ; .\end{aligned}\ ] ] the process does not depend on and almost surely there are finitely many points such that .therefore , almost surely as .the process is a cdlg martingale , thus applying doob - meyer s inequality yields \alpha y^{-\alpha-1 } \rmd y \\ & \leq u^{-2 } \theta{\mathbb{e}}\left [ w^2 \int_0^{\delta / w } \alpha y^{-\alpha+1 } \rmd y \right ] = \frac{\theta\alpha \delta^{2-\alpha}}{u^2(2-\alpha ) } { \mathbb{e}}[w^\alpha ] \ ; .\end{aligned}\ ] ]parts of this paper were written when bojan basrak visited the laboratoire modalx at universit paris ouest nanterre .bojan basrak takes pleasures in thanking modalx and for excellent hospitality and financial support , as well as johan segers for useful discussions over the years .the work of bojan basrak and hrvoje planini has been supported in part by croatian science foundation under the project 3526 .the work of philippe soulier was partially supported by labex mme - dii .richard arratia . on the central role of scale invariant poisson processes on . in _ microsurveys in discrete probability ( princeton , nj , 1997 )_ , volume 41 of _ dimacs ser . discrete math ._ , pages 2141 .soc . , providence , ri , 1998 .
Abstract:
We prove a sequence of limiting results about weakly dependent, stationary and regularly varying stochastic processes in discrete time. After deducing the limiting distribution for individual clusters of extremes, we present a new type of point process convergence theorem. It is designed to preserve the entire information about the temporal ordering of the observations, which is typically lost in the limit after time scaling. By going beyond the existing asymptotic theory, we are able to prove a new functional limit theorem. Its assumptions are satisfied by a wide class of applied time series models for which the standard limit theory in the space of càdlàg functions does not apply. To describe the limit of the partial sums in this more general setting, we use the space of so-called decorated càdlàg functions. We also study the running maximum of the partial sums, for which a corresponding functional limit theorem can still be expressed in the familiar setting of the space of càdlàg functions. We further apply our method to analyze record times in a sequence of dependent stationary observations, even when their marginal distribution is not necessarily regularly varying. Under certain restrictions on the dependence among the observations, we show that the record times, after scaling, converge to a relatively simple compound scale-invariant Poisson process.

Keywords: point processes; regular variation; invariance principle; functional limit theorem; record times
low density parity check ( ldpc ) codes have become a mainstay of wireless communications .originally proposed by gallager in 1963 , ldpc codes lay largely unnoticed ( although see , , ) until their re - discovery in the mid 90s , . since then many hundreds of papers have been published outlining the near optimal performance of ldpc codes over a wide range of noisy wireless communication channels . in almost all of these previous works it was assumed that the characteristics of the noisy wireless channel was known .however , the reality is that in many cases an exact determination of the wireless channel is unavailable .indeed , several works have in fact investigated the case where a channel mismatch ( or channel misidentification ) occurs , which in turn impacts on the performance of the ldpc decoder ( e.g. ) .from the perspective of the work reported on here , the most interesting aspect of such channel mismatch studies is the asymmetry in the ldpc code performance as a function the channel crossover probability for the binary symmetric channel ( bsc ) .in fact , the main focus of the work described here is an investigation of whether such asymmetric ldpc code performance carries over from the classical bsc to quantum ldpc codes operating over the quantum depolarizing channel . since the discovery of css ( calderbank , shor and steane ) codes and stabilizer codes , it has been known how quantum error - correction codes can be developed in a similar manner to classical codes .quantum ldpc codes based on finite geometry were first proposed in , followed by the _ bicycle _ codes proposed in .their research explored the conjecture that the best quantum error - correcting codes will be closely related to the best classical codes , and in poulin _ et al . _ proposed serial turbo codes for quantum error correction .a more detailed history on the development of qecc can be found elsewhere _ e.g. _ , .more recently , many works attempting to improve quantum ldpc code performance have been published , and based on quasi - cyclic structure since it reduces the complexity of encoding and decoding . recently in the impact of channel mismatch effects on the performance of quantum low - density parity - check codes was highlighted . in investigations of the performance of quantum ldpc codesit has been assumed that perfect knowledge of the quantum channel exists .of course in practice this is not the case . in this workwe utilize optimal estimates of the channel derived from quantum fisher information about the channel parameters . even in this optimal situation ,the use of the unbiased estimator to estimate the level of channel noise produces an approximately order of magnitude degradation to the performance .we note , however , that the use of quantum entangled states will aid an estimating the noise level of the channel .however , there is a practical trade - off in hardware complexity between entanglement consumption and code performance . 
In this paper, we further investigate the behavior and the robustness of the sum-product decoding algorithm when simulating over the quantum depolarizing channel. Interestingly, an asymmetric behavior in performance is observed as a function of the estimated channel flip probability, showing that a quantum LDPC code experiences a smaller degradation when the channel noise is overestimated rather than underestimated, provided the overestimated channel knowledge remains within the threshold limit of the code. Based on these observations, a new decoding strategy is proposed that can improve a quantum code's performance by as much as . In Section II we discuss the behavior of the classical sum-product decoder under channel mismatch conditions. In Section III we briefly review quantum communications and the _stabilizer_ formalism for describing QECCs, and discuss their relationship to classical codes. In Section IV we explore the behavior of a quantum decoder when simulating over a quantum depolarizing channel and show how the decoding strategy we outline here leads to a significant improvement in performance relative to decoders that simply utilize the estimated channel parameter. Lastly, in Section V we draw some conclusions and discuss future work. It is well known in classical coding that low-density parity-check codes are capacity-approaching codes under optimal decoding. The best algorithm known for decoding them is the sum-product algorithm, also known as iterative probabilistic decoding or belief propagation (BP). The performance of sparse-graph codes can be improved if knowledge about the channel is available at the decoder side. However, in practical situations the decoder is unlikely to know the channel's characteristics exactly; thus, the robustness of the decoder to channel mismatch is also an important issue when designing practical codes. In , MacKay _et al._ investigated the sensitivity of Gallager's codes to the assumed noise level (the classical bit-flip probability) when decoded by belief propagation. A useful result therein is that the belief propagation decoder for LDPC codes appears to be robust to channel mismatch, because the block error probability is not a very sensitive function of the assumed noise level. In addition, an underestimation of the channel characteristics degrades the performance more than an overestimation does. This behavior is shown in Fig. [fig:classicalbehavior]. Our results shown in Fig. [fig:classicalbehavior] are for a rate one-half code of block length over a binary symmetric channel. The code is a regular LDPC code constructed so as to maximize the length of its shortest cycle; a toy illustration of where the assumed noise level enters such a decoder is given in the sketch below.
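The sketch is a textbook tanh-rule sum-product decoder for a binary linear code over the BSC; the only place the channel enters is the initial log-likelihood ratio log((1-f_hat)/f_hat) computed from the assumed crossover probability f_hat. The (7,4) Hamming parity-check matrix and the chosen f_hat values are purely illustrative; this is not the regular LDPC code used for the reported simulations, and the toy example is far too small to reproduce the asymmetry of Fig. [fig:classicalbehavior].

```python
import numpy as np

def bp_decode_bsc(H, y, f_assumed, max_iter=50):
    """Sum-product decoding of a hard-decision word y received over a BSC,
    using the assumed crossover probability f_assumed for the channel LLRs."""
    m, n = H.shape
    llr0 = (1 - 2.0 * y) * np.log((1 - f_assumed) / f_assumed)   # +LLR if y=0, -LLR if y=1
    M = H * llr0                                                 # variable-to-check messages
    x_hat = y.copy()
    for _ in range(max_iter):
        T = np.tanh(np.clip(M, -30, 30) / 2.0)
        T = np.where(H == 1, T, 1.0)
        T = np.where(T == 0, 1e-12, T)                           # numerical safeguard
        prod = T.prod(axis=1, keepdims=True)
        E = H * 2.0 * np.arctanh(np.clip(prod / T, -0.999999, 0.999999))  # check-to-variable
        total = llr0 + E.sum(axis=0)
        M = H * (total - E)                                      # extrinsic variable-to-check
        x_hat = (total < 0).astype(int)
        if not np.any((H @ x_hat) % 2):                          # all parity checks satisfied
            break
    return x_hat

# (7,4) Hamming code, all-zero codeword with one flipped bit
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[4] = 1
for f_hat in (0.02, 0.11, 0.3):     # three different assumed crossover probabilities
    print(f_hat, bp_decode_bsc(H, received, f_hat))
```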
by inspection , the plotted result shows a similar behavior to that found by mackay in .the vertical straight line indicates the true value of the noise level , and the minimum point of the plot is approximately at the intersection between two lines .this infers that an optimal performance of a practical sum - product decoder can be achieved when the input of decoder is the true noise level .the slope towards the left of the graph is more steeper than the slope towards the right , indicating that underestimation of the noise level degrades the performance more so than overestimation does .however , when the estimated noise level is far too large , there is a significant increase in the error probability .such higher bit flip probabilities can be thought of as the classical shannon s limit ( in this case , the shannon s limit for rate code is ) , which theoretically represents the maximum allowable noise level under reliable transmission at a certain rate .a stabilizer generator that encodes qubits in qubits consists of a set of pauli operators on the qubits closed under multiplication , with the property that any two operators in the set _ commute _ , so that every stabilizer can be measured simultaneously .an example of a stabilizer generator is shown below for representing a rate quantum stabilizer code , consider now a set of error operators taking a state to the corrupted state . a given error operator either commutes or anti - commutes with each stabilizer ( row of the generator ) where .if the error operator commutes with then and therefore is a eigenstate of .similarly , if it anti - commutes with , the eigenstate is the measurement outcome of is known as the _ syndrome_. to connect quantum stabilizer codes with classical ldpc codes it is useful to describe any given pauli operator on qubits as a product of an -containing operator , a -containing operator and a phase factor .for example , the first row of matrix ( [ stabilizer ] ) can be expressed as thus , we can directly express the -containing operator and -containing operator as separate binary strings of length . in the -containing operator a represents the operator ( likewise for the operator ) , and for .the resulting binary formalism of the stabilizer is a matrix of columns and rows , where and represent -containing and -containing operators , respectively _ example 1 : _ for example , the set of stabilizers in ( [ stabilizer ] ) appears as the binary matrix due to the requirement that stabilizers must commute , a constraint on a general matrix can be written as __ . note that the quantum syndrome can be conceptually considered as an equivalent to the classical syndrome , where is a binary parity - check matrix and is a binary error vector . to summarize ,the property of stabilizer codes can be directly inferred from classical codes .any binary parity - check matrix of size that satisfies the constraint in ( [ binaryconstraint ] ) defines a quantum stabilizer code with rate that encodes qubits into qubits . as mentioned earlier ,an important class of codes are the _ css codes _these have the form where and are and matrices , respectively , ( does not necessary equal to ) .requiring ensures that constraint ( [ binaryconstraint ] ) is satisfied .if , the resulting css code structure is called a _ dual - containing code_. 
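The binary commutation constraint described above is straightforward to verify mechanically. In the sketch below a stabilizer is stored in the binary form [H_X | H_Z]; all rows commute precisely when H_X H_Z^T + H_Z H_X^T = 0 (mod 2), and a dual-containing CSS code is obtained from any classical parity-check matrix H satisfying H H^T = 0 (mod 2). The particular 4x8 self-orthogonal matrix is an arbitrary example of ours, not a code from this paper.

```python
import numpy as np

def rows_commute(HX, HZ):
    """Binary symplectic criterion: all stabilizer rows commute iff
    HX HZ^T + HZ HX^T = 0 (mod 2)."""
    return not np.any((HX @ HZ.T + HZ @ HX.T) % 2)

def css_stabilizer(H1, H2):
    """[HX | HZ] form of a CSS code built from classical checks H1, H2 with
    H1 H2^T = 0 (mod 2): X-type rows use H1, Z-type rows use H2."""
    HX = np.vstack([H1, np.zeros_like(H2)])
    HZ = np.vstack([np.zeros_like(H1), H2])
    return HX, HZ

# a self-orthogonal classical parity-check matrix (H H^T = 0 mod 2), so H can
# play both roles and the resulting CSS code is dual containing
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 0, 0, 1, 1]])
print((H @ H.T) % 2)                 # all zeros
HX, HZ = css_stabilizer(H, H)
print(rows_commute(HX, HZ))          # True

# syndrome of a binary error pattern (eX | eZ): a Z-part error is flagged by the X-type checks
eX, eZ = np.zeros(8, dtype=int), np.zeros(8, dtype=int)
eZ[2] = 1
print((HX @ eZ + HZ @ eX) % 2)
```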
most classical ( good ) ldpc codes do not satisfy the constraint ( [ binaryconstraint ] ) .motivated by the decoding asymmetry discussed above for classical ldpc codes , we now wish to explore whether a similar asymmetry in decoding performance is achieved for quantum ldpc codes . as stated above several well known classes of quantum codes , such as quantum stabilizer codes and css codes can be designed from existing classical codes . upon construction of such codeswe will then investigate the decoding performance under asymmetrical estimates of the quantum channel parameters .the quantum channel we investigate is the widely adopted polarization channel .the issue of quantum channel identification ( quantum process tomography ) is of fundamental importance for a range of practical quantum information processing problems ( _ e.g. _ ) . in the context of ldpc quantum error correction codes, it is normally assumed that the quantum channel is known perfectly in order for the code design to proceed .in reality of course , perfect knowledge of the quantum channel is not available - only some estimate of the channel is available . to make progress we will assume a depolarization channel with some parameter .given some initial system state , a decoherence model can be built by studying the time evolution of the system state s interaction with some external environment . in terms of the density operator ,the evolution of in a channel , which can be written as , is a completely positive , trace preserving , map which provides the required evolution of .the depolarization parameter , , of a qubit where , is defined such that means complete depolarization and means no depolarization . in terms of the well - known pauli matrices ( here , the depolarization channel for a single qubit can be defined as .note that it is also possible to parameterize the depolarization channel as , where .this latter form is more convenient for decoding purposes , and below we term as the _ true flip probability_. however , in what follows we will assume the true value of is unknown _ a priori _ , and must first be measured via some channel identification procedure .this estimate of , which we will refer to as , will be used in a decoder in order to measure its performance relative to a decoder in which the true is utilized . in general ,quantum channel identification proceeds by inputting a known quantum state ( the probe ) into a quantum channel that is dependent on some parameter ( in our case ) . by taking some quantum measurements on the output quantum state which leads to some result , we then hope to estimate .the input quantum state may be unentangled , entangled with an ancilla qubit ( or qudit ) , or entangled with another probe .multiple probes could be used , or the same probe can be recycled ( _ i.e_. sent through the channel again ) . as can be imagined many experimental schemes could be developed along these lines , and the performance of each scheme ( _ i.e. _ how well it estimates the true value of the parameter ) could be analyzed . however , in this study we will take a different tact . herewe will simply assume an experimental set - up is realized that obtains the information - theoretical _ optimal _ performance .optimal channel estimation via the use of the quantum fisher information has been well studied in recent years , particularly in regard to the determination of the parameter of the depolarizing channel ( _ e.g. _ , , , , ) . 
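Before turning to the estimation formulas below, the following numerical sketch makes the two ingredients concrete: the single-qubit depolarizing map in the flip-probability convention assumed here (no error with probability 1-f, each of X, Y, Z with probability f/3), and the quantum Fisher information of the output state computed from the eigendecomposition form of the symmetric-logarithmic-derivative expression, J(f) = sum_{i,j} 2 |<i| d(rho)/df |j>|^2 / (lambda_i + lambda_j). The single unentangled |0> probe is our own illustrative choice, not the optimal (possibly entangled) scheme of the cited estimation literature; it simply makes the Cramer-Rao bound 1/(N_m J(f)) concrete for a given number of measurements N_m.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def depolarize(rho, f):
    """Qubit depolarizing channel, flip-probability convention: no error with
    probability 1-f, and X, Y or Z applied with probability f/3 each."""
    return (1 - f) * rho + (f / 3.0) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def qfi(rho_of_f, f, eps=1e-6):
    """Quantum Fisher information via the symmetric logarithmic derivative,
    J = sum_{i,j} 2 |<i| d rho/df |j>|^2 / (lam_i + lam_j)."""
    rho = rho_of_f(f)
    drho = (rho_of_f(f + eps) - rho_of_f(f - eps)) / (2 * eps)
    lam, V = np.linalg.eigh(rho)
    d = V.conj().T @ drho @ V
    J = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            if lam[i] + lam[j] > 1e-12:
                J += 2 * abs(d[i, j]) ** 2 / (lam[i] + lam[j])
    return J

probe = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0| probe, illustration only
f_true, N_m = 0.08, 1000
J = qfi(lambda f: depolarize(probe, f), f_true)
crb_std = 1.0 / np.sqrt(N_m * J)                    # Cramer-Rao bound on the std of an unbiased estimate
print(round(J, 4), round(crb_std, 4))
```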
defining .quantum fisher information about can be written as ^2 , \end{aligned}\ ] ] where is the symmetric logarithmic derivative defined implicitly by and where signifies partial differential w.r.t . . with the quantum fisher information in hand ,the quantum cramer - rao bound can then be written as \ge { \left ( { n_mj(f ) } \right)^ { - 1}}\ ] ] where $ ] is the mean square error of the unbiased estimator , and is the number of independent quantum measurements .the performance results in are obtained by randomly estimating a flip probability from a truncated normal distribution at the decoder side , given the mean square error of the unbiased estimation . in return , the performance is degraded approximately an order of magnitude . the appropriate decoding algorithm to decode quantum ldpc codes is based on the classical sum - product algorithm since the most common quantum channel model , namely _ depolarizing channel _ , is analogous to the classical -ary symmetric channel .the received values at the decoder side can be mapped to measurement outcomes ( syndrome ) of the received qubit sequence , and this syndrome is then used in error estimation and recovery . assuming an initial quantum state representing a codeword , the initial probabilities for the qubit of the state undergoing an , or error are where is the flip probability known at the decoder .the standard bp algorithm operates by sending messages along the edges of the tanner graph .let and denote the messages sent from bit node _ i _ to check node _j _ and messages sent from check node _j _ to bit node _ i _. also denote as the number of neighbors of bit node , and define as the number of neighbors of check node .to initialize our algorithm , each qubit node sends out a message to all its neighbors equal to its initial probability value obtained according to equation ( [ llrbsc ] ) . upon reception of these messages, each check node sends out a message to its neighboring qubit node given by where denotes all neighbors of check node except qubit node , and the summation is over all possible error sequences .each bit node then sends out a message to its neighboring checks given by where denotes all neighbors of qubit node except check node .equations ( [ checktobit ] ) and ( [ bittocheck ] ) operate iteratively until the message is correctly decoded or the maximum pre - determined iteration number is reached . in this section ,we investigate the dependence of the performance of a quantum ldpc code on the estimated flip probability of a depolarizing channel using the same quantum ldpc code simulated in . in each decoding process, the decoder performed an iterative message passing algorithm ( sum - product decoding algorithm ) until it either found a valid codeword or timed out after a maximum number of iterations .if the maximum number of iterations was reached then the decoding process was considered a failure .conversely , whenever a valid codeword was found , it was the correct one regardless whether it was the actual transmitted codeword . 
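A small sketch of how the channel estimate enters the decoder input, mirroring the procedure described above: each qubit receives the prior (1-f_hat, f_hat/3, f_hat/3, f_hat/3) over {I, X, Y, Z}, and the assumed flip probability f_hat is drawn from a normal distribution centred on the true f, truncated to a plausible interval, with a standard deviation of the order set by the Cramer-Rao bound of the previous sketch. The truncation interval, the rejection loop and the numerical values are our own illustrative choices, and the full syndrome-based sum-product decoder itself is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def qubit_prior(f_hat):
    """Prior over {I, X, Y, Z} single-qubit errors under the assumed flip probability f_hat."""
    return np.array([1.0 - f_hat, f_hat / 3.0, f_hat / 3.0, f_hat / 3.0])

def sample_f_hat(f_true, sigma, lo=1e-4, hi=0.25):
    """Mismatched channel estimate drawn from a normal law truncated to [lo, hi]."""
    while True:
        f = rng.normal(f_true, sigma)
        if lo <= f <= hi:
            return f

f_true = 0.08
sigma = 0.011            # std of the order produced by the Cramer-Rao sketch above (N_m = 1000)
for _ in range(3):
    f_hat = sample_f_hat(f_true, sigma)
    print(round(f_hat, 4), np.round(qubit_prior(f_hat), 4))

# how the mismatch skews the decoder's prior: log-odds of "no error" versus one given Pauli error
for f_hat in (0.04, 0.08, 0.16):
    print(f_hat, round(float(np.log((1 - f_hat) / (f_hat / 3.0))), 3))
```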
the quantity plotted in the simulations herein is therefore the block / qubit error probability ( bler / qber ) as a function of noise level , where the block error probability is defined . the noise vectors were generated to have weight exactly , where was the block length of the code and was the true flip probability for the depolarizing channel .the decoder assumed an estimated flip probability .we varied the value of while the true flip probability is fixed .the result of our simulation is shown in figure [ fig : fvsfhat ] .similar to the case of the classical ldpc code discussed earlier , we can see from figure [ fig : fvsfhat ] that optimal performance in the quantum ldpc code can be obtained when the input at the decoder is the true flip probability , _ i.e. _ exact channel knowledge .the trend of the curve in figure [ fig : fvsfhat ] also shows that the impact on block error probability caused by an overestimation of the flip probability is less than that arising from an underestimation of the channel flip probability .when the assumed flip probability reaches a limit ( beyond the limit of for a classical rate code ) , there is a catastrophic increase in the error probability .this result indicates that in the quantum case ( just as found in the classical case ) a limit to the degradation in performance may occur if , in any estimate of the channel flip probability , an overestimate of the flip probability is utilized .we investigate this possibility further in what follows .consider now the case where a decoder can only attain partial channel information by probing the quantum channel using un - entangled or entangled quantum states ( only one measurement each , _ i.e. _ ) .given such partial information we will then weight our estimate of the channel parameter ( at the decoder side ) to large values ( rather than smaller values ) of the estimated flip probability .a schematic of our new decoding strategy is shown in fig .re - simulated results for the case ( unentangled state ) and ( entangled state ) discussed in section iii are plotted in fig .[ fig : improveda ] and fig .[ fig : improvedb ] . instead of estimating a noise level randomly from a truncated normal distribution ( in a range of to ) characterized by mean and variance , our new simulation results , shown in fig . 4 and fig . 5 , attempt to mimic the situation where , to any estimate of the total flip probability , an additional amount is added at the decoder side ( with an upper limit on the flip probability related to shannon s limit for the classical code rate ) . in the simulations shown , was set at .it is this new estimate that is referred to as the `` improved decoder '' in the figures .the quantum ldpc code used here has a block length of qubits with quantum code rate .the decoding process terminates if block errors are collected or the maximum iteration number is reached .we can clearly see that by weighting the estimated channel information higher ( overestimating ) , an improvement in performance of approximately is obtained both when using entangled quantum states and without entangled states .various other simulations were run in which was set at other ratios .however , we find that setting w at resulted in close - to - optimal performance for the specific code studied .it is straightforward to carry out a numerical fit to the curves shown in fig . 4 and fig . 5 , producing `` cost '' functions for the codes as a function of .
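a minimal sketch of the weighting strategy just described is given below . the weight w = 0.5 used in the toy example and the cap on the flip probability are placeholders introduced here for illustration ; they are not data taken from the simulations , and the toy cost curve merely stands in for a simulated bler - versus - w curve .

    def improved_flip_estimate(f_hat, w, f_limit):
        """overestimate the measured flip probability by a fraction w,
        capped at an upper limit f_limit."""
        return min(f_hat * (1.0 + w), f_limit)

    def best_weight(cost, weights):
        """pick the weight minimising a decoder cost function, e.g. a fitted
        block-error-rate curve as a function of w."""
        return min(weights, key=cost)

    if __name__ == "__main__":
        toy_cost = lambda w: (w - 0.5) ** 2 + 0.01   # stand-in cost curve
        ws = [i / 10 for i in range(0, 11)]
        w_star = best_weight(toy_cost, ws)
        print(w_star, improved_flip_estimate(0.05, w_star, f_limit=0.25))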
differentiating such cost curves with respect to will lead to the value .we conjecture that all quantum codes applied in misidentified depolarizing channels will show similar performance to that shown here .the specific value of , however , will likely vary from code to code .such studies form part of our ongoing work in this area . in practical situations ,a trade - off between entanglement consumption and quantum ldpc code performance is an important aspect when partial channel information is available to the decoder . theoretically ,when the number of entanglement pairs used is very large , the performance should approach the case where perfect channel knowledge is known at the decoder side .however , a reduction in the number of required entanglement pairs could also yield near - optimal performance if a constant overestimation of the channel is utilized when estimating the quantum channel . ( figure [ fig : improveda ] , case a : the dashed lines plot the bler of the code and the solid line plots the qber of the code . ) ( figure [ fig : improvedb ] , case b : the dashed lines plot the bler of the code and the solid line plots the qber of the code . ) in this work we have investigated possible improvements in the decoding strategies of quantum ldpc decoders in the quantum depolarization channel .the importance of the channel mismatch effect in determining the performance of quantum low - density parity - check codes has very recently been shown to lead to an order of magnitude degradation in the qubit error performance . in this work we have illustrated how such a performance gap in the qubit error performance can be substantially reduced .the new strategies for quantum ldpc decoding we provided here are based on previous insights from classical ldpc decoders in mismatched channels , where an asymmetry in performance is known as a function of the estimated bit - flip probability .we first showed how similar asymmetries carry over to the quantum depolarizing channel .we then showed that when a weighted estimate of the depolarization coherence parameter towards larger values is assumed , a significant performance improvement of as much as was found .for the specific quantum code studied here we found that given a specific estimate of the channel flip probability , increasing that estimate by half as much again provided the most improved decoding performance .future work will further investigate asymmetric decoder performance in other quantum channels for which multiple parameters must be estimated in order to define the channel .we conjecture that all quantum channels which are misidentified , or for which only partial channel information is available , will benefit from similar decoding strategies to those outlined here. the use of pre - existing quantum entanglement between sender and receiver in the presence of asymmetric decoder performance will also be investigated in future work .the use of such pre - existing quantum entanglement is known to greatly expand the range of classical ldpc codes that can be `` re - used '' in the quantum setting .this work has been supported by the university of new south wales , and the australian research council ( arc ) . l. qi , g. chen , c. huijuan , t. kun , `` channel mismatch effect on performance of low density parity check codes , '' imacs multiconference on computational engineering in systems applications , beijing , 2006 .a. r. calderbank and p. w. shor , `` good quantum error - correcting codes exist , '' _ phys . rev .a _ , vol .1098 - 1105 , 1996 .a. m.
steane , `` error correcting codes in quantum theory , '' _ phys .rev . letters _ , vol .793 - 797 , 1996 .d. gottesman , `` class of quantum error - correcting codes saturating the quantum hamming bound , '' _ phys .a _ , vol .54 , pp . 1862 - 1868 , 1996. m. s. postol , `` a proposed quantum low density parity check code , '' _ arxiv : quant - ph/0108131v1 _ , 2001 .d. mackay , g. mitchison , and p. mcfadden , `` sparse - graph codes for quantum error correction , '' _ ieee transactions on information theory _ ,50 , pp . 2315 - 2330 , 2004 .d. poulin , j. p. tillich , and h. ollivier , `` quantum serial turbo codes , '' _ ieee transactions on information theory _ , vol .2776 - 2798 , 2009 .m. nielsen and i. chuang , `` quantum computation and quantum information , '' _ cambridge series on information and the natural sciences _ , cambridge university press , 2000 .e. knill , r. laflamme , a. ashikhmin , h. barnum , l. viola and w. h. zurek , `` introduction to quantum error correction , '' _ arxiv : quant - ph/0207170v1 _ , 2002 .p. tan and j. li , `` efficient quantum stabilizer codes : ldpc and ldpc - convolutional constructions , '' _ ieee transactions on information theory _ , vol .476 - 491 , 2010 .m. hagiwara , k. kasai , h. imai , and k. sakaniwa , `` spatially coupled quasi - cyclic quantum ldpc codes , '' _ ieee proc . on international symposium in information theory _638 - 642 , 2011 .k. kasai , m. hagiwara , h. imai , and k. sakaniwa , `` quantum error correction beyond the bounded distance decoding limit , '' _ ieee transactions on information theory _ , _arxiv:1007.1778v2 [ cs.it]_ , 2011 .a. fujiwara , `` quantum channel identification problem , '' _ phys .a _ , vol .63 , 042304 , 2001 . m. sasaki , m. ban , and s. m. barnett , `` optimal parameter estimation of a depolarizing channel , '' _ phys .a _ , vol .66 , 022308 , 2002 .a. fujiwara and h. imai , `` quantum parameter estimation of a generalized pauli channel , '' _ journal of physics a : mathematical and general _ , vol .29 , pp . 80938103 , 2003 . m. r. frey , a. l. miller , l. k. mentch , and j. graham , `` score operators of a qubit with applications , '' _ quantum information processing _, vol . 9 , pp .629 - 641 , 2010 .m. r. frey , d. collins , and k. gerlach , `` probing the qudit depolarizing channel , '' _ journal of physics a : mathematical and theoretical _ , vol .20 , 205306 , 2011 .
|
the importance of the channel mismatch effect in determining the performance of quantum low - density parity - check codes has very recently been pointed out . it was found that the qubit error performance degrades by an order of magnitude even if optimal information on the channel identification was assumed . however , although such previous studies indicated the level of degradation in performance , no alternate decoding strategies had been proposed in order to reduce the degradation . in this work we fill this gap by proposing new decoding strategies that can significantly reduce the qubit error performance degradation by as much as . our new strategies for the quantum ldpc decoder are based on previous insights from classical ldpc decoders in mismatched wireless channels , where an asymmetry in performance is known as a function of the estimated bit - flip probability . we show how similar asymmetries carry over to the quantum depolarizing channel , and show that when the estimate of the depolarization coherence parameter is weighted towards larger values , a significant performance improvement is found .
|
tools of social network analysis ( sna ) have been subject of interest for theoretical as well as empirical study of social systems .a social network is a collection of people or groups interacting with each other and displaying complex features .tools of sna provide quantitative understanding for the human interaction of collective behavior .considerable research has been done on scientific collaboration networks , board of directors , movie - actor collaboration network and citation networks .the use of network analysis not only provides a global view of the system , it also shows the complete list of interactions . in the world of sports , individual players interact with each other and also with the players in the opponent team .it is therefore important to study the effect of interactions on performance of a player . in recent yearsthere has been an increase in study of quantitative analysis of individual performance involving team sports .time series analysis have been applied to football , baseball , basketball and soccer . quantifying the individual performance or ` quality ' of a player in any sportis a matter of great importance for the selection of team members in international competitions and is a topic of recent interest .a lot of negotiations are involved in the process of team - selection .studies have focussed on non - linear modeling techniques like neural networks to rate an individual s performance .for example , neural networks techniques were used to predict the performance of individual cricketer s based on their past performance .earlier tools of neural networks were used to model performance and rank ncaa college football teams , predicting javelin flights to recognize patterns in table tennis and rowing .again , a model - free approach was developed to extract the outcome of a soccer match .it was also shown that the statistics of ball touches presents power - law tails and can be described by -gamma distributions . in recent years, the study of complex networks have attracted a lot of research interests .the tools of complex network analysis have previously been applied to quantify individual brilliance in sports and also to rank the individuals based on their performance .for example , a network approach was developed to quantify the performance of individual players in soccer .network analysis tools have been applied to football and brazilian soccer players .successful and un - successful performance in water polo have been quantified using a network - based approach .head - to - head matchups between major league baseball pitchers and batters was studied as a bipartite network .more recently a network - based approach was developed to rank us college football teams , tennis players and cricket teams and captains .the complex features of numerous social systems are embedded in the inherent connectivity among system components . social network analysis ( sna )provides insight about the pattern of interaction among players and how it affects the success of a team .this article points out that how topological relations between players help better understanding of individuals who play for their teams and thus elucidate the individual importance and impact of a player . 
in this paperwe apply the tools of network analysis to batsmen and bowlers in cricket and quantify the ` quality ' of an individual player .the advantage of network based approach is that it provides a different perspective for judging the excellence of a player .we take the case of individual performance of batsmen and bowlers in international cricket matches .cricket is a game played in most of the commonwealth countries .the international cricket council ( icc ) is the government body which controls the cricketing events around the globe .although icc includes member countries , only ten countries with ` test ' status - australia , england , india , south africa , new zealand , west indies , bangladesh , zimbabwe , pakistan and sri lanka play the game extensively .there are three versions of the game - ` test ' , one day international ( odi ) and twenty20 ( t20 ) formats .test cricket is the longest format of the game dating back to .usually it lasts for five days involving hours .shorter formats , lasting almost hours like odi started in and during late icc introduced the shortest format called t20 cricket which lasts approximately hours . batsmen and bowlers in cricket are traditionally ranked according to their batting and bowling average respectively .judged by the batting average , sir donald bradman ( with an average of ) is regarded as the greatest batsman of all times .the next best batting average of is held by graeme pollock . even though most of the records held by bradman has been eclipsed by modern day batsmen like sachin tendulkar , brian lara , graham gooch ,mohammad yusuf , bradman s legacy still survives and generates debate among fans about his greatness relative to recent players like sir vivian richards , brian lara or sachin tendulkar .the question thus naturally arises is whether batting average of batsmen ( or bowling average of bowlers ) are the best measure for judging the worth of a batsman ( or a bowler ) .it was shown that rankings based on average suffer from two defects - consistency of scores across innings and value of runs scored by the player .however one should also consider the quality of bowling as well .for example according to bradman himself , the greatest innings he ever witnessed was that of mccabe s innings of at sydney in 1932 .the reason being it came against douglas jardine s body - line attack , widely regarded as one of the fiercest bowling attacks . similarly runs scored against west indian bowlers like michael holding , joel garner , malcom marshall and andy roberts deserve more credit than runs scored against low bowling attack of bangaldesh or zimbabwe . on similar argumentsthe wicket of top - order batsman is valued more than the wicket of a lower - order batsman .if a bowler claims the wicket of bradman , lara , richards or tendulkar , he gets more credit than if he dismiss any lower - order batsman . under the usual ranking scheme based on bowling average , _george lohmann _ of england has the lowest ( best ) bowling average ( ) in test cricket .however bowlers like _ george lohmann _ played under pitch conditions favoring fast bowlers . hence batting ( or bowling ) averagedoes not serve as an efficient gauge for a batsman s ( or bowler s ) ability . against , this background , we propose a network based approach to quantify the ` quality ' of a batsman or bowler .the rest of the paper is presented as follows : in section 2 we propose the methods of link formation among the batsmen and bowlers . 
in section 3we discuss the results and we conclude in section 4 .we obtain data from the cricinfo website .the website contains the information of proceedings of all test matches played since and all odi matches from onwards .these include the runs scored by batsmen , wickets taken by bowlers , outcome of a game and also the information of the mode of dismissal of a batsman .we collect the data of player - vs - player for test cricket ( ) , odi cricket ( ) from the cricinfo website .the data of player - vs - player contains the information of runs scored by a batsman against every bowler he faced and how many times he was dismissed by the bowlers he faced .no information of player - vs - player is available for games played earlier than .we also collect the batting and bowling averages of players from the player s profile available in the cricinfo website . batting average of a batsmanis defined as the total number of runs scored by the batsman divided by the number of times he was dismissed .thus higher batting average reflects higher ` quality ' of a batsman .similarly , bowling average is defined as the number of runs given by the bowler divided by the number of wickets claimed by him .thus lower bowling average indicates higher ability of the bowler .this information is used to generate the network of interaction among bowlers and batsmen in cricket matches .cricket is a bat - and - ball game played between two teams of players each . the team batting first tries to score as many runs as possible , while the other team bowls and fields , trying to dismiss the batsmen . at the end of an innings ,the teams switch between batting and fielding .this can be represented as a directed network of interaction of batsmen ( ) and bowlers ( ) .every node in has a directed link to all nodes in , provided the batsman and bowler face each other .the performance of a batsman is judged by the ` quality ' of runs scored and not the number of runs scored .hence runs scored against a bowler with lower bowling average carries more credit than runs scored against a bowler of less importance .we introduce a performance index of a batsman ( ) against a bowler given by the following equation where is the batting average of the batsman against the bowler he faced and refers to the career bowling average of the bowler .mathematically , batting average of the batsman ( ) is given by the ratio where is the number of runs scored against a bowler and is the number of times he was dismissed by the bowler . 
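a short python sketch of the batsman - versus - bowler network construction described above is given below . since the displayed formula for the performance index is not reproduced in the text , the sketch assumes it is the head - to - head batting average divided by the bowler s career bowling average , which is consistent with the stated behaviour that a low bowling average inflates the index ; the toy records are invented for illustration .

    import networkx as nx

    def batting_performance_index(runs_vs_bowler, dismissals_by_bowler, bowler_career_avg):
        """assumed form of the index: head-to-head batting average divided by the
        bowler's career bowling average (a lower bowling average gives more credit)."""
        head_to_head_avg = runs_vs_bowler / max(dismissals_by_bowler, 1)
        return head_to_head_avg / bowler_career_avg

    def build_bowler_to_batsman_network(records):
        """records: iterable of (bowler, batsman, runs, dismissals, bowler_career_avg);
        a directed edge bowler -> batsman carries the performance index as weight."""
        G = nx.DiGraph()
        for bowler, batsman, runs, dismissals, avg in records:
            G.add_edge(bowler, batsman,
                       weight=batting_performance_index(runs, dismissals, avg))
        return G

    if __name__ == "__main__":
        toy = [("bowler_a", "batsman_x", 120, 2, 25.0),
               ("bowler_b", "batsman_x", 40, 4, 32.0),
               ("bowler_a", "batsman_y", 15, 3, 25.0)]
        G = build_bowler_to_batsman_network(toy)
        # weighted in-degree = in-strength of each batsman
        print(dict(G.in_degree(weight="weight")))

the same construction with the edge direction reversed and the quality - of - dismissal index as weight yields the in - strength of bowlers .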
and are evaluated for test matches played between and and odi ( ) ] hence , if the career bowling average of a bowler is low ( indicating a good bowler ) , increases indicating that the batsman scored runs against quality opposition .we generate a weighted and directed network of bowlers to batsmen where weight of the link is given by .the network generated is thus based on the directed interaction of and .for the weighted network the in - strength is defined as where is given by the weight of the directed link .so far , we have concentrated on the performance index of batsmen since .although the data for player - vs - player is not available for dates earlier than , one could quantify the overall performance of a bowler based on the dismissal record of batsmen .for example , the wicket of a top - order batsman always deserve more credit than the wicket of a tail - ender .thus the quality of dismissal serves as a measure for the greatness of a bowler .we define the quality index of bowler ( ) as where is defined as the number of times a batsman was dismissed by a particular bowler , refers to the career batting average of a batsman and indicates the career bowling average of a bowler .thus a greater value of indicates a better rank of a bowler .as before , we construct weighted and directed networks , this time the directed link pointing towards the bowlers .we evaluate the in - strength of the bowlers , which serves as a quantification of the ` quality ' of a bowler .the manner in which the game is played does nt allow us to compare the relative dominance of one batsman over another batsman or one bowler over another bowler . unlike in tennis , where each player has to compete directly with the opponent , in cricket a batsman is pitted against a bowler .hence it is very difficult to judge the relative superiority of a batsman ( bowler ) over another batsman ( bowler ) .the in - strength of a bowler or batsman conveys the ` quality ' of dismissal by a bowler or the ` quality ' of runs scored by a batsman .however , it does nt reflect the relative importance or popularity of one player over other players . to address this issue , in this sectionwe generate one - mode projected network between batsmen who face the same bowler ( or bowlers who dismiss the same batsman ) in which the links are generated according to the method of gradient link formation .traditionally a gradient network is constructed as follows .consider a substrate network .each node in the network is assigned with a random number which describes the ` potential ' of the node .gradient network is constructed by directed links that point from each node to the nearest neighbor with highest potential . herewe take a slightly different route to construct the projected network . in figure[ fig : network00 ] we demonstrate the generation of the one - mode projected network according to the gradient scheme of link formation .first we consider the substrate network of batsmen and bowlers according to the dismissal records .the thickness of the edge is proportional to .thus if batsman is dismissed by bowlers a and c , then bowlers a and c are connected .we evaluate the in - strength of the nodes a and c. the in - strength acts a ` potential ' for each bowler .we construct gradient links between two bowlers along the steepest ascent , where the weight of the directed link is the difference of the in - strength of two nodes . 
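the gradient link formation just described can be sketched as follows . the construction below links every pair of contemporary bowlers who dismissed the same batsman , directed from the lower - to the higher - potential node with the in - strength difference as weight ; whether the original construction keeps all such ascending links or only the single steepest one per node is left open here , and the contemporaneity test is supplied by the user .

    import networkx as nx

    def gradient_projection(dismissed_by, in_strength, contemporaries):
        """one-mode projected network of bowlers who dismissed the same batsman.
        dismissed_by: dict batsman -> set of bowlers who dismissed that batsman.
        in_strength:  dict bowler -> in-strength, used as the node 'potential'.
        contemporaries: predicate deciding whether two bowlers overlapped in time.
        each directed link points from the lower- to the higher-potential bowler
        and carries the difference of their in-strengths as weight."""
        proj = nx.DiGraph()
        proj.add_nodes_from(in_strength)
        for bowlers in dismissed_by.values():
            for u in bowlers:
                for v in bowlers:
                    if u != v and contemporaries(u, v) and in_strength[v] > in_strength[u]:
                        proj.add_edge(u, v, weight=in_strength[v] - in_strength[u])
        return proj

    if __name__ == "__main__":
        dismissed_by = {"batsman_x": {"a", "c"}, "batsman_y": {"b", "d"}}
        strength = {"a": 3.0, "b": 1.0, "c": 5.0, "d": 2.0}
        always = lambda u, v: True
        G = gradient_projection(dismissed_by, strength, always)
        print(list(G.edges(data=True)))   # e.g. a -> c with weight 2.0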
thus weighted and directed links are formed between two bowlers if they dismiss the same batsman .we repeat this procedure for all the nodes in the substrate network and a resultant one - mode projected network is formed .additionally we introduce a constraint , in which two bowlers are linked only if they are contemporary .thus b and d are not linked in the gradient scheme since they are not contemporary players .we apply the same method of gradient link formation on batsmen , where the weight of each link in the substrate network is proportional to the .the weight of a gradient - link is given as where are the in - strengths of the two nodes and .the projected network thus highlights the relative importance of a player over others .we construct the substrate network of batsmen and bowlers for test cricket and odi cricket and construct the projected network of players .next we apply the pagerank algorithm on the resultant projected network and evaluate the importance of each player . in figure[ fig : network0](a ) we show a subgraph of the substrate network of batsmen and bowlers in odi ( ) .the projected network of bowlers is generated if they dismiss the same batsman ( _ wasim akram _ ) ( see figure [ fig : network0](b ) ) . in the same way one can construct the projected network of batsmen who are dismissed by _wasim akram_. we quantify the importance or ` popularity ' of a player with the use of a complex network approach , evaluating the pagerank score originally developed by brin and page .mathematically , the process is described by the system of coupled equations where is the weight of a link and = is the out - strength of a node . is the pagerank score assigned to player and represents the fraction of the overall `` influence '' sitting in the steady state of the diffusion process on vertex ( ) . $ ] is a control parameter that awards a ` free ' popularity to each player and is the total number of players in the network .the term represents the portion of the score received by node in the diffusion process obeying the hypothesis that nodes redistribute their entire credit to neighboring nodes . the term stands for a uniform redistribution of credit among all nodes .the term serves as a correction in the case of the existence of nodes with null out - degree , which otherwise would behave as sinks in the diffusion process .it is to be noted that the pagerank score of a player depends on the scores of all other players and needs to be evaluated at the same time . to implement the pagerank algorithm in the directed and weighted network , we start with a uniform probability density equal to at each node of the network .next we iterate through eq .( [ eq : pg ] ) and obtain a steady - state set of pagerank scores for each node of the network . finally , the values of the pagerank score are sorted to determine the rank of each player .according to tradition , we use a uniform value of .this choice of ensures a higher value of pagerank scores .in general it is difficult to get analytical solutions for eq .( [ eq : pg ] ) .although in the simplest case of a single tournament an analytical solution for values of was determined , in cricket such a situation is not possible since it is a team game .the values of are evaluated recursively by setting .then we iterate eq .( [ eq : pg ] ) until a steady - state set of values is reached . in this section , we explore the in - strength distribution of the weighted and directed networks .
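a power - iteration sketch of eq . ( [ eq : pg ] ) is shown below , with the three terms of the equation spelled out : the uniform `` free '' popularity term , the diffusion term along out - going weights , and the correction for nodes with zero out - strength . the value q = 0.85 is only the conventional damping choice and the convergence tolerances are placeholders ; networkx s built - in nx.pagerank gives a comparable result on the same weighted digraph .

    def pagerank_scores(G, q=0.85, tol=1e-10, max_iter=1000):
        """power iteration for a weighted, directed networkx graph G."""
        nodes = list(G)
        N = len(nodes)
        out_strength = {u: sum(d["weight"] for _, _, d in G.out_edges(u, data=True))
                        for u in nodes}
        pr = {u: 1.0 / N for u in nodes}          # start from a uniform density
        for _ in range(max_iter):
            dangling = sum(pr[u] for u in nodes if out_strength[u] == 0.0)
            new = {}
            for i in nodes:
                diffused = sum(pr[j] * d["weight"] / out_strength[j]
                               for j, _, d in G.in_edges(i, data=True)
                               if out_strength[j] > 0.0)
                new[i] = (1.0 - q) / N + q * diffused + q * dangling / N
            if sum(abs(new[u] - pr[u]) for u in nodes) < tol:
                return new
            pr = new
        return pr

sorting the returned dictionary by value then gives the pagerank ranking of the players .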
the in - strength of a node is an indication of the performance of an individual against the opponent team member .thus a greater value of in - strength indicates a better the performance of the individual . in fig[ fig : network2 ] we plot the cumulative in - strength distribution of batsmen and bowlers in test cricket and odi cricket .the in - strength distribution reflects the topology of the network and how the players interact with each other . as show in fig[ fig : network2](a ) , the in - strength distribution decays slowly for smaller values of in - strength ( ) . for values higher than ,the in - strength distribution decays at a much faster rate .this is in contrast with the in - strength distribution of bowlers ( fig [ fig : network2](b ) ) , where the decay is slow .the reason being that all bowlers have to bat once the top order batsmen have been dismissed , thus establishing more links for the batsmen .however not all batsmen are specialist bowlers , which leads to low connections for bowlers . as mentioned above the in - strength of a batsman reflects the performance of a batsman in terms of quality of runs scored . in table [ tab : table1 ] we list the top batsmen in test cricket between and .the batsmen are ranked according to their in - strength .we observe that _ k. c. sangakkara _ of sri lanka occupies the top spot followed by india s _ s. r. tendulkar _ with australia s _r. t. ponting _ and south africa s _j. h. kallis _ occupying the third and fourth spot respectively ._ r. dravid _ of india occupies the fifth position .we compare the in - strength rank with the pagerank score and batting average of batsmen for runs scored between and .additionally we list the best ever cricket rating received by a batsman between and . * in figure[ fig : corr1](a , b ) we compare the correlation of ranks obtained from in - strength and pagerank algorithm with batting average .we observe that ranks obtained from batting average is positively correlated with in - strength rank and pagerank score*. judged by the batting average and the icc points we observe that _ b. c. lara _ of west indies emerge as the most successful batsman in test cricket between and .similarly australia s _ r. t. ponting _ averages more than_ s. r. tendulkar _ and _ k. c. sangakkara_. however both _k. c. sangakkara _ and _ s. r. tendulkar _ accumulated runs against better bowling attack . in table[ tab : table2 ] we list the top batsmen in odi cricket ( ) .* as shown in figure [ fig : corr1](c , d ) we observe that ranks obtained from batting average is positively correlated with in - strength rank and pagerank score .the top positions according to in - strength rank or pagerank do not correspond with that of batting average or icc rankings . * again _k. c. sangakkara _ emerge as the most successful batsman followed by australia s_ r. t. ponting _ and india s _ s. r. tendulkar_. even though _ s. r. tendulkar _ averages more than his predecessors and also received the highest icc points , both _ k. c. sangakkara _ and _ r. t. ponting _ scored runs against better bowling attack . .note that this ranking is sensitive to change in information of player - vs - player once the information prior to the year is available in the cricinfo website .* we rank the performance of all bowlers in test cricket ( ) in table [ tab : table3 ] , and identify bowlers with highest influence .we observe that the bowlers ranked by the average are different from that obtained from sna . 
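the agreement between two ranking schemes , of the kind reported around figures [ fig : corr1 ] [ fig : corr3 ] , can be summarised by a spearman rank correlation ; a minimal sketch ( with dictionaries of per - player scores supplied by the user ) is given below .

    from scipy.stats import spearmanr

    def rank_agreement(score_a, score_b):
        """spearman rank correlation between two scoring schemes given as
        dicts player -> score (e.g. in-strength vs. batting average)."""
        players = sorted(set(score_a) & set(score_b))
        rho, pval = spearmanr([score_a[p] for p in players],
                              [score_b[p] for p in players])
        return rho, pval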
in figure[ fig : corr2](a , b ) we compare the ranking obtained from in - strength and pagerank algorithm with bowling average .we observe a low positive correlation between the different ranking schemes .* we observe that according to in - strength values sri lanka s _m. muralitharan _ emerge as the most successful bowler in the history of test cricket ( ) followed by _s. k. warne _ ( aus ) , _g. d. mcgrath _ ( aus ) , _ a. kumble _( ind ) and _ c. a. walsh _ ( wi ) ( see table [ tab : table3 ] ) .as before we generate gradient network of bowlers and apply the pagerank algorithm .it is interesting to note that the top five bowlers according to pagerank score are _m. muralitharan _( sl ) , _ s. k. warne _ ( aus ) , _g. d. mcgrath _ ( aus ) , _ f. s. trueman _ ( eng ) and _ c. a. walsh _ ( wi ) ( see table [ tab : table3 ] ) .thus according to quality of ` dismissal ' and relative ` popularity ' of bowlers _m. muralitharan _ emerge as the most successful bowler in test cricket .interestingly , _ m. muralitharan _ is the highest wicket - taker in test cricket. his success could be _ a posteriori _ justified by his long and successful career spanning years ( between and ) . during his entire career _m. muralitharan _ dismissed batsmen ( highest in test cricket ) which included the likes of _s. r. tendulkar _( dismissed times ) , _ r. dravid _( dismissed times ) and _ b. c. lara _ ( dismissed times ) .in addition to this he holds the record of maximum number of five wickets in an innings ( times ) and ten wickets in a match ( times ) .we also observe that _s. k. warne _ , the second best bowler in test cricket has second highest number of dismissals ( ) to his credit .both these bowlers had extremely long and successful careers spanning almost two decades .australia s _g. d. mcgrath _ , who has been considered one of the best fast bowlers in cricket holds a better average than that of his immediate predecessors .however his in - strength rank and pagerank score indicates that his quality of dismissal were not better than _ muralitharan _ or _warne_. this leads to the possible question - are bowling averages the best indicator of a bowler s ability ? . in our all time top listwe observe that england s _s. f. barnes _ has the best bowling average of and highest icc points of among all the bowlers ( as listed in table [ tab : table3 ] ) . however like _george lohmann _ ,_ s. f. barnes _ too enjoyed favorable pitch conditions .the batsmen playing in such pitches usually averaged lower than the recent batsmen .hence for players like _s. f. barnes _ , the is low which in turn affects his in - strength . however , his pagerank score his higher than most of the modern age bowlers indicating his relative ` popularity ' or supremacy over other bowlers .a similar situation is seen with pakistan s _ imran khan_. although his in - strength is lower than that of _ wasim akram _ or _d. k. lillee _ , his pagerank score is higher than most of his predecessors .rankings based on sna show little agreement with traditional methods of performance evaluation . in odi history ( )too , sri lanka s _ m. muralitharan _ leads the list of top bowlers , followed by pakistan s _ wasim akram _ ,australia s _ g. d. mcgrath _ , pakistan s _ waqar younis _ and south africa s _s. m. pollock_. pagerank scores reveal that _ m. muralitharan _ is the most successful bowler followed by_ wasim akram _ ( pak ) , _ waqar younis _( pak ) , _g. d. mcgrath _ ( aus ) and_b .lee _ ( aus ) .although _ g. d. 
mcgrath _ has a slightly better average than _ m. muralitharan _ , he falls short of the latter in terms of in - strength , pagerank score and icc points . again , judged by the number of dismissals , _m. muralitharan _ heads the list with wickets , with _wasim akram _ and _ waqar younis _ occupying the second and third position respectively .there are few surprises in the list .india s _ a. b. agarkar _ is placed above in comparison to _ n. kapil dev _ ( ind ) , _ c. e. l. ambrose _ ( wi ) or _ c. a. walsh _ ( wi ) whom cricket experts consider as better bowlers .however , what goes in favor of _b. agarkar _ is the ` quality ' of wickets he took .thus even though he went for runs and did nt have a long career , he was able to dismiss most of the batsmen with good average . * in figure[ fig : corr2](c , d ) we compare the ranks obtained from in - strength and pagerank with bowling average .we observe that ranking schemes obtained from pagerank ( and in - strength ) are anti - correlated with the bowling average .* this is not surprising in the sense that bowling average is not a proper way of judging a player s performance .also in the odis , there has been a practice of bringing in part - time bowlers who have low - averages .this is paradoxical in the sense that it indicates part - time bowlers are better than the regular bowlers .we find that our scheme provides sensible results that are in agreement with the points provided by icc.*the rankings provided by icc take in account several factors like wickets taken , quality of pitch and opposition , match result etc .however , due to its opaqueness , icc s methodology is incomprehensible .our approach is both novel and transparent .for comparison , we choose the top bowlers according to icc rankings bowlers in odi and test . ] and compare them with in - strength rank and pagerank .figure [ fig : corr3 ] shows that strong correlation exists between ranks obtained by network based tools and that provided by icc*. this demonstrates that our network based approach captures the consensus opinions .finally we propose a linear regression model for in - strength that takes into consideration known ranking schemes like pagerank , batting ( bowling ) average and icc ranking , where is the in - strength , is the pagerank of a player . represents the batting ( bowling ) average of player and is a dummy variable which takes the value if a player is placed in the top of icc player ranking , and otherwise .as shown in table [ table_regression_1 ] , we observe that for bowlers in test cricket ( ) , bowling average has no significant effect for in - strength , thus justifying the absence of correlation observed earlier in figure [ fig : corr2](c , d ) .to summarize , we quantified the performance of batsmen and bowlers in the history of cricket by studying the network structure of cricket players . under the usual qualification of balls bowled , _george lohmann _ emerge as the best bowler .again , if we apply the qualification of at least dismissals , then _ c. s. 
marriott _ is the best bowler .these constraints are arbitrary and hence gauging bowler s potential according to bowling average is not robust .the advantage of network analysis is that it does nt introduce these ` constraints ' and yet provides consistent results .in such situation , in - strength and pagerank score stands out as an efficient measure of a bowler s ability .we would like to mention that although our study includes the quality of bowling attack or quality of dismissal of a batsman , we do nt consider the fielding abilities or wicket - keeping abilities of the fielders .it is not possible to quantify the fielding ability of a fielder , other than by the number of catches , which is not a true measure of a fielder s ability .some fielders are more athletic than others .slip fielders always have a higher chance of taking a catch than others .again , a batsman deserves more credit if he is able to beat athletic fielders like jonty rhodes , ricky ponting or yuvraj singh .secondly , a bowler s ability is also judged by the nature of wicket . an excellent bowling performance ona batsman - friendly pitch holds greater merit than that on pitches which help bowlers .similarly , scoring runs on difficult tracks always gets more attention than scoring runs on good batting tracks . in our analysis ,due to non - availability of these informations , we did nt include these ` external factors ' in our analysis .nevertheless a network based approach could address the issue of relative performance of one player against other .our study shows that sna can indeed classify bowlers and batsmen based on the quality of wickets taken or runs scored and not on the averages alone .team selection is extremely important for any nation .sna could be used as an objective way to aid the selection committee .a proper analysis of a player s domestic performance would help his(her ) selection in the national squad .additionally , owners of the cash rich indian premier league ( ipl ) teams spend lots of money to hire players on a contract basis .the owners along with the coaches can identify talents based on the past ` performance ' of a player .potentially our study could identify the greatest batsman of all time , based on a complete player - vs - player information , which at present we are unable to identify due to non - availability of data .our analysis does nt aim at replacing the existing system of icc player ranking , which are based on expert opinions and has been optimized and almost perfected for many years .it serves as an alternate method to refine the existing ranking scheme of players and quantify the performance of a player .there are many additional features that could be included in the networks .for example , the networks in our analysis are static. a dynamic version of the network can be constructed following the ball - by - ball commentary and obtain a detailed analysis .again , for batsmen there are players who score differently in different innings .there are leadership effects as well .some players perform well under different skippers .bowlers are categorized into different categories based on their bowling style - pacers , medium pacers and spinners .quantifying the ` style ' of bowling and effect of pitch conditions thus remains an open area of research . a rigorous analysis backed by a complete dataset of player - vs - playercould very well answer the question - was _ sir don bradman _ the greatest ever ? 
in our quest to judge the most successful bowler in the history of cricket , one fact stands out : _m. muralitharan _ remains _ il capo dei capi_.

table : sizes of the networks .
    number of bowlers , test ( ) : 2616
    number of bowlers , odi ( ) : 1914
    number of batsmen , test ( ) : 599
    number of batsmen , odi ( ) : 1027

table [ table_regression_1 ] : regression models for in - strength ; columns give coefficient ( standard error ) , with entries marked * as in the source .

model for bowlers in test ( ) :
    intercept         * -8.16      ( 1.12 )
    pagerank          * 128093.4   ( 1324.76 )
    bowling average     0.042      ( 0.025 )
    dummy             * 24.05      ( 1.12 )
    r - squared :
model for bowlers in odi ( ) :
    intercept         * -2.41      ( 0.438 )
    pagerank          * 37766.47   ( 312.61 )
    bowling average   * 0.036      ( 0.011 )
    dummy             * 16.98      ( 1.73 )
    r - squared :
model for batsmen in test ( ) :
    intercept         * -1.78      ( 0.822 )
    pagerank          * 825.39     ( 50.37 )
    batting average   * 0.289      ( 0.036 )
    dummy             * 26.55      ( 1.496 )
    r - squared :
model for batsmen in odi ( ) :
    intercept           -0.029     ( 0.53 )
    pagerank          * 1005.56    ( 48.39 )
    batting average   * 0.159      ( 0.022 )
    dummy             * 30.122     ( 1.203 )
    r - squared :
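an ordinary - least - squares sketch of the regression model reported in table [ table_regression_1 ] is shown below ; the synthetic data in the usage example are invented placeholders and do not reproduce the coefficients in the table .

    import numpy as np
    import statsmodels.api as sm

    def fit_strength_model(in_strength, pagerank, average, top_dummy):
        """ols fit of: in_strength ~ intercept + pagerank + average + icc-top dummy.
        inputs are equal-length 1-d arrays of per-player values."""
        X = sm.add_constant(np.column_stack([pagerank, average, top_dummy]))
        return sm.OLS(np.asarray(in_strength), X).fit()
        # .params, .bse, .pvalues and .rsquared of the result correspond to the
        # coefficient, standard error, p-value and r-squared columns of the table

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 200
        pr = rng.random(n) * 1e-3
        avg = rng.normal(30, 8, n)
        dummy = (rng.random(n) < 0.1).astype(float)
        y = -2.0 + 4.0e4 * pr + 0.05 * avg + 20.0 * dummy + rng.normal(0, 1, n)
        print(fit_strength_model(y, pr, avg, dummy).summary())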
|
quantifying individual performance in the game of cricket is critical for team selection in international matches . the number of runs scored by batsmen and the wickets taken by bowlers serve as a natural way of quantifying the performance of a cricketer . traditionally the batsmen and bowlers are rated on their batting or bowling average respectively . however , in a game like cricket , the manner in which one scores the runs or claims a wicket is always important . scoring runs against a strong bowling line - up or delivering a brilliant performance against a team with a strong batting line - up deserves more credit . a player s average is not able to capture this aspect of the game . in this paper we present a refined method to quantify the ` quality ' of runs scored by a batsman or wickets taken by a bowler . we explore the application of social network analysis ( sna ) to rate the players of a team on their performance . we generate a directed and weighted network of batsmen and bowlers using the player - vs - player information available for test cricket and odi cricket . additionally we generate networks of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket - test ( ) and odi ( ) . our results show that _ m muralitharan _ is the most successful bowler in the history of cricket . our approach could potentially be applied in domestic matches to judge a player s performance , which in turn paves the way for a balanced team selection for international matches . social network analysis , gradient networks , sports , cricket .
|
the theoretical modelling of large - strain elasto - plasticity for polycrystalline materials poses many challenges and , despite its great importance , no fully satisfactory mathematical theory has emerged so far .works in this direction include for instance as well as the monographs .some major issues in the quest for such a theory are the following : first , elasto - plasticity naturally goes beyond what can be modeled in a traditional continuum mechanics framework .rather , a body undergoing plastic flow does _ not _ remain a continuum as the material is internally `` ripped and torn '' , even if this is not macroscopically observable .even if one is not interested in precisely describing the microscopic origins of plastic flow ( such as dislocation movement in metals , see for instance ) , some aspects of the microscopic situation need to be taken into account in order to derive a consistent theory .more advanced mathematical structures such as lie groups and lie algebras , can be very effective in describing the relationships between macroscopic and microscopic phenomena , but are rarely used in the engineering literature ( but see ) .second , when deriving the full dynamics of the material , it is often unclear how the solid - like elastic deformations and the fluid - like plastic deformations interact . in particular, their relative speed plays an important role , but this does not seem to be used at present for plasticity modelling .third , elasto - plastic flow is rate - dependent ( viscous ) in reality , but only slightly so , and thus rate - independent ( quasi - static ) approximations are used more often than not .however , this simplification creates the serious problem that the regularity of solution processes can only be low in general since _ jumps _ ( relative to the `` slow '' time scale ) can occur in rate - independent processes at least there is no obvious mechanism preventing this _ a - priori_. this creates the need to specify the behavior of the evolution on the jump transients , which is important for the global energetics , but rarely considered . the present work develops , from first principles , a model of macroscopic elasto - plasticity that aims to addresses the above issues , and then to analyze it .our approach is based on the following key principles : 1 .macroscopic deformations are modeled as driven by microscopic slips ( e.g. slips induced by dislocation movement ) , an idea taken from single - crystal plasticity .this idea can be abstractly expressed using the theory of lie groups and their lie algebras : a matrix from a lie group represents the macroscopic plastic distortion and hence ( part of ) the internal state of the material .the plastic flow , however , is specified on the level of the associated `` microscopic '' lie algebra ( i.e. the tangent space at the identity matrix ) as the sum of microscopic _ drifts _ ( slip rates ) acting as _ infinitesimal generators _ of the flow .the main advantage of the lie - theoretic point of view expressed in this work is that on the microscopic lie algebra level we are dealing with a _ linear _ structure .elastic movements are assumed to be infinitely fast relative to the plastic ones , which is quite realistic , so we base the model on the postulate that the system minimizes over all admissible _ elastic _ , but not plastic , deformations .the plastic dynamics are first modeled as _ rate - dependent _, i.e. _ viscous_. 
if the system is spontaneously pushed out of a stable ( rest ) state , it _ relaxes _ to stability by following an evolutionary flow rule . unlike other models ( see below for comparisons ) , here we do not rely on minimization over irreversible movements , which is thermodynamically questionable . in particular ,we work with the realistic _ local _stability ( yield ) condition in the slow - loading limit .4 . the combined elasto - plastic dynamics are modeled based on a two - stage time - stepping scheme , which alternates between two `` fundamental motions '' : purely elastic minimization and elasto - plastic relaxation , the latter exchanging elastic for plastic distortion without modifying the total deformation .this two - stage approach allows for a very clean modeling without any ambiguity as to which test functions should be considered in the principle of virtual power .5 . via a slow - loading limit passagewe arrive at the limit `` two - speed '' formulation , which incorporates two time scales : with respect to the `` slow '' time , the formulation is rate-_independent_. on jump transients , however , we retain the possibility of rate - dependent evolution ( or a mixture of rate - dependent and rate - independent evolution ) with respect to the `` fast '' time .6 . we formulate the whole model in the reference frame , but point out how the common formulation with structural ( intermediate ) tensors is essentially equivalent via lie group adjoints .we do not use the idea of an `` intermediate ( structural ) space '' since one can not consistently define `` intermediate points '' . if denotes the total deformation of our specimen , then the commonly used krner lee decomposition splits the deformation gradient into elastic and plastic _ distortions _ ; since are not in general curl - free , they might not be deformation gradients themselves .we refer to for justifications and various other aspects of this decomposition .note in particular that if , then can not be the identity map ; this expresses the physical constraint that the elastic deformation has to close the gaps opened by the plastic flow so as to restore a macroscopic continuum .the multiplicative krner lee decomposition is at the root of many mathematical challenges in large - strain theories of elasto - plasticity .not only is it incompatible with our traditional linear function spaces , the splitting of into and furthermore is clearly not unique .the ensuing ambiguity ( often called the `` uniqueness problem '' in the literature ) is a big obstacle when trying to develop a useful mathematical theory .for instance , if for the moment we consider the macroscopic elasto - plastic flow to be divided into a number of time intervals , , then in every such interval we have potentially both an elastic and a plastic distortion , say .macroscopically , the total plastic distortion should be .however , if we let ( the interval size going to zero ) , even this `` natural '' setup seems to lead to the need for infinite products , which is not feasible due to the non - commutativity of matrix multiplication .for example , an approach using matrix logarithms to transform products into sums runs into trouble since the involved matrices are not necessarily symmetric and positive definite , whereby the matrix logarithm might not be uniquely definable .we here posit that the plastic distortion can only be determined from the internal state of the material and its flow in time needs to be specified through a differential equation , see some 
discussion on this point .this approach removes all ambiguity in the krner lee decomposition .another feature of the present model , reminiscent of recent work by reina & conti , is that we study the structure of microscopic slips via a reasoning with functions of bounded variation ( see ) whose derivative contains an absolute continuous ( elastic ) part and a singular ( plastic ) part . on the microscopic levelthese two parts must decompose _ additively _ since they take place in different parts of the material ( bulk and surface parts , respectively ) . in the macroscopic `` homogenized '' theory , however , this separation is lost and we keep track of the plastic distortion rate ( speed ) via an equation in the `` microscopic '' lie group .previous mathematical theories for large - strain elasto - plasticity seem to fall into one of two categories : the first , _ energetic _ , approach was introduced in and applied to nonlinear elasto - plasticity in .it starts from a time discretization and assumes that the system at every time step minimizes the potential energy plus any dissipational cost that may be incurred by jumping to the target state .in the limit , a global form of the stability ( yield ) condition and an energy balance can be derived ( these two conditions alone , however , can have more solutions than the original time - discrete scheme , see ) . the global , dissipative minimization in the time - stepping scheme assumes infinite foresight of the system since one may jump to an ( energetically ) far - away state , even if there is a potential energy barrier in the way , so the system can in fact jump `` too early '' and `` too far '' . moreover , from a physical perspective , minimization over _ irreversible _ ( dissipative ) movements seems to be a violation of the _ second law of thermodynamics_. 
nevertheless , if these assumptions are a good approximation to the physical situation at hand , then a very mature theory is available ; the current state - of - the - art is presented in the recent monograph , also see for a global variational principle in this context .a more recent approach to address the _ under - specification _ of the system s behavior on jump transients is to add a _ vanishing viscosity _term .besides the question of what shape of viscosity one should use ( which , however , sometimes turns out to be unimportant ) , the mathematical analysis here is still unfinished and only some special cases of elasto - plastic evolutions can be fully analyzed , usually without infinite - dimensional elastic variables .our approximation scheme is closer in spirit to the vanishing viscosity approach and in some cases could be equivalent .however , we try to argue from first principles , using only a time rescaling together with the postulates outlined above .this paper is organized as follows : after a detailed explanation of the modeling in sections [ sc : kinematics][sc : evolution ] , we then in section [ sc : limit ] embark on a detailed , yet mathematically non - rigorous , investigation into the slow - loading limit passage .these calculations shed some light on the total energetic / dissipative behavior of the system and form the basis of a full mathematically rigorous analysis .such an analysis is the subject of future work , but at present many formidable technical challenges remain .we start at the microscopic level and with the non - continuum origins of plastic deformations .then we consider the macroscopic kinematics , which are linked to the microscopic picture through some basic lie group theory .consider an open and bounded `` microscopic '' reference domain , , undergoing plastic deformation .assume without loss of generality that and that there is a ( restricted ) hyperplane , , defined through its normal vector , , that splits into the two parts we assume that the microscopic plastic deformation manifests itself through a translation of in a referential ( relative to the undeformed material ) direction with and ( perpendicular to ) and with speed at time .the situation is illustrated in figure [ fig : microslip ] .hence , the corresponding * simple slip * expressing the deformation after time is if such a simple slip were to occur in a macroscopic state , it would be called a _ shear band _ ,i.e. a shear motion over an infinitely thin plane , but we here assume it takes place on a microscopic scale and might not be visible macroscopically .the space derivative of in the bv - sense is the matrix - valued _ measure _ where is the -dimensional lebesgue measure ( the ordinary -dimensional volume ) and denotes the -dimensional hausdorff measure ( the -dimensional area ) on and is the identity matrix .consequently , is a * ( special ) function of bounded variation*. we refer to for information on this important class of functions and to for a justification of the kröner lee decomposition based on a similar argument .a `` differentiated '' view on this simple slip is the following : for the * plastic distortion * ( here there is no elastic distortion ) , we may derive the following differential equation for t > 0 , with initial condition p(0) = i , which we understand as holding for almost every x and every t > 0 , together with p(0,x) = i for almost every x .
in particular , at every time , a material vector from the tangent space to at a point in is changed with the rate . notice that the referential vector is transformed into a `` structural '' vector .this corresponds to the assumption that the slip direction is given with respect to the material frame , which is also natural since the hyperplane is specified in the material frame ( there are no `` structural points '' as detailed below ) and we need .however , it is also possible to construct a `` structural '' formulation , but this leads to an equivalent theory , see section [ ssc : ref_struct ] .the above linear matrix differential equation can be solved explicitly using the matrix exponential function ( notice that in this special case all `` generator '' matrices commute and since ) : as expected .one can generalize the preceding discussion as follows : we postulate that also in the general situation , the plastic distortion at time is given via the * plastic distortion equation * where is a -matrix field ( or even a measure ) called the * ( referential ) plastic drift*. in the case of the simple slip above , in metals undergoing plastic distortion , arises from ( activated ) crystal defects that move around in the material , causing slip .since by definition is a matrix in the _ referential _ frame , the equation simply expresses that the referential drift , given by , transformed to the plastically distorted configuration is equal to the change in the latter .often , one considers only plastic drifts that are a superposition of simple slips ( for example all activated slip systems in crystal plasticity ) .all these slips have trace - free generators ( with ) and so in this case we require to be _ deviatoric _ : from the general formula and , we get hence in this case the * plastic incompressibility * holds . it should be remarked that if is a sum of constant - in - time drifts ( e.g. several different simple slips ) , , then the evolution of is given as this is , however , _ not _ equal to , because in general the matrices do not commute . to compute this expression , one would need to expand using the baker - campbell - hausdorff formula , which involves the commutator brackets ( a short numerical illustration of this non - commutativity is given at the end of this subsection ) .thus , we obtain the corresponding integral identity over ; as was arbitrary , this is equivalent to our previous force balance . from now on we also assume ( one can also derive a more complex theory without this ) : _ inertial forces can be neglected . _ then , we get the * equilibrium equation * = f . now assume , whence on each subdomain the external power expended on is zero . on the other hand , for the internal power we obtain where 1 . is the generalized stress power - conjugate to ( recall that is the cotangent space to at , that is , the dual space to the tangent space ) , and 2 . is the generalized stress power - conjugate to .the two generalized stresses are the stresses that the system has to work against to deform plastically .
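the following python sketch illustrates two points made above : a trace - free simple - slip generator produces a plastic distortion with determinant one ( plastic incompressibility ) , and the exponentials of two different slip generators do not combine additively , which is the source of the baker - campbell - hausdorff correction terms . the particular slip directions and the slip amount gamma are arbitrary illustrative choices .

    import numpy as np
    from scipy.linalg import expm

    # simple-slip generator g (x) n with g orthogonal to n: trace-free, so the
    # induced plastic distortion exp(gamma * g n^T) has determinant one
    n = np.array([0.0, 0.0, 1.0])          # slip-plane normal
    g = np.array([1.0, 0.0, 0.0])          # slip direction, g . n = 0
    A = np.outer(g, n)                     # generator of the simple slip

    gamma = 0.7                            # accumulated slip
    P = expm(gamma * A)
    print(np.isclose(np.linalg.det(P), 1.0))    # True: plastic incompressibility

    # two different slip systems do not commute: the product of the exponentials
    # is not the exponential of the sum of the generators
    B = np.outer(np.array([0.0, 1.0, 0.0]), np.array([1.0, 0.0, 0.0]))
    print(np.allclose(expm(A) @ expm(B), expm(A + B)))   # False in general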
since , again by the _ principle of virtual power _ , for any subdomain , we obtain the local balance of powers as and can take any value , we conclude the * elasto - plastic relaxation balance * upon defining the * referential plastic stress * as we can rewrite the elasto - plastic relaxation balance as since , we indeed have , see section [ ssc : state_spaces ] .let us also indicate how this rule can be written for and the corresponding elastic stress : lies in , the dual space to . in coordinates , we represent by matrices and the duality product by the frobenius product . of course, are in general strict subspaces of , so there are additional conditions that have to be satisfied by matrices in these spaces . during coordinate calculations , these side constraints can become `` lost '' and so at the end we need to project onto the corresponding spaces again . we have for any , and using , \\ & = \nabla y^t { \mathrm{d}}_e w_e(x , e ) p^{-t } : d,\end{aligned}\ ] ] since for the derivative of in direction , we have = - \nabla y p^{-1 } pd p^{-1 } = - \nabla y d p^{-1}\ ] ] by the well - known formula ^{-1 } ) = - a^{-1 } q a^{-1} ] ( recall that ) with the properties _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ is convex , lower semicontinuous , and strictly positive . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ here , strict positivity means that for ( which is equivalent to for whenever is positively homogeneous of any order ) .we furthermore assume that the * flow rule * can be written in the form of the * biot inclusion * ( cf .section 4.4 in and ) here , is called the * total drift * and denotes the convex subdifferential of at , that is , consists of all those * flow stresses * that satisfy where `` '' is the component - wise scalar product between and , that is , for and we have . if we for the moment ( and for the purpose of illustration only ) assume that our dissipation potential is differentiable , then the flow rule expresses that which is just in a special form .in particular , we assume convexity and lower semicontinuity of , which are not present in . if is positively -homogeneous , then convexity of expresses the following intuitive constraint : when the system is moving in direction , where , , then the frictional power is ( by euler s positive homogeneity theorem ) ; alternatively , we could oscillate very quickly between rates with time - fractions and , respectively , which would result in the frictional power .the convexity tells us that this oscillatory path expends more energy , as should intuitively be the case for `` reasonable '' materials .moreover , it can be shown that the _ maximum plastic work principle _ , which is a strengthening of the _ second law of thermodynamics _, implies convexity of , see p. 
5759 in .the lower semicontinuity is a minimal continuity assumption and can again be justified on physical grounds or out of mathematical necessity .finally , the strict positivity means that energy is dissipated if and only if the material deforms plastically .note that in many applications does not depend on the state variables since we can often model changes in the size or shape of the elastic stability domain ( defined below ) through residual stresses .however , one may extend the theory to incorporate this constraint as well , but chose not to do this here for the sake of notational clarity .the reason we specify the ( primal / dual ) flow rule in terms of the drift and not in terms of the current flow velocity is that when the material flows , the flow rule needs to `` flow with the material '' , and therefore ought to be a _referential _ ( or _ structural _ ) quantity . also see section [ ssc : ref_struct ] for more on this .finally , define the * dual dissipation potential * ] with it always holds that , hence the above condition is automatically satisfied for convex , giving another reason to require convexity as a structural property .finally , for a ( time - differentiable ) process we define the * ( total ) dissipation * over an interval } \,\bigr\}},\ ] ] which is closed , convex , and a neighborhood of .also let . then , according to , ^{-1}.\ ] ] at , the elastic deformation domain is which is not convex : the matrices and are in , but their average is not , as a simple calculation shows . in its most general form , * hardening * describes the process by which the _ effective _ elastic stability domain changes , in particular expands or contracts , due to a change in the internal variables .hardening is usually anisotropic and is due to a variety of microscopic effects like dislocation entanglement and generation . in * isotropic hardening * , the elastic stability domain remains centered around the origin but can expand in what is called * positive hardening * and contract in * negative hardening * or * softening*. in * kinematic hardening * , the elastic stability domain is translated ( usually in direction of the plastic flow ) . combined , these two effects give a first approximation to the often - observed phenomenon that an increase in tensile yield strength goes along with a decrease in compressive yield strength , called the * bauschinger effect * , which , however , in general is more complex , see for instance section 3.3.7 in and .[ ex : vonmises_istrop_kinemat ] in the often - considered * mises isotropic kinematic hardening * , the elastic stability domain depends on two internal variables , and , so that ( considered as a vector in ) .then , the elastic stability domain is where is a constant ( the initial tensile yield strength ) and ^{1/2} ] be a given * rate - dependent dissipation ( pseudo)potential * that we assume to be convex , lower semicontinuous , and * superlinear * , that is furthermore , for , we want to assume that is _ continuously differentiable _ in , expressing the fact that we model a _system , i.e. that the flow direction and speed are uniquely determined .then , also is continuously differentiable for all such that for some , see .note that additionally could depend on the current state , but we suppress this dependency for ease of notation . as long as , plastic flow is governed by our flow rule see , . 
additionally , we want to express the notion that the flow stops once we have reached the elastic stability domain .thus , we really use the * combined flow rule * clearly , we only need the dual flow potential for since the flow stops once we have reached the yield surface . therefore ,we now change notation slightly and from here onwards denote by the * _ combined _ dissipation potential * , for which we furthermore require the decomposition into the following two components : a. is the * rate - independent dissipation potential * , which is equal to the support function of , i.e. the dual of the characteristic function of ( which is on and otherwise ) .b. is the convex , differentiable * residual dissipation potential*. we also assume that its dual is strictly positive , that is and for ( this is essentially a superlinearity assumption on and for instance satisfied if ) .it turns out that the decomposition implies indeed , we can compute , using the inf - convolution ] ( see theorem 16.4 of ) , that ^*(\sigma ) = [ r_1^ * \operatorname{\square}r_+^*](\sigma ) = \inf_{\gamma \in { \mathcal{s } } } r_+^*(\sigma - \gamma).\ ] ] in particular , using the strict positivity of , we have for the * dual _ combined _ dissipation potential * ] .thus , the decomposition above holds , where we also note that because of and because of .an important special case , which we discuss for further illustration , is a material that is * associative*. while there is considerable disagreement in the literature over what exactly constitutes an associative flow rule , we here understand it to mean that there exists a proper , lower semicontinuous , convex , and strictly increasing and a norm on such that the utility of these assumptions is that they allow to simplify the definition ( cf . ) of the dual combined dissipation potential to where is the * distance function * with respect to the norm . in this case , all flow rates are normal to the elastic stability domain , that is , where denotes the normal cone to at , i.e. this is related to the _ maximum plastic work principle _ , which is a strengthening of the second law of thermodynamics , see p. 5759 in .a common choice for a rate - dependent dissipation potential ( recall that contains all trace - free -matrices ) , is a mises power law , see for instance section 101 in , that is , the restriction of to the complement of the elastic stability domain here , and .since for , we may compute ^+,\ ] ] where ^+ = \max(s,0) ] , where may depend on the path , until relaxation is achieved when .we define through note that , where . assuming that this path exists and is unique , we denote its endpoint by . 
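as a one-dimensional illustration of the combined flow rule and of the relaxation path just described, the following sketch integrates a perzyna-type overstress law; every constitutive choice in it (the scalar elastic stability interval, the power-law residual potential, all numerical values) is an assumption made here for illustration and not taken from the text:

```python
import numpy as np

# placeholder material data (assumptions for this sketch only)
E, sigma_Y, k, q = 1.0, 1.0, 0.5, 2.0    # elastic modulus, yield stress, viscosity, power-law exponent
eps_total = 2.5                           # total strain, held fixed while the system relaxes
dt, T = 1e-3, 20.0

eps_p, t = 0.0, 0.0
while t < T:
    sigma = E * (eps_total - eps_p)                    # elastic stress for the current plastic strain
    overstress = max(abs(sigma) - sigma_Y, 0.0)        # distance to the elastic stability interval [-sigma_Y, sigma_Y]
    rate = np.sign(sigma) * (overstress / k) ** (1.0 / (q - 1.0))   # flow only outside the domain
    eps_p += dt * rate
    t += dt

sigma = E * (eps_total - eps_p)
print(sigma)   # close to sigma_Y: the flow has driven the stress back to the yield surface
```

the endpoint of this integration plays the role of the relaxation endpoint defined above: once the stress re-enters the elastic stability domain the overstress vanishes and the plastic flow stops.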
if we also set if already , then can be considered a function with the property that conceptually , rate - independence is of course not a physical property of a system , but a _ mathematical _ rescaling limitso , consider an elasto - plastic process with external loading , where .then , the basic assumption in this work is that the system evolves according to a ( fast ) flow rule as defined above .however , as discussed before , if were to be held constant at some point in time , the system would settle very quickly into a rest state until the external loading changes and the system is pushed out of equilibrium .the traditional rate - independent modeling is built upon the assumption that only this global movement is interesting and the fast `` relaxation '' movements towards a rest state can be neglected , at least if the system does not jump to a far - away state in an instant .this idealized situation is mathematically expressed through _ rescaling _ : define for small the process as a solution of the fast dynamics for the slower external loading , where now ] be the relaxation path starting at , which we consider to be constantly extended to and set note that it is possible that , depending on the value of , the relaxation path may be shorter than the full interval ( if ) or longer , in which case the relaxation is not complete by the next time point . furthermore , since we assumed that all relaxations are internal , during the relaxation the full deformation does not change . see figure [ fig : relaxflow ] for an illustration of the relaxation .+ thus , we have the * effective flow rule * note that only if , at the end of the relaxation path it holds that hence the system has reached an elastically allowable state . of course , in this process, does not necessarily retain the minimization property from stage ( i ) .we iterate this scheme for and call the resulting the * time - stepping evolution * at level .assume we have a sequence of time - stepping solutions . in this section we will consider heuristics of the limit passage in order to understand the continuous - time behavior .this can be seen as a `` blueprint '' of a future fully rigorous mathematical analysis .we do not state all the precise assumptions , but indicate key requirements as we go along . as the basis of all of the following we require : 1 . in a sufficiently good sense .this would have to be made precise in a rigorous treatment , we here simply assume that the convergence is `` good enough '' to make all the following arguments work .we will distinguish two types of points : regular and singular points . at * regular points* , we assume the bound ] as follows : ) : = \sup\ , { \biggl\{\ , \sum_{k=1}^n { \mathcal{d}}_0(h(\tau_{k-1 } ) , h(\tau_k ) ) \ \ \textup{\textbf{:}}\ \s = \tau_0 < \tau_1 < \cdots < \tau_n = t , \ ; n \in { \mathbb{n}}\,\biggr\}},\ ] ] where for , \to { \mathfrak{h}}\gamma(0 ) = h_0 \gamma(1 ) = h_1 ] that ) = \int_s^t \int_\omega r_1 \biggl ( \frac{{\mathrm{d}}}{{\mathrm{d}}\tau } h_0(\tau ) \biggr ) { \;\mathrm{d}}x { \;\mathrm{d}}\tau.\ ] ] we assume here that under a sufficiently strong convergence , is * lower semicontinuous * , that is , if ( with respect to our sufficiently strong convergence ) , then ) \leq \liminf_{j\to\infty}\ , \operatorname{diss}_1(h_j;[s , t]).\ ] ] this can be justified ( even for not so strong convergences ) by appealing to the _ convexity _ of . 
also , from , we get - { \mathcal{w}}[u^n(t ) ] \leq { \mathcal{w}}[u_{\mathrm{start } } ] < \infty.\ ] ] therefore , in particular , the are of uniformly bounded variation .let be a regular point. assume 1 . n \to \infty ] uniformly in , 2 . uniformly in , 3 . uniformly in .now , at , our is a minimizer of ] .thus , 1 . \qquad \text{for regular .} , .}\ ] ] since is zero if and only if by , this implies the stability condition 1 . at singular points instead of we only require 1 . this can be justified as follows : if our minimizers in stage ( i ) always have enough regularity ( i.e. enough derivatives ) , then the speed of the unrescaled elasto - plastic relaxation flow remains bounded . since we speed up the flow by a factor of , the assumption ( a6 ) is then realistic .also , the stress is uniformly bounded everywhere , i.e. ( a3 ) holds . in a forthcoming work on geometrically linear rate - independent systems , ( a3 ) , ( a6 ) can be proved rigorously ( albeit for the quadratic -norm ) . in the singular case, we need to _ rescale _ our processes around as follows : set since then is uniformly in bounded by the chain rule for the referential derivative , we may assume that for the minimization property , we can argue in a similar way as we did at regular points : it holds that \leq \liminf_{n\to\infty } \biggl ( { \mathcal{e}}[t^n_*,u^n(t^n _ * ) ] + \int_{t^n_*}^{t_0+\lambda^n \theta } \frac{{\mathrm{d}}}{{\mathrm{d}}\tau } { \mathcal{e}}[\tau , u^n(\tau ) ] { \;\mathrm{d}}\tau \biggr),\ ] ] where is the such that .we estimate { \;\mathrm{d}}\tau\biggr| } & \leq { \biggl|\int_{t^n_*}^{t_0+\lambda^n \theta } \int_\omega \sigma(u^n(\tau ) ) \circ \frac{{\mathrm{d}}}{{\mathrm{d}}\tau } h^n(\tau ) { \;\mathrm{d}}x { \;\mathrm{d}}\tau\biggr| } \\ & \qquad + { \biggl|\int_{t^n_*}^{t_0+\lambda^n \theta } \int_\omega \dot{f}(\tau ) \cdot y(\tau ) { \;\mathrm{d}}x { \;\mathrm{d}}\tau\biggr| } \\ & \leq c \biggl ( \frac{2^{-n}}{\lambda^n } + 2^{-n } \biggr ) \\ & \to 0 \qquad\text{as }\end{aligned}\ ] ] by our choice of the as going to zero more slowly than and also using ( a4 ) , ( a5 ) . as before we get for any that \leq \liminf_{n\to\infty } { \mathcal{e}}[t^n_*,u^n(t _ * ) ] \leq \liminf_{n\to\infty } { \mathcal{e}}[t^n_*,\hat{y},h^n(t^n _ * ) ] = { \mathcal{e}}[t_0,\hat{y},h(t_0,\theta)].\ ] ] in the last limit we use that and further assumed continuity properties of .thus , 1 . \qquad \text{for singular .}t_0 t_0 ] differentiating the the energy balance ( e3 ) with respect to time at a regular point ( away from any singular points ) and dropping the integral ( the following argument works on any subdomain ) , we get - f(t ) \bigr ) \cdot \dot{y}_0(t ) = -r_1 \biggl ( \frac{{\mathrm{d}}}{{\mathrm{d}}t } h_0(t ) \biggr).\ ] ] thus , also employing the euler lagrange equation , we get additionally , we have from the stability ( e2 ) that , i.e. adding the last two assertions , we have this is nothing else than the written - out formula of the * rate - independent differential inclusion * 1 . a process together with _ transients _ with for all * singular ( jump ) points * is called a * two - speed solution * if the following assertions are true : 1 . \qquad \text{for regular .} t_0 \in [ 0,t) ] 3 . 4 . = { \mathcal{e}}[0,u_{\mathrm{start } } ] - \operatorname{diss}_+(u;[0,t ) ) -\int_0^t \int_\omega \dot{f}(\tau , x ) \cdot y_0(\tau , x ) { \;\mathrm{d}}x { \;\mathrm{d}}\tau ] , i.e. 
the jump length is finite ) .the present manuscript aims to make a contribution to the mathematical modelling of large - strain elasto - plasticity and to provide a conceptual framework for existence theorems .future work will cast the concepts introduced into this work , in particular the two - speed solutions , into a mathematically rigorous framework and perform a thorough mathematical analysis .i would like to thank sergio conti , georg dolzmann , gilles francfort , michael ortiz , ulisse stefanelli , florian theil , and emil wiedemann ( in particular , example [ ex : d_nonconvex ] is due to him ) for many interesting discussions related to the present work .the support of the author through the epsrc research fellowship ep / l018934/1 on `` singularities in nonlinear pdes '' and through the royal society travel grant ie131532 is gratefully acknowledged .filip rindler has held the post of zeeman lecturer ( assistant professor ) in mathematics at the university of warwick since 2013 . after completing his doctorate in nonlinear pde theory and the calculus of variations at the oxpde centre within the university of oxford in 2011 , he moved to the university of cambridge to take up the gonville & caius college drosier research fellowship , holding the position until 2015 ( on leave 20132015 ) . for 20142017 he is funded by a full - time epsrc research fellowship on `` singularities in nonlinear pdes '' .g. dal maso , a. desimone , and f. solombrino , _ quasistatic evolution for cam - clay plasticity : a weak formulation via viscoplastic regularization and time rescaling _ , calc .partial differential equations * 40 * ( 2010 ) , 125181 .t. hochrainer , s. sandfeld , m. zaiser , and p. gumbsch , _ continuum dislocation dynamics : towards a physical theory of crystal plasticity _ ,journal of the mechanics and physics of solids * 63 * ( 2014 ) , 167178 . a. mielke and f. theil , _ a mathematical model for rate - independent phase transformations with hysteresis _ , proceedings of the workshop on `` models of continuum mechanics in analysis and engineering '' ( h .- d .alber , r.m .balean , and r. farwig , eds . ) , shaker verlag , 1999 , pp .117129 .s. sandfeld , t. hochrainer , m. zaiser , and p. gumbsch , _ continuum modeling of dislocation plasticity : theory , numerical implementation , and validation by discrete dislocation simulations _ , j. mater . res .* 26 * ( 2011 ) , 623632 .h. ziegler and c. wehrli , _ the derivation of constitutive relations from the free energy and the dissipation function _ , advances in applied mechanics , vol . 25 , adv .25 , academic press , 1987 , pp . 183237
|
this work presents a new modeling approach to macroscopic , polycrystalline elasto - plasticity starting from first principles and a few well - defined structural assumptions , incorporating the mildly rate - dependent ( viscous ) nature of plastic flow and the microscopic origins of plastic deformations . for the global dynamics , we start from a two - stage time - stepping scheme , expressing the fact that in most real materials plastic flow is much slower than elastic deformations , and then perform a detailed analysis of the slow - loading limit passage . in this limit , a rate - independent evolution can be expected , but this brings with it the possibility of jumps ( relative to the `` slow '' time ) . traditionally , the dynamics on the jump transients often remain unspecified , which leads to ambiguity and deficiencies in the energy balance . in order to remedy this , the present approach precisely describes the energetics on the jump transients as the limit of the rate - dependent evolutions at `` singular points '' . it turns out that rate - dependent behavior may ( but does not have to ) prevail on the jump transients . based on this , we introduce the new solution concept of `` two - speed solutions '' to the elasto - plastic evolutionary system , which incorporates a `` slow '' and a `` fast '' time scale , the latter of which parametrizes the jump transients . msc ( 2010 ) : 74c15 ( primary ) ; 74c20 , 35q74 , 74h10 ( secondary ) . keywords : elasto - plasticity , slow - loading limit , rate - independent system , quasi - static evolution , two - speed solution . date : ( version 1.0 ) .
|
skysoft is an astronomical software directory, but with a peculiar overall approach: we have chosen to design the site as a community-supported directory. everyone can contribute, with software news, users' views, comments and bug notifications, while developers are invited to post a brief description of their products, together with a classification that eases the search and retrieval of software projects. the idea is that, much like a chat session, content remains timely because of frequent user interaction, and software developers acquire more visibility thanks to the astronomical context. traditional sites, such as and , are valuable and widely used, but we think they are most useful in the standard context of mainstream data analysis and reduction, where ten to twelve applications do 95% of the work and remain there for many years. it is difficult for traditional sites to easily accommodate new ideas and new approaches for less-used telescopes+instruments. for instance, during data mining we found several interesting approaches to the same specialized problem (not addressed by mainstream tools) that had been rewritten over a decade by different groups, each without knowledge of the others' efforts. we propose a faster and more flexible approach as a complement to traditional sites. we also do not need to hold all the expertise in all the fields skysoft covers: it is enough that such expertise resides in the user community. skysoft is intended to be built by the community that uses it! if you think that skysoft lacks some information you deem useful, just add it: many others can benefit from your (minimal) effort, and you gain publicity for your work. our aim is to build a site useful for astronomers and instrument developers, and to make this utility widely available, easy to use, and up to date with the latest developments. we cannot cope alone with the enormous amount of information and expertise needed, but the community as a whole has all the necessary competence: if we all share our two cents of information, we will build a site more useful for everybody. we started with a small amount of software found on the net, just to bootstrap the site; the selection was rather arbitrary, based on our own knowledge. we have certainly missed important information: please add it and help us improve the site! we are ready to add a newsletter, an event calendar, discussion lists, and more. admittedly, skysoft is not indispensable: everything available at the skysoft site can also be found using other astronomical software collections, by asking google, or by asking colleagues and friends. but wait, my last google query returned 112000 documents! a more specialized site can speed things up significantly!
we aim to be a first choice in the search process. the skysoft software database has been structured as a tree. strictly speaking, this structure does not do justice to the software itself, which is far too complex to be fully represented in such a two-dimensional scheme; considerations of database implementation and ease of consultation led us to accept this simplification. skysoft lives by the collaboration of many people: thanks to all our guests, collaborators, maintainers and data miners. thanks to g. calculli and f. giovannini for their advice. funding and support have been provided by inaf/ira and the arcetri astrophysical observatory.
|
we present the skysoft project. skysoft is a yaasd (yet another astronomical software directory), but with a different overall approach. to be useful, skysoft needs to be a long-lived project that puts little pressure on its maintainers, imposes a very low nuisance level on the developer community, and requires a low maintenance cost. our aim is to design skysoft as a community-supported directory to which everyone, both developers and end users, can contribute.
|
an issue central to modeling local field potentials is whether the extracellular space around neurons can be considered as a resistive medium .a resistive medium is equivalent to replacing the medium by a simple resistance , which considerably simplifies the computation of local field potentials , as the equations to calculate extracellular fields are very simple and based on coulomb s law ( rall and shepherd , 1968 ; nunez and srinivasan , 2005 ) .forward models of the eeg and inverse solution / source localization methods also assume that the medium is resistive ( sarvas , 1987 ; wolters and de munck , 2007 ; ramirez , 2008 ) .however , if the medium is non - resistive , the equations governing the extracellular potential can be considerably more complex because the quasi - static approximation of maxwell equations can not be made ( bdard et al . , 2004 ) .experimental characterizations of extracellular resistivity are contradictory . some experiments reported that the conductivity is strongly frequency dependent , and thus that the medium is non - resistive ( ranck , 1963 ; gabriel et al ., 1996a , 1996b , 1996c ) .other experiments reported that the medium was essentially resistive ( logothetis et al ., 2007 ) . however , both types of measurements used current intensities far larger than physiological currents , which can mask the filtering properties of the tissue by preventing phenomena such as ionic diffusion ( bdard and destexhe , 2009 ) . unfortunately , the issue is still open because there exists no measurements to date using ( weak ) current intensities that would be more compatible with biological current sources . in the present paper ,we propose an indirect method to estimate if extracellular space can be considered as a purely resistive medium .we start from maxwell equations and show that if the medium was resistive , the frequency - scaling of electroencephalogram ( eeg ) and magnetoencephalogram ( meg ) recordings should be the same .we then test this scaling on simultaneous eeg and meg measurements in humans .we recorded the electromagnetic field of the brain during quiet wakefulness ( with alpha rhythm occasionally present ) from four healthy adults ( 4 males ages 20 - 35 ) .participants had no neurological problems including sleep disorders , epilepsy , or substance dependence , were taking no medications and did not consume caffeine or alcohol on the day of the recording .we used a whole - head meg scanner ( neuromag elekta ) within a magnetically shielded room ( imedco , hagendorf , switzerland ) and recorded simultaneously with 60 channels of eeg and 306 meg channels ( nenonen et al . , 2004 ) .meg squid ( super conducting quantum interference device ) sensors are arranged as triplets at 102 locations ; each location contains one `` magnetometer '' and two orthogonal planar `` gradiometers '' ( grad1 , grad2 ) . 
unless otherwise noted , meg will be used here to refer to the magnetometer recordings .locations of the eeg electrodes on the scalp of individual subjects were recorded using a 3d digitizer ( polhemus fasttrack ) .hpi ( head position index ) coils were used to measure the spatial relationship between the head and scanner .electrode arrangements were constructed from the projection of 3d position of electrodes to a 2d plane in order to map the frequency scaling exponent in a topographical manner .all eeg recordings were monopolar with a common reference .sampling rate was 1000 hz .for all subjects , four types of consecutive recordings were obtained , in the following order : ( 1 ) empty - room recording ; ( 2 ) awake `` idle '' recording where subjects were asked to stay comfortable , without movements in the scanner , and not to focus on anything specific ; ( 3 ) a visual task ; ( 4 ) sleep recordings .all idle recordings used here were made in awake subjects with eyes open , where the eeg was desynchronized .a few minutes of such idle time was recorded in the scanner . for each subject , 3 awake segments with duration of 60 seconds were selected from the idle recordings ( see example signals in fig . [ eegmeg ] ) . as electrocardiogram ( ecg ) noise often contaminates meg recordings , independent component analysis ( ica ) algorithmwas used to remove such contamination ; either infomax ( bell and sejnowski , 1995 ) or the `` jade algorithm '' from the eeglab toolbox ( delorme and makeig , 2004 ) was used to achieve proper decontamination . in all recordings, the ecg component stood out very robustly . in order not to impose any change in the frequency content of the signal , we did not use the ica to filter the data on any prominent independent oscillatory component andit was solely used to decontaminate the ecg noise .we verified that the removal of ecg did not change the scaling exponent ( not shown ) . in each recording session , just prior to brain recordings , we recorded a few minutes of the electromagnetic field present within the dewar in the magnetic shielded room .similar to wake epochs , 3 segments of 60 seconds duration were selected for each of the four recordings .this will be referred to `` empty room '' recordings and will be used in noise correction of the awake recordings . in each subject , the power spectral density ( psd ) was calculated by first computing the fast fourier transform ( fft ) of 3 awake epochs , then averaging their respective psds ( square modulus of the fft ) .this averaged psd was computed for all eeg and meg channels in order to reduce the effects of spurious peaks due to random fluctuations .the same procedure was also followed for empty - room signals . because the environmental and instrumental sources of noise are potentially high in meg recordings , we took advantage of the availability of empty - room recordings to correct for the presence of noise in the signal .we used five different methods for noise correction , based on different assumptions about the nature of the noise .we describe below these different correction methods , while all the details are given in _supplementary methods_. 
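before turning to these correction methods, the epoch-averaged psd computation described above can be sketched as follows (one channel, no windowing or detrending, which is a simplification of the actual processing):

```python
import numpy as np

def averaged_psd(epochs, fs=1000.0):
    """Average the per-epoch PSDs (squared modulus of the FFT) of one channel.

    epochs : array of shape (n_epochs, n_samples), e.g. three 60 s segments at 1000 Hz.
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.mean(np.abs(np.fft.rfft(epochs, axis=1)) ** 2, axis=0)
    return freqs, psd

# hypothetical usage for one EEG or MEG channel, x of shape (3, 60000):
# freqs, psd = averaged_psd(x)
```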
a first procedure for noise correction , exponent subtraction ( es ) , assumes that the noise is intrinsic to the squid sensors .this is justified by the fact that the frequency scaling of some of the channels is identical to that of the corresponding empty - room recording ( see results ) .in such a case , the scaling is assumed to entirely result from the `` filtering '' of the sensor , and thus the correction amounts to subtract the scaling exponents .a second class of noise subtraction methods assume that the noise is of ambient nature and is uncorrelated with the signal .this chatacteristics , warrants the use of spectral subtraction ( where one subtracts the psd of the empty - room from that of the meg recordings ) , prior to the calculation of the scaling exponent .the simplest form of spectral subtraction , linear multiband spectral subtraction ( lmss ) , treats the sensors individually and does not use any spatial / frequency - based statistics in its methodology ( boll et al . , 1979 ) .an improved version , nonlinear multiband spectral subtraction ( nmss ) , takes into account the signal - to - noise ratio ( snr ) and its spatial and frequency characteristics ( kamath and loizou , 2002 ; loizou , 2007 ) . a third type , wiener filtering ( wf ) , uses a similar approach as the latter , but obtain an estimate of the noiseless signal from that of the noisy measurement through minimizing the mean square error ( mse ) between the desired and the measured signal ( lim et al . , 1979 ;abd el - fattah et al . , 2008 ) .a third type of noise subtraction , partial least squares ( pls ) regression , combines principal component analysis ( pca ) methods with multiple linear regression ( abdi , 2010 ; garthwaite , 1994 ) .this methods finds the spectral patterns that are common in the meg and the empty - room noise , and removes these patterns from the psd .the method to estimate the frequency scaling exponent was composed of steps : first , applying a spline to obtain a smooth fft without losing the resolution ( as can happen by using other spectral estimation methods ) ; second , using a simple polynomial fit to obtain the scaling exponent . to improve the slope estimation , we approximated the psd data points using a spline , which is a series of piecewise polynomials with smooth transitions and where the break points ( `` knots '' ) are specified .we used the so - called `` b - spline '' ( see details in de boor , 2001 ) .the knots were first defined as linearly related to logarithm of the frequency , which naturally gives more resolution to low frequencies , to which our theory applies .next , in each frequency window ( between consecutive knots ) , we find the closest psd value to the mean psd of that window .then we use the corresponding frequency as the optimized knot in that frequency range , leading the final values of the knots .the resulting knots stay close to the initial distribution of frequency knots but are modified based on each sensor s psd data to provide the optimal knot points for that given sensor ( fig .[ knots]a ) .we also use additional knots at the outer edges of the signal to avoid boundary effects ( eilers and marx , 1996 ) .the applied method provides a reliable and automated approach that uses our enforced initial frequency segments with a high emphasis in low frequency and it optimizes itself based on the data . 
after obtaining a smooth b - spline curve ,a simple 1st degree polynomial fit was used to estimate the slope of the curve between 0.1 - 10 hz ( the fit was limited to this frequency band in order to avoid the possible effects of the visible peak at 10 hz on the estimated exponent).using this method provides a reliable and robust estimate of the slope of the psd in logarithmic scale , as shown in fig .[ knots]b . for more details on the issue of automatic non - parametric fitting , and the rationale behind combining the polynomial with spline basis functions ,we refer the reader to magee , 1998 as well as royston & altman , 1994 and katkovnik et al , 2006 .this procedure was realized on all channels automatically ( 102 channels for meg , 60 channels for eeg , for each patient ) .every single fit was further visually confirmed . in the case of meg ,noise correction is essential to validate the results .for doing so , we used different methods ( as described above ) to reduce the noise .next , all the mentioned steps of frequency scaling exponents were carried out on the corrected psd .results are shown in fig .[ topo ] .three rois were selected for statistical comparisons of the topographic plots . as shown in figure [ topo ] ( panel f ) , fr ( frontal )roi refers to the frontal ellipsoid , vx ( vertex ) roi refers to the central disk located on vertex and pt ( parietotemporal ) refers to the horseshoe roi .we start from first principles ( maxwell equations ) and derive equations to describe eeg and meg signals .note that the formalism we present here is different than the one usually given ( as in plonsey , 1969 ; gulrajani , 1998 ) , because the linking equations are here considered in their most general expression ( convolution integrals ) , in the case of a linear medium ( see eq .77.4 in landau and lifchitz , 1984 ) .this generality is essential for the problem we treat here , because our aim is to compare eeg and meg signals with the predictions from the theory , and thus the theory must be as general as possible .maxwell equations can be written as if we suppose that the brain is linear in the electromagnetic sense ( which is most likely ) , then we have the two following linking equations .the first equation links the electric displacement with the electric field : where is a symmetric second - order tensor . a second equation links magnetic induction and the magnetic field : where is a symmetric second - order tensor . if we neglect non - resistive effects such as diffusion ( bdard and destexhe , 2009 ) , as well as any other nonlinear effects with the magnitude of electric field .such variations could appear due to ephaptic ( electric - field ) interactions for example .in addition , any type of linear reactivity of the medium to the electric field or magnetic induction can lead to frequency - dependent electric parameters ( for a detailed discussion of such effects , see bdard and destexhe , 2009 ) . ], then we can assume that the medium is linear .in this case , we can write : where is a symmetric second - order tensor.[liaisonj ] ) are often algebraic and independent of time ( for example , see eqs .5.2 - 6 , 5.2 - 7 and 5.2 - 8 in gulrajani , 1998 ) .the present formulation is more general , more in the line of landau and lifchitz ( 1984 ) . ]because the effect of electric induction ( faraday s law ) is negligible , we can write : this system is much simpler compared to above , because electric field and magnetic induction are decoupled . 
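putting the pieces together, a stripped-down version of the exponent estimation could look as follows; it replaces the b-spline smoothing by a direct log-log fit and implements only the simplest (lmss-like) spectral subtraction, so it is a sketch of the procedure rather than the exact pipeline used here:

```python
import numpy as np

def scaling_exponent(freqs, psd_signal, psd_empty_room=None, f_lo=0.1, f_hi=10.0):
    """Estimate the frequency-scaling exponent of a PSD between f_lo and f_hi (in Hz)."""
    psd = psd_signal
    if psd_empty_room is not None:
        # crude spectral subtraction with a small positive floor; the band-specific
        # NMSS / Wiener / PLS corrections discussed in the text are not reproduced here
        psd = np.maximum(psd_signal - psd_empty_room, 1e-3 * psd_signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return -slope   # PSD ~ f**(-exponent); around 1 for the EEG channels reported below
```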
by taking the fourier transform of maxwell equations ( eqs .[ max ] ) and of the linking equations ( eqs .[ liaisond],[liaisonb],[liaisonj ] ) , we obtain : where and where the relation in eq .[ liaisonfourier ] is the current density produced by the ( primary ) current sources in the extracellular medium .note that in this formulation , the electromagnetic parameters , and depend on frequency .this generalization is essential if we want the formalism to be valid for media that are linear but non - resistive , which can expressed with frequency - dependent electric parameters .it is also consistent with the kramers - kronig relations ( see landau and lifchitz , 1984 ; foster and schwan , 1989 ) . is the current density of these sources in fourier frequency space .this current density is composed of the axial current in dendrites and axons , as well as the transmembrane current . of course , this expression is such that at any given point , there is only one of these two terms which is non - zero .this is a way of preserving the linearity of maxwell equations .such a procedure is legitimate because the sources are not affected by the field they produce . from eq .[ maxfourier ] ( faraday s law in fourier space ) , we can write : from eq .[ maxfourier ] ( ampre - maxwell s law in fourier space ) , we can write : setting , one obtains : where is a source term and is a symmetric second - order tensor ( ) . note that this tensor depends on position and frequency in general , and can not be factorized .we will call this expression ( eq .[ fondement1 ] ) the `` first fundamental equation '' of the problem . from the mathematical identity is clear that this is sufficient to know the divergence and the curl of a field , because the solution of is unique with adequate boundary conditions . as in the case of magnetic induction ,the divergence is necessarily zero , it is sufficient to give an explicit expression of the curl as a function of the sources . supposing that is a scalar ( tensor where all directions are eigenvectors ) , and taking the curl of eq .[ maxfourier ] ( d ) , multiplied by the inverse of , we obtain the following equality : because .this expression ( eq .[ fondement2 ] ) will be named the `` second fundamental equation '' .we consider the following boundary conditions : 1 - on the skull , we assume that is differentiable in space , which is equivalent to assume that the electric field is finite . 2 - on the skull , we assume that is also continuous , which is equivalent to assume that the flow of current is continuous .thus , we are interested in solutions where the electric field is continuous . 3 - because the current is zero outside of the head , the current perpendicular to the surface of cortexmust be zero as well .thus , the projection of the current on the vector normal to the skull s surface , must also be zero . the latter expression can be proven by calculating the total current and apply the divergence theorem ( not shown ) .the `` second fundamental equation '' above implies inverting , which is not possible in general , because it would require prior knowledge of both conductivity and permittivity in each point outside of the sources .if the medium is purely resistive ( where is independent of space and frequency ) , one can evaluate the electric field first , and next integrate using the quasi - static approximation ( ampre - maxwell s law ) . 
because for low frequencies , we have necessarily , we obtain which is also known as ampre s law in fourier space .thus , for low frequencies , one can skip the second fundamental equation .note that in case this quasi - static approximation can not be made ( such as for high frequencies ) , then one needs to solve the full system using both fundamental equations . such high frequencies are , however , well beyond the physiological range , so for eeg and meg signals , the quasi - static approximation holds if the extracellular medium is resistive , or more generally if the medium satisfies ( see eqs .[ ff ] and [ maxfourier ] ) . according to the quasi - static approximation , and using the linking equation between current density and the electric field ( eq .[ liaisonfourier ] ) , we can write : because the divergence of magnetic induction is zero , we have from eq .[ identite ] : this equation can be easily integrated using poisson integral ( `` poisson equation '' for each component in cartesian coordinates ) in fourier space , this integral is given by the following expression if the medium is purely resistive ( `` ohmic '' ) , then does not depend on the spatial position ( see bedard et al . , 2004 ; bedard and destexhe , 2009 ) nor on frequency , so that the solution for the magnetic induction is given by : and does not depend on the nature of the medium . for the electric potential , from eq .[ fondement1 ] , we obtain the solution : thus , when the two source terms and are white noise , the magnetic induction and electric field must have the same frequency dependence .moreover , because the spatial dimensions of the sources are very small ( see appendices ) , we can suppose that the current density is given by a function of the form : such that and have the same frequency dependence for low frequencies .[ assump ] constitutes the main assumption of this formalism . in appendixa , we provide a more detailed justification of this assumption , based on the differential expressions of the electric field and magnetic induction in a dendritic cable .note that this assumption is most likely valid for states with low correlation such as desynchronized - eeg states or high - conductance states , and for low - frequencies , as we analyze here ( see details in the appendices ) .thus , the main prediction of this formalism is that if the extracellular medium is resistive , then the psd of the magnetic induction and of the electric potential must have the same frequency dependence . in the next section, we will examine if this is the case for simultaneously recorded meg and eeg signals .a total of 4 subjects were used for the analysis . figure [ eegmeg ]shows sample meg and eeg channels from one of the subjects , during quiet wakefulness .although the subjects had eyes open , a low - amplitude alpha rhythm was occasionally present ( as visible in fig . [ eegmeg ] ) . there were also oscillations present in the empty - room signal , but these oscillations are evidently different from the alpha rhythm because of their low amplitude and the fact that they do not appear in gradiometers ( see suppl . fig .s1 ) . ) .fr : frontal , vx : vertex and pt : parietotemporal .these sample channels were selected to represent both right and left hemispheres in a symmetrical fashion .inset : magnification of the meg ( red ) and `` empty - room '' ( green ) signals superimposed from 4 sample channels .all traces are before any noise correction , but after ecg decontamination . 
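for concreteness, and before turning to the data, the two solutions referred to above can be written out explicitly. the notation below is chosen here (the original symbols did not survive extraction), so this display is a hedged reconstruction of the standard poisson-integral solutions rather than a verbatim copy of the paper's equations: \[ \mathbf{b}_\omega(\mathbf{r}) = \frac{\mu}{4\pi}\int \frac{\nabla'\times\mathbf{j}_\omega(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3r' , \qquad v_\omega(\mathbf{r}) = -\,\frac{1}{4\pi\sigma}\int \frac{\nabla'\cdot\mathbf{j}^{\,p}_\omega(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3r' . \] if \(\sigma\) and \(\mu\) are constants (purely resistive medium), both fields inherit their frequency dependence solely from the source current density, which is the content of the prediction just stated.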
] in the next sections , we start by briefly presenting the method that was used to estimate the frequency scaling of the psds. then we report the scaling exponents for 0.1 - 10 hz frequency bands and their differences in eeg and meg recordings . because of the large number of signals in the eeg and meg recordings, we used an automatic non - parametric procedure to estimate the frequency scaling ( see methods ) .we used a b - spline approximation by interpolation with boundary conditions to find a curve which best represents the data(see methods ) .a high density of knots was given to the low - frequency band ( 0.1 - 10 hz ) , to have an accurate representation of the psd in this band , and calculate the frequency scaling .an example of optimized knots to an individual sensor is shown in figure [ knots]a ; note that this distribution of knots is specific to this particular sensor .the resulting b - spline curves were used to estimate the frequency scaling exponent using a 1st degree polynomial fit .figure [ knots]b shows the result of the b - spline analysis with optimized knots ( in green ) capturing the essence of the data better than the usual approximation of the slope using polynomials ( in red ) .the goodness of fit showed a robust estimation of the slope using b - spline method .residuals were -0.01 0.6 for empty - room , 0.2 0.65 for meg awake , 0.05 0.6 for lmss , 0.005 0.64 for nmss , 0.08 0.5 for wf,0.001 0.02 for pls , and -0.02 0.28 for eeg b - spline ( all numbers to be multiplied by 10 ) .figure [ psd ] shows the results of the b - spline curve fits to the log - log psd vs frequency for all sensors of all subjects . in this figure , and only for the ease of visual comparison , these curves were normalized to the value of the log(psd ) of the highest frequency .as can be appreciated , all meg sensors ( in red ) show a different slope than that of the eeg sensors ( in blue ) .the frequency scaling exponent of the eeg is close to 1 ( scaling ) , while meg seems to scale differently .thus , this representation already shows clear differences of scaling between eeg and meg signals .however , meg signals may be affected by ambient or instrumental noise . to check for this ,we have analyzed the empty - room signals using the same representation and techniques as for meg , amd the results are represented in fig .[ psd ] ( insets ) .empty - room recordings always scale very closely to the meg signal , and thus the scaling observed in meg may be due in part to environmental noise or noise intrinsic to the detectors .this emphasizes that it is essential to use empty - room recordings made during the same experiment to correct the frequency scaling exponent of meg recordings . to correct for this bias ,we have used five different procedures ( see methods ) .the first class of procedure ( es ) considers that the scaling of the meg is entirely due to filtering by the sensors , which would explain the similar scaling between meg and empty - room recordings . in this case , however , nearly all the scaling would be abolished , and the corrected meg signal would be similar to white noise ( scaling exponent close to zero ) . because the similar scaling may be coincidental , we have used two other classes of noise correction procedures to comply with different assumptions about the nature of the noise . 
the second class ,is composed of spectral subtraction ( lmss and nmss ) or wiener filtering ( see methods ) .these methods are well - established in other fields such as acoustics .the third class , uses statistical patterns of noise to enhance psd ( pls method , for details see methods ) .we applied the above methods to all channels and represented the scaling exponents in topographic plots in fig .this figure portrays that both meg and eeg do not show a homogenous pattern of the scaling exponent , confirming the differences of scaling seen in fig .the eeg ( figure [ topo]a ) shows that areas in the midline have values closer to 1 , while those at the margin can deviate from scaling .meg on the other hand shows higher values of the exponent in the frontal area and a horseshoe pattern of low value exponents in parietotemporal regions ( figure [ topo]b ) . as anticipated above , empty - room recordings scale more or less uniformly with values close to ( figure [ topo]c ) , thus necessitating the correction for this phenomena to estimate the correct meg frequency scaling exponent .different methods for noise reduction are shown in figure [ topo ] : spectral subtraction methods , such as lmss ( figure [ topo]d ) , nmss ( figure [ topo]e ) , wf enhancement ( figure [ topo]f ) .these corrections preserve the pattern seen in figure [ topo]b , but tend to increase the difference with eeg scaling : one method ( lmss ) yields minimal correction while the other two ( nmss and wf ) use band - specific snr information in order to cancel the effects of background colored - noise ( see suppl .s2 ) , and achieve higher degree of correction ( see supplementary methods for details ) .figure [ topo]g portrays the use of pls to obtain a noiseless signal based on the noise measurements .the degree of correction achieved by this method is higher than what is achieved by spectral subtraction and wf methods .exponent subtraction is shown in figure [ topo]h .this correction supposes that the scaling is due to the frequency response of the sensors , and nearly abolishes all the frequency scaling ( see also suppl .s3 for a comparison of different methods of noise subtraction ) . figure [ roi]a represents the overall pattern providing evidence on the general difference and the wider variability in meg recordings .the next three panels relate to the individual rois .of the spectral subtraction methods , nmss achieves a higher degree of correction in comparison with lmss ( see figure [ topo]c , figure [ topo]d as well as suppl .s3 ) . because nmss takes into account the effects of the background colored - noise ( suppl .s2 ) , it is certainly more relevant to the type of signals analyzed here .the results of nmss and wf are almost identical and confirm one another ( see figure [ topo]e , as well as suppl . fig .therefore , of this family of noise correction , only nmss is portrayed here . of the methods dealing with different assumptions about the nature of the noise ,the `` exponent subtraction '' almost abolishes the frequency scaling ( also see in figure [ topo]h , as well as suppl .s3 ) . applying pls yields values in between `` exponent subtraction '' and that of nmss and is portrayed in figure [ roi ] . in the frontal region( figure [ roi]b ) , the eeg scaling exponents show higher variance by comparison to meg .also , eeg shows some overlaps with the distribution curve of non - corrected meg ; this overlap becomes limited to the tail end of the nmss correction and is abolished in the case of pls correction . 
as can be appreciated ,vx ( figure [ roi]c ) shows both similar values and similar distribution for eeg and non - corrected meg .these similarities , in terms of regional overall values and distribution curve , are further enhanced after nmss correction .it is to be noted that , in contrast to these similarities , the one - to - one correlation of nmss and eeg at vx roi are very low ( see below , table 1b - c ) .the values of pls noise correction are very different from that of eeg and have a similar , but narrower , distribution curve shape .two other rois show distinctively different values and distribution in comparing eeg and meg .both nmss and pls agree on this with pls showing more extreme cases .figure [ roi]d reveals a bimodal distribution of meg exponents in the parietotemporal region ( pt roi ) .this region has also the highest variance ( in meg scaling exponents ) compared to other rois .the distinction between eeg and meg is enhanced in pls estimates ; however , the variance of pt is reduced in comparison to nmss while the bimodality is still preserved but weakened .the values of mean and standard deviation for these rois exponents are provided in table 1a ( mean standard deviation ) ..roi statistical comparison .a. mean and std of frequency scale exponent for all regions and individual roi . b. numerical values of linear pearson correlation . c. rank - based kendall correlation .d. non - parametric test of analysis of variance ( kruskalwallis ) .corrected meg refers to spectral subtraction using nmss .the full table is provided in supplementary information . [ cols="^,<,<,<",options="header " , ] figure s1 : frequency spectra of magnetometers and gradiometers .comparison of awake ( blue ) vs empty - room ( red ) recordings between magnetometers ( mag ) and gradiometers ( grad1 , grad2 ) in a sample subject . as for the eeg, the meg signal is characterized by a peak at around 10 hz , which is presumably due to residual alpha rhythm ( although the subject had eyes open ) .this is also visible from the meg signals ( fig .[ eegmeg ] ) as well as from their psd ( fig .[ psd ] and mag panel here ) .the power spectrum from the empty - room signals also show a peak at around 10 hz , but this peak disappears from the gradiometer empty - room signals , while the 10 hz peak of meg still persists for gradiometers awake recordings .this suggests that these two 10 hz peaks are different oscillation phenomena .all other subjects showed a similar pattern .figure s2 : signal - to - noise ratio ( snr ) of magnetometers ( mag ) for multiple frequency bands : 0 - 10 hz ( slow , delta and theta ) , 11 - 30 hz ( beta ) , 30 - 80 hz ( gamma ) , 80 - 200 hz ( fast oscillation ) , 200 - 500 hz ( ultra - fast oscillation ) . in the scatterplots ,red astrisks relate to individual sensors and the blue line is the band - specific mean across the sensors . in boxplots , the box has lines at the lower quartile , median ( red ) , and upper quartile values .smallest and biggest non - outlier observations ( 1.5 times the interquartile range irq ) are shown as whiskers .outliers are data with values beyond the ends of the whiskers and are displayed with a red + sign . in all subjects ,the snr shows a band - specific trend and has the highest value for lower frequencies and gradually drops down as band frequency goes up . 
as the frequency drops , the variability of snr ( among sensors ) rises ; therefore , the snr of the lowest band ( 1 - 10 hz ) shows the highest sensors - to - sensor variability and the highest snr in comparison to other frequency bands .figure s3 : noise correction comparison .every horizontal line showes a voxel of the topographical maps shown in fig .[ topo ] sorted based on the scaling exponent values of awake meg ( left stripe ) . using a continuous color spectrum ,these stripes show that minimal correction is achived by lmss .as indicated in the text , the performance of this method is not reliable due to the nonlinear nature of snr ( see suppl .nmss yields higher degree of correction .wf performs almost identical to nmss ( not shown here ) .exponent subtraction almost abolishes the sacling all together ( far right stripe ) .pls results in values between nmss and `` exponent subtraction '' . for details of each of these correction procedures , see methods .lmss , nmss and wf rely on additive uncorrelated nature of noise . exponent subtraction " assumes that the noise is intrinsic to squid .pls ascertains the characteristics of noise to the collective obeserved pattern of spectral domain across all frequencies .see text for more details .
|
the resistive or non - resistive nature of the extracellular space in the brain is still debated , and is an important issue for correctly modeling extracellular potentials . here , we first show theoretically that if the medium is resistive , the frequency scaling should be the same for electroencephalogram ( eeg ) and magnetoencephalogram ( meg ) signals at low frequencies ( hz ) . to test this prediction , we analyzed the spectrum of simultaneous eeg and meg measurements in four human subjects . the frequency scaling of eeg displays coherent variations across the brain , in general between and , and tends to be smaller in parietal / temporal regions . in a given region , although the variability of the frequency scaling exponent was higher for meg compared to eeg , both signals consistently scale with a different exponent . in some cases , the scaling was similar , but only when the signal - to - noise ratio of the meg was low . several methods of noise correction for environmental and instrumental noise were tested , and they all increased the difference between eeg and meg scaling . in conclusion , there is a significant difference in frequency scaling between eeg and meg , which can be explained if the extracellular medium ( including other layers such as dura matter and skull ) is globally non - resistive . * keywords : * _ eeg ; meg ; local field potentials ; extracellular resistivity ; maxwell equations ; power - law _
|
one of the main goals of nanoengineering and quantum optics is the development of nanodevices that reliably process quantum information .a basic requirement for these quantum information processing devices is the ability to universally control the state of a single qubit on timescales much shorter than the coherence time .promising candidates have been studied experimentally , for instance , in superconducting qubits , quantum - dot charge qubits , and in cavity qed . as in all technological applicationsthe natural question arises how these devices can be operated `` optimally '' . in this context ,an important question in the field of quantum information and quantum ( control-)dynamics has recently attracted a lot of attention , namely the _ quantum speed limit _ .the quantum speed limit time is the minimal time a quantum system needs to evolve between an initial and a final state , and it can be understood as a generalization of the heisenberg uncertainty relation for time and energy .a particularly useful set of mathematical tools for approaching this kind of problems is summarized under the headline _ optimal control theory_. however , depending on how these tools are used different `` optimal '' results are concluded , which was recently discussed carefully for qubits evolving under unitary dynamics in ref . .for instance , caneva _ et al ._ showed that the krotov algorithm fails to converge if one tries to drive a qubit faster than an independently determined quantum speed limit , while hegerfeldt used optimal control theory to compute a quantum speed limit that allows even faster evolution .however , quantum optimal control theory is not restricted to determining the maximal speed , and has , e.g. , been successfully applied to finding driving protocols that maximize squeezing and entanglement in harmonic oscillators , or efficiently cool molecular vibrations .nevertheless , optimal control theory has remained a mathematical tool box , which is mostly applied in various fields of engineering and applied mathematics , see for instance refs . , while it is still rather scarcely discussed in the physics literature and textbooks .however , finding `` optimal '' processes has been an important topic of constant interest in virtually all fields of physics . only recently , optimal processes in thermodynamic applications have attracted renewed interest .moreover , in quantum computing so called _ shortcuts to adiabaticity _ have been in the focus of intense research efforts .these shortcuts are optimal driving protocols that reproduce in a finite time the same outcomes as resulting from an infinitely slow process , see for instance ref . and references therein .the purpose of the present paper is two - fold . 
on the one hand, we will be interested in solving an interesting and important problem , namely how to `` optimally '' control a simple quantum information device .to this end , we will analyze the damped jaynes - cummings model by means of optimal control theory , and discuss the optimal finite - time processing of one qubit of information .a similar classical problem was recently analyzed in .on the other hand , this paper is also of pedagogical value .we will use the fully analytically solvable example in order to illustrate concepts of optimal control theory , and to `` translate '' between the language typically used in engineering textbooks and vocabulary that is more familiar in quantum thermodynamics .we will illustrate that the formulation of the problem is crucial , as the resulting optimal protocol intimately depends on the question asked .as an important consequence of our study , we will be able to reconcile two fundamentally different approaches to the quantum speed limit .[ [ outline ] ] outline + + + + + + + we aim at a presentation of the results , which is as self - contained as possible . to this end, the paper is organized as follows : we start in sec . [sec : opt ] with a brief review of elements of optimal control theory , and establish notation .section [ sec : qubit ] is dedicated to a description of the system under study , namely the damped jaynes - cummings model . in sec .[ sec : control ] we will derive `` optimal '' control protocols , that minimize the heating rate or minimize the energy dispersion rate . in sec . [sec : qsl ] we turn to controlling the system at the quantum speed limit . finally , in sec .[ sec : conclusion ] we conclude the paper with a few remarks .we start by summarizing the elements of optimal control theory , which we will be using in the following , see also .particular focus will be put on some subtleties that will become important in the later analysis .imagine a physical system whose state is fully described by a vector .the components of could be the real , physical microstate , a point in phase space , the state of a qubit , or a collection of macroscopic variables as , for instance , voltage , current , volume , pressure , etc .the evolution of for times is described by a first order differential equation , the so - called _ state equation _ , where the vector is a collection of external control parameters , or simply the control .in a thermodynamic set - up can be typically related to a collective degree of freedom of a work reservoir .[ [ accessibility ] ] accessibility + + + + + + + + + + + + + a central issue in the set - up of a problem in optimal control theory is the _accessibility_. note that in mathematical control theory the concept of accessibility is slightly more general than used in the present context .generally , accessibility refers to controls that are able to drive the state , , to an open set in state space . for the sake of simplicity we focus here on the simpler question , namely : given a state equation with initial value , which control protocols drive the system to a _state during time ?imagine , for instance , that a qubit is initially prepared in its up - state , and one wants to drive the qubit into its down - state at .then only certain parametrizations of an external magnetic field realize this process . in addition ,one could imagine that there are further physical constraints , which have to be met by , as there could be , e.g. 
, limited resources or technical limitations .this leaves us with a set of _ physically allowed _ or _admissible _ control protocols , however , not all admissible protocols are necessarily practical or even physically meaningful .thus , we can imagine that some controls fit our purposes better and some worse , and we want to identify the _ optimal admissible control _ . [[ optimal - protocols ] ] optimal protocols + + + + + + + + + + + + + + + + + in the paradigm of optimal control theory the task is , then , to find the particular such that a _ performance measure _ , or _cost functional _ is minimized . the cost , xx\alpha ] under the condition that evolves under the state equation .this problem is very similar to problems in classical mechanics , if we identify x ] , and is a parameter , whereas in the present context we are explicitly asking for an _ optimal protocol _ .it is worth emphasizing that the definition of x\alpha ] is non - negative .then the control hamiltonian can be written as and the costate equation becomes as the state equation the costate equation can be solved analytically , and we have substituting the solutions and in the control hamiltonian leaves us with an integro - differential equation for . the optimal control is determined by finding the particular control(s ) for which is constant .as argued earlier , will generically not be an admissible control , cf .[ fig : admissible ] .therefore , we continue our analysis with constructing a sequence of admissible controls , , to find the _ optimal admissible control _ , .[ [ optimal - control - sequence ] ] optimal control sequence + + + + + + + + + + + + + + + + + + + + + + + + our aim is to make full use of the modified algorithm of steepest descent . to this end , let us consider the gradient , as noted above , if we construct a sequence naively as , then typically will not be an admissible protocol , that is here this means that we have to modify the usual algorithm in a way that integral remains invariant , therefore , a modified sequence can be constructed as , for which , with , all controls of the sequence , , are admissible .one easily convinces oneself that the modified sequence still converges uniformly as is simply a numerical constant .the simplest admissible control is described by a constant protocol , which we choose as our initial sophisticated guess , in fig .[ fig : j_heat ] we plot the sequence of performance measures with a stepsize .we observe that the algorithm converges within the first 25 iterations .note that `` convergence '' is quantified by eq ., that means a sequence is considered to `` have converged '' if the inequality is fulfilled . for the specific example in fig .[ fig : j_heat ] we chose . *( color online ) cost functional for minimal heating rate : * cost functional for the modified algorithm for method of steepest descent with constant control as initial guess , stepsize , and termination parameter ] figure [ fig : z_heat ] shows the optimal admissible control together with the optimal trajectory .it turns out that in the optimal case is a linearly decreasing function , for which the heating rate is negative and constant . *( color online ) optimal admissible control for minimal heating rate : * optimal admissible control ( red , solid line ) together with initial , sophisticated guess ( purple , dashed line ) ; optimal evolution of state as an inset . 
] as a second example let us consider processes where we want to minimize the rate with which the internal energy of the qubit disperses . such `` optimal '' protocols might be important in situations where one has to worry about decoherence due to some additional coupling to the environment . to this end , consider the variance of the hamiltonian from which we compute the rate of dispersion as , . in this case the performance measure can be defined as $\int_0^\tau \mathrm{d}t\, d_t^2 = \int_0^\tau \mathrm{d}t\, \gamma_t^2\,(1+z_t)^2\, z_t^2$ . accordingly , the control hamiltonian becomes which yields the costate equation and the gradient necessary to construct the optimal control sequence . in fig . [ fig : j_dis ] we plot the resulting sequence of performance measures , where we observe that the convergence is much slower than in the case of the minimal heating rate , cf . [ fig : j_heat ] . * ( color online ) cost functional for minimal dispersion rate : * cost functional for the modified algorithm for method of steepest descent with constant control as initial guess , stepsize , and termination parameter . ] nevertheless , the sequence converges satisfactorily and the resulting admissible optimal control is shown in fig . [ fig : z_dis ] . we observe that the optimal control with minimal dispersion rate is significantly different from the control that minimizes the heating rate . the small kink around is most likely a numerical artefact , which probably could be `` ironed out '' by letting the algorithm run for a longer period . however , it seems that this artefact is a generic peculiarity of the problem , as it appears for various initial controls . * ( color online ) optimal admissible control for minimal dispersion rate : * optimal admissible control ( red , solid line ) together with the initial , sophisticated guess ( purple , dashed line ) ; optimal evolution of the state as an inset . ] comparing figs . [ fig : z_heat ] and [ fig : z_dis ] illustrates our earlier point , namely that the resulting optimal control crucially depends on the set - up and formulation of the problem . in the previous sections we introduced elements of optimal control theory and a conceptually simple model system for quantum information processing . in addition , we illustrated concepts and methods by deriving the optimal control protocols , which either minimize the heating rate or the energy dispersion rate . equipped with these methods we now continue to analyze the problem of processing information at the quantum speed limit . for uncontrolled , time - independent systems , the quantum speed limit time determines the maximum rate of evolution , and is a bound combining the results of mandelstam - tamm ( mt ) and margolus - levitin ( ml ) : it is given for isolated , time - independent systems by , where is the variance of the energy of the initial state and its mean energy with respect to the ground state . generalizations of the mt and ml findings to driven and open systems have been recently proposed in refs . the approach in these papers has been called geometric , as the derivation relies on an estimation of the geometric speed .
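before turning to these two approaches , the constraint - preserving steepest - descent iteration used in the preceding subsections can be summarized in a short numerical sketch . everything specific below is an assumption rather than a quotation from the text : the state equation is taken as $\dot z_t = -\gamma_t (1+z_t)$ , which is consistent with the dispersion rate $\gamma_t^2(1+z_t)^2 z_t^2$ used above ; the gradient is obtained by finite differences instead of the costate equation ; grid , step size and initial state are arbitrary choices , and positivity of the decay rate is not enforced .

```python
import numpy as np

tau, nt = 1.0, 100
dt = tau / nt
z0 = 1.0                          # qubit initially in the excited state (assumption)

def propagate(gamma):
    """Euler integration of dz/dt = -gamma(t) (1 + z)."""
    z = np.empty(nt + 1)
    z[0] = z0
    for i in range(nt):
        z[i + 1] = z[i] - gamma[i] * (1.0 + z[i]) * dt
    return z

def dispersion_cost(gamma):
    """J[gamma] = integral of gamma^2 (1+z)^2 z^2 dt  (dispersion-rate functional)."""
    z = propagate(gamma)[:-1]
    return float(np.sum(gamma**2 * (1.0 + z)**2 * z**2) * dt)

def gradient(cost, gamma, eps=1e-6):
    """Finite-difference gradient; the paper uses the costate equation instead."""
    g = np.zeros_like(gamma)
    for i in range(nt):
        up, dn = gamma.copy(), gamma.copy()
        up[i] += eps
        dn[i] -= eps
        g[i] = (cost(up) - cost(dn)) / (2.0 * eps)
    return g

gamma = np.ones(nt)               # constant control as the initial "sophisticated guess"
step = 0.5
for _ in range(100):
    g = gradient(dispersion_cost, gamma)
    g -= g.mean()                 # remove the mean so that sum(gamma)*dt, and hence the
                                  # final state z(tau), stays (approximately) fixed:
                                  # this is the admissibility projection described above
    gamma -= step * g

print("cost after descent:", dispersion_cost(gamma))
```

the finite - difference gradient is chosen here only for brevity ; in practice the costate - based gradient is far cheaper and is what makes the method scale .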
[[ geometric - approach ] ] geometric approach + + + + + + + + + + + + + + + + + + the question one asks is the following : given a particular external control , how fast can a quantum system follow ? the answer is given by the maximal speed of quantum evolution . to this end , consider the evolution from an initially pure state to a final state . under nonunitary dynamics , the final state will generally be mixed . the geometric approach is then based on the dynamical properties of the bures angle between the initial and final states of the quantum system ; the bures angle is a generalization to mixed states of the angle in hilbert space between two state vectors . the maximal speed of quantum evolution is then determined by where denotes the operator norm , i.e. , the largest singular value . [ [ minimal - time - approach ] ] minimal time approach + + + + + + + + + + + + + + + + + + + + + a fundamentally different question was addressed in ref . , namely : what is the minimal time a qubit needs to evolve from a _ particular _ initial state to a _ particular _ final state ? moreover , it was shown for a qubit evolving under unitary dynamics that this problem can be solved by means of optimal control theory . since the geometric approach and the minimal time approach yield the same quantum speed limit for time - independent systems , it was not a priori clear which approach is physically more relevant for driven systems . [ [ importance - of - the - formulation ] ] importance of the formulation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we have already seen earlier that the formulation of the problem anticipates what will be considered optimal . in particular , the choice of admissible controls is crucial , and a more careful analysis of the formulation and set - up used to derive quantum speed limits is in order . therefore , we continue our analysis by deriving the optimal controls resulting from the minimal time approach and from the geometric approach for our model system introduced above . it will turn out that , by carefully formulating the problem , both approaches , minimal time and geometric , can be reconciled . in the minimal time approach one is interested in minimizing the process time during which the qubit evolves . therefore , the performance measure is simply given by $\tau$ . in this case , we will , therefore , need a term that minimizes the difference between the left and the right side and `` makes '' the inequality as close as possible to an equality . second , we are interested in such processes whose evolution speed is maximal , i.e. , is maximal . therefore , a corresponding performance measure can be defined . the optimal admissible protocol that maximizes the quantum evolution speed is identical to the control that maximizes the heating rate . such an optimal control , however , is just the optimal protocol that we derived within the minimal time approach , namely the delta - peak control .
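for reference , the two quantities invoked in this section can be written in their standard forms . these are the usual textbook expressions and are assumed here for orientation only ; the paper's own conventions and prefactors may differ .

```latex
% combined Mandelstam-Tamm / Margolus-Levitin bound (time-independent case)
\tau_{\mathrm{QSL}} \;=\; \max\!\left\{ \frac{\pi\hbar}{2\,\Delta E},\; \frac{\pi\hbar}{2\,\langle E\rangle} \right\},
% Bures angle between initial and final states, with Uhlmann fidelity F
\qquad
\mathcal{L}(\rho_0,\rho_\tau) \;=\; \arccos\!\sqrt{F(\rho_0,\rho_\tau)},
\qquad
F(\rho,\sigma) \;=\; \Big(\operatorname{tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\Big)^{2}.
```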
in conclusion , we explained that the minimal time approach and the geometric approach ask fundamentally different questions . however , we also showed that if the problem is formulated carefully by means of optimal control theory , the same results for the quantum speed limit time can be obtained . we found that there is no fundamental bound on the speed with which one qubit of information can be written by a leaky optical cavity , if we allow for an infinite power input into the system . practically , however , one is rather interested in `` optimal '' controls that are experimentally more relevant , as for instance the fastest evolution under a bounded heating rate . to solve these problems one first has to carefully define the admissible controls , and find a cost functional that reflects the full physical situation . we expect that the actual quantum speed limit is then governed by , for instance , the maximal control power . naively computing quantum speed limits by means of optimal control theory can yield unphysical results . therefore , special attention has to be paid to a careful definition of the set of admissible controls and the performance measure . the outcome of optimal control theory is only as good , i.e. , as physical , as the formulation of the problem . in this paper we have shown how to find control protocols that optimally process one qubit of information . to this end , we have presented some elements of optimal control theory . for a specific system , namely the damped jaynes - cummings model , we have then developed a modified method of steepest descent , which ensures that all elements of a control sequence are actually admissible controls . with this novel algorithm we have numerically determined the optimal controls that minimize the power input and the dispersion rates . special emphasis has been put on illustrating that the outcome of an analysis by means of optimal control theory crucially depends on the formulation of the problem . in doing so , we have been able to reconcile two fundamentally different approaches to the quantum speed limit , which yield the same result if the problem is formulated carefully . last but not least , this paper is of pedagogical value . the presentation of the analysis is mostly self - contained and we hope that our results will spur interactions between different fields .
in particular , we believe that this paper could make optimal control theory more accessible and better known among physicists , and introduce engineers and applied mathematicians to problems and questions in quantum thermodynamics .
|
we study quantum information processing by means of optimal control theory . to this end , we analyze the damped jaynes - cummings model , and derive optimal control protocols that minimize the heating or energy dispersion rates , and controls that drive the system at the quantum speed limit . special emphasis is put on analyzing the subtleties of optimal control theory for our system . in particular , it is shown how two fundamentally different approaches to the quantum speed limit can be reconciled by carefully formulating the problem .
|
diffusion - limited aggregation ( in short , dla ) is a statistical mechanics growth model that has been introduced in 1981 by sander and witten .it is defined as follows . a first particle a site of is fixed .then , a particle is released `` at infinity '' and performs a symmetric random walk .as soon as it touches the first particle , it stops and sticks to it .then , we release another particle , which will also stick to the cluster ( the set of the particles of the aggregate ) , and so on after a large number of iterations , one obtains a fractal - looking cluster .dla does not just model sticking particles , but also hele - shaw flow , dendritic growth and dielectric breakdown .figure [ todd ] illustrates the viscous fingering phenomenon , which appears in hele - shaw flow .this phenomenon can be observed by injecting quickly a large quantity of oil into water .this model is extremely hard to study ; only two non - trivial results are rigorously known about dla : an upper bound on the speed of growth and the fact that the infinite cluster has almost surely infinitely many holes , i.e. that the complement of the cluster has infinitely many finite components .the difficulty comes from the fact that the dynamics is neither monotone nor local , and that it roughens the cluster .the _ non - locality _ is quite clear : if big arms surround , even if they are far from it , will never be added to the cluster . by _ non - monotonicity _ ( which is a more serious issue ) , we mean that there is no coupling between a dla starting from a cluster and another from a cluster such that , at each step , the inclusion of the clusters remains valid almost surely . to understand why ,throw the same particles for both dynamics , i.e. use the nave coupling .the big cluster will catch the particles sooner than the small one : when a particle is stopped in the -dynamics - cluster is still bigger than the -one ] , it may go on moving for the -dynamics and stick somewhere that is not in the -cluster , which would break the monotonicity .in fact , this is even a proof of the non - existence of _ any _ monotonic coupling , under the assumption that there exists such that if , can be connected to infinity by a -path avoiding .finally , the fact that the dynamics _ roughens _ the cluster instead of smoothing it is what makes the difference between the usual ( external ) dla and the internal dla of , for which a shape theorem exists . even though this roughening is not mathematically established , simulations such as the one of figure [ dla ] suggest it by the fractal nature of the picture they provide . the rigorous study of dla seeming , for the moment , out of reach , several toy models have been studied .these models are usually easier to study for one of the following reasons : * either the particles are not added according to the harmonic measure of the cluster ( i.e. launched at infinity ) but `` according to some nicer measure '' ; * or the dynamics does not occur in the plane or for results on long - range dla on . ] . 
in this paper , we prove some results on directed diffusion - limited aggregation ( ddla ) , which is a variant where the particles follow downward directed random walks .a large cluster is presented in figure [ ddla ] .directed versions of dla have already been considered by physicists but , to our knowledge , they have been rigorously studied only in the case of the binary tree ( or bethe lattice ) .the present model is defined in the plane .simulations strongly suggest that the ddla - cluster converges after suitable rescaling to some deterministic convex compact , delimited from below by two segments .ddla can be seen either as a ballistic deposition model where the falling particles fluctuate randomly or as a stretch version of dla .see respectively and .see also for a study of the hastings - levitov version of ddla ; and the present paper have been written independently .section [ defddla ] is devoted to several equivalent definitions of dla . in section [ infvol ], we define the dynamics in infinite volume . in section [ info ] ,we obtain a bound on the speed of propagation of the information for a ddla starting from a ( sufficiently ) horizontal interface . in section [ kestin ] , we adapt kesten s argument ( see ) to obtain bounds on the speed of horizontal growth and vertical growth .finally , section [ sec : last ] explores the geometry of the infinite cluster .we use `` a.s.e . '' as an abbreviation for `` almost surely , eventually '' , which means either `` almost surely , there exists such that for all '' or `` almost surely , there exists such that for all '' .i thank vincent beffara for proposing this topic of research to me , as well as for his advice and availability .i am indebted to vincent beffara and jessica todd for allowing me to use in this paper some of their pictures .in this paper , when dealing with ddla , we will think of as rotated by an angle of ( so that the particles we will throw move downward ) .the vertices of will often be referred to as sites .let be the set of the ( directed ) edges ; it endows with a structure of directed graph .we will denote by the graph - distance on , i.e. the -distance . if is an edge , we call the upper vertex of and its lower vertex . they are referred to as and .a downward directed symmetric random walk is a markov chain with transition probabilities an upward directed symmetric random walk is obtained with transition probabilities when the starting point of a directed random walk is not specified , it is tacitly taken to be .the height of , denoted by , is .its horizontal deviation ( relative to ) is .the height ( resp .horizontal deviation ) of relative to is ( resp . ) .if , we set the line of height is also set a line is said to be above a set if .finally , if one fixes a subset of , the activity of a site relative to is \cdot|\{e\in\textbf{e}~:~ \textbf{l}(e)\in c~\&~ \textbf{u}(e)=p\}|,\ ] ] where is an upward directed symmetric random walk and stands for the cardinality operator . in what follows, we will consider a growing subset of , called cluster .the current activity ( or activity ) of a site will then be relative to the cluster at the considered time .the activity of the cluster will be the sum over all sites of their activity . at time , the cluster is .assume that the cluster has been built up to time , and that . 
to build , choose any line above .then , independently of all the choices made so far , choose uniformly a point in , and send a downward symmetric random walk from this point .if the walk intersects , then there must be a first time when the walker is on a point of the cluster : let if the random walk fails to hit the cluster , we iterate the procedure of a starting point launching of a random walk independently and with the same , until a random walk hits the cluster , which will happen almost surely .this is obviously the same as conditioning the procedure to succeed .the dynamics does not depend on the choices of : indeed , choosing uniformly a point in and taking a step downward give the same measure to all the points of ( and if a walker goes outside , it will never hit the cluster ) .the dynamics is thus well - defined .we call this process directed diffusion - limited aggregation ( or ddla ). since the process does not depend on the choices of , we can take it as large as we want so that we may ( informally at least ) think of the particles as falling from infinity .here is another process , which is the same ( in distribution ) as ddla .we set assume that we have built , a random set of cardinality .we condition the following procedure to succeed : we choose , uniformly and independently of all the choices made so far , an edge such that .we launch an upward directed symmetric random walk from .we say that the procedure succeeds if the random walk does not touch .the particle added to the cluster is the upper vertex of the edge that has been chosen . iterating the process, we obtain a well - defined dynamics .it is the same as the first dynamics : this is easily proved by matching downward paths with the corresponding upward ones .we now define ddla in continuous time : this is the natural continuous time version of the second definition of ddla .let be a family of independent poisson processes of intensity 1 indexed by the set of the directed edges .the cluster is defined as and we set .assume that for some ( almost surely well - defined ) stopping time , the cluster contains exactly particles .then , wait for an edge whose lower vertex is in to ring ( such edges will be called growth - edges ) .when the clock on a growth - edge rings , send an independent upward directed random walk from its upper vertex .if it does not intersect , add a particle at and define to be the current time .otherwise , wait for another growth - edge to ring , and iterate the procedure .this dynamics is almost surely well - defined for all times is almost surely infinite ] because it is stochastically dominated by first - passage percolation .markov chain theory guarantees that and are identical in distribution .this definition in continuous time consists in adding sites at a rate equal to their current activity . before going any further, it may be useful to know what is the theorem we are looking for and how the results presented in this paper may play a part in its proof . in this subsection, we present _ highly informal heuristics that have not been made mathematically rigorous in any way yet_. they constitute a strategy for proving a shape theorem for ddla .there is some convex compact of non - empty interior such that converges almost surely to for the hausdorff metric . besides , the boundary of consists in two segments and the -rotated graph of a concave function . 
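before discussing the conjectured shape theorem , the discrete - time dynamics just defined can be summarized by a small simulation sketch . the conventions below are illustrative choices , not fixed by the text : the rotated lattice is represented by letting a downward step go from ( x , y ) to ( x - 1 , y - 1 ) or ( x + 1 , y - 1 ) with probability 1/2 each , and a walker sticks at the site it occupied just before stepping onto the cluster , which matches the edge - based second definition .

```python
import random

def ddla(n_particles, margin=20, drop_height=10):
    cluster = {(0, 0)}
    top = 0
    while len(cluster) < n_particles:
        # release line well above the cluster; as argued above, the dynamics
        # does not depend on how high this line is taken
        y = top + drop_height
        width = max(abs(x) for x, _ in cluster) + margin
        x = random.randrange(-width, width + 1)
        if (x + y) % 2:                  # stay on the sublattice of the seed
            x += 1
        prev = None
        while y >= 0:                    # the seed (0, 0) is the lowest cluster site
            if (x, y) in cluster:        # hit: stick at the previously visited site
                cluster.add(prev)
                top = max(top, prev[1])
                break
            prev = (x, y)
            x += random.choice((-1, 1))  # downward directed symmetric step
            y -= 1
        # a walk that misses the cluster is simply discarded and a new particle
        # is released, i.e. we condition the procedure on succeeding
    return cluster

if __name__ == "__main__":
    c = ddla(500)
    print(len(c), "particles, height", max(y for _, y in c))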
to prove such a result ,the step 0 may be to prove that the width and height of the cluster both grow linearly in time , so that we would know that we use the right scaling .this would result from a stronger form of fact [ kestt ] .provided this , one may use compactness arguments to prove that if there exists a unique `` invariant non - empty compact set '' , then we have the desired convergence ( to ) . by invariance ,we informally mean the following : if is large enough and if we launch a ddla at time from , then `` remains close '' to .this existence and uniqueness may be proved by finding a ( maybe non - explicit ) ordinary differential equation satisfied by the upper interface of .to do so , we would proceed in two steps .first of all , one needs to check that the upper interface is typically `` more or less '' the -rotated graph of a differentiable function . to do so, one would need to control fjords .roughly speaking , we call fjord the area delimited by two long and close arms of the cluster .fjords are the enemies of both regularity and `` being the graph of a function '' . hereare some heuristics about fjords : in figure [ ddla ] , we observe that there are mesoscopic fjords far from the vertical axis and no such fjord close to it .we try to account for this .we say that a site shades a second one if it can catch particles that would go to the second site if was vacant .assume that we have a behaviour as suggested by figure [ ddla ] .if we are close to the vertical axis , the local slope is close to .we will assume that , at any time , none of the two top - points of the arms delineating the considered fjord shades the other : they will thus survive ( i.e. keep moving ) , following more or less upward directed random walks . by recurrence of the 2-dimensional random walk , we obtain that the two top - points will collide at some time , closing the fjord . to avoid the shading phenomenon , one needs a still unknown proper and _ tractable _ definition of top - point .however , it seems quite reasonable to expect this phenomenon `` not to occur '' if the slope is close to because there is no initial shading . when the slope gets higher , the shading phenomenon appears .if the slope is not too high , the `` lower top - point '' manages to survive but it is hard for it to catch up with the upper one : this creates a fjord .if the slope is too high , the `` lower top - point '' stops catching particles : we are in the lower interface .now , we need to find an ode satisfied by , where is the angular parametrization of the upper interface of and is defined on .we assume that corresponds to what we think of as the vertical .assume that one can launch a ddla from an infinite line of slope ( which is made possible by section [ infvol ] ) and define a deterministic speed of vertical growth .the set being invariant , must be proportional to , where stands for the local slope of at the neighborhood of the point defined by and . more exactly, we have the knowledge of due to the previous step allows us to find .simulations suggest that ; corollary [ coro : truc ] is a weak result in this direction .the last point that has to be checked is that the lower interface consists of two segments .assume that the points of the lower interface are of bounded local slope . 
from this and large deviation theory , one can deduce that it costs typically exponentially much for a particle to stick to the lower interface at large distance from the upper interface .from the upper interface is lower than , for some constant . ]this might allow us to compare ddla with ballistic deposition , for which the upper interface converges to the graph of a concave function and the lower interface converges to the union of two segments ( use the kesten - hammersley lemma ) .in this section , we define directed diffusion - limited aggregation starting from a suitable infinite set . notice that we make the trivial adjustment that the process now lives in instead of .here is a very informal description of the construction .each edge has a poisson clock and infinitely many upward directed symmetric random walks attached to it , everything being chosen independently .when a clock rings at some edge for the time , if its upper extremity is vacant and its lower one occupied , the random walk is sent and we see if it hits the current cluster or not : we add a particle if and only if the walk does not hit the cluster . in finite volume, this is not problematic because we can ( almost surely ) define the first ( or next ) ringing time : since we only need to know the state of the cluster just before we send the walk , the construction is done . in the case of an infinite initial cluster , in any non - trivial time interval , there are almost surely infinitely many ringing times to consider . to define the dynamics ,a solution is to show that , for all , what happens at before time just depends on some random finite set of edges . indeed , in this case, we can apply the construction in finite volume .this is the idea behind harris - like constructions .see e.g. for an easy harris - like construction of ballistic deposition , the local and monotonic version of ddla .[ construction ] rigourously , the construction goes as follows .let be a family of independent poisson processes of intensity 1 indexed by the set of the directed edges .let be a family of independent upward directed symmetric random walks ( simply referred to as random walks in this section ) indexed by .let be the rotation of centre and angle .for , let be the -cone[defcone ] and let be the -wedge .( remember that we think of as rotated by an angle of . )when is not specified , it is taken to be equal to the introduced in the next line .-cone and the -wedge for . ]there is some such that for all , where maps to . .] let us fix let us pick a site in and try to decide whether we add it to the cluster before time or not and , if so , when . if this can be done with probability 1 , then the dynamics is almost surely well - defined . indeed , it is enough to check every in .a site is said to be activated if there is an upward directed path such that : * , * , * there is an increasing -tuple such that and for every , the clock at rings at time .the model consisting in adding a vertex before time if and only if the condition above is satisfied for instead of is called directed first - passage percolation ( or dfpp ) .we also say that a directed edge is activated if there is an upward directed path such that : * , * * , * there is an increasing -tuple such that and for every , the clock at rings at time . 
for any directed edge , each time the clock at rings ,if belongs to the current dfpp cluster , then we launch a new random walk from ; the random walk to be launched is .[ fact : expo ] the probability that is activated decays exponentially fast in .fact [ fact : expo ] is a direct consequence of the exponential decay of subcritical percolation if .let denote the -ball of centre and radius .let .if the following holds for : then can not belong to the activation cluster of for .but , by the exponential decay of activation percolations over a time - range equal to ,[page : decroissance ] the probability that this condition is not satisfied is lower than which decays exponentially fast in .let .the wedge based at is defined as .it divides into two connected components . a point of that belongs to the same connected component as said to be to the left of the wedge based at .the set of the points of that are to the left of the wedge based at is denoted by .the site is said to be good if it satisfies the following conditions : * no activated directed edge satisfies `` '' , * every random walk launched from a activated site of remains in .the site is said to be quasi - good if it satisfies the following conditions : * only finitely many activated edges satisfy `` exactly one extremity of the considered edge belongs to '' , * only finitely many walks that are launched from an activated edge whose extremities belong to do not stay in , and each of them takes only finitely many steps outside .there is a constant such that for every and every , the following inequality holds : for , consider the following events : * to an edge such that `` '' , we associate the event `` the directed edge is activated '' , * to and a directed edge the two extremities of which belong to , we associate the event `` the random walk at is launched and its step lands outside '' .it follows from the estimate above and large deviation theory that the events under consideration have summable probability .the borel - cantelli lemma implies that almost surely , only finitely many of these events occur : is thus almost surely quasi - good . by independence ,the site has positive probability to be good .in fact , this proof being quantitative , we know that the probability that is good can be bounded below by some positive constant .[ fact : goodas ] assume that the horizontal deviation is not bounded above in restriction to .then , almost surely , there is a good site such that .taking fact [ fact : goodas ] for granted , it is not hard to conclude . if * d * is bounded , then the assumption on guarantees that is finite and the process has already been defined .we may thus assume that is infinite .if is neither bounded above nor bounded below in restriction to , then fact [ fact : goodas ] and its symmetric version ( which follows from it ) imply the following : almost surely , there are a wedge to the right of and a ( symmetric ) wedge to the left of that are not crossed by the dfpp or any walk launched from between these wedges before time .since the intersection of with the area delineated by the wedges is finite , the construction is once again reduced to finite volume .( the definition of goodness guarantees that the fate of the considered area can be defined without having to look outside it . 
) finally , if is only bounded in one direction ( say above ) , then one can find a site that is good and such that : since is finite , the construction in finite volume can be used .let be a point in such that .( such a point exists owing to the geometric assumption on and because is not bounded above in restriction to . )explore the dfpp cluster of the -neighbourhood of in reverse time : starting at time from , one follows the downward dfpp process associated to the same clocks . at time , this exploration has visited a random set of sites and edges .the area explored at step 1 is this random set , together with all the vertices and edges in . by looking at the clocks and walks associated to this area, one can see if is good or not .if this is the case , we stop the process . otherwise , since we know that is _ _quasi-__good , up to taking far enough to the right of , we can assume that the information revealed so far yields no obstruction to the fact that is good .since we have made irrelevant all the negative information , the probability that is good conditionally on the fact that is not good is at least the introduced before fact [ fact : goodas ] . iterating this process , we find a good site such that in at most steps with probability at least .thus , almost surely , such a site exists .the dynamics is measurably defined and does not depend on the choices that are made .besides , the -dynamics are coherent .more exactly , at ( typical ) fixed environment , if we apply the previous construction with and , if the first construction says that is added at time , then so does the second construction . also notice that this dynamics defines a simple - markov process relative to the filtration this section , we prove bounds on the speed of propagation of the information for a horizontal initial cluster .such a control guarantees a weak ( and quantitative ) form of locality , which may help studying further ddla .let us consider a ddla launched with the initial interface before stating the proposition , we need to introduce some terminology .let , i.e. let be a non - empty finite subset of .we want to define where some information about may be available .formally , we want our area of potential influence ( a random subset of depending on time ) to satisfy the following property : if we use the same clocks and walks to launch a ddla from and one from with , the clusters will be the same outside the area of potential influence at the considered time .in fact , the way this area is defined in this section , we even know that the pair _ data of the particles present in the cluster outside the area _ satisfies the ( say weak ) markov property .we define this area as follows . 
instead of saying that a site of in the cluster or not belongs to the area of potential influence, we will say that it is red , which is shorter and more visual .a non - red site belonging to the cluster will be colored in black .initially , is the red area .then , a site becomes red when one of the following events occurs : * , the site is red , the clock on rings and the launched random walk avoids black sites ; * , the site is black , the clock on rings and the launched random walk avoids black sites and goes through at least one red site .it is not clear that this is well - defined , for the same reason that makes the definition in infinite volume uneasy , but we will see in the proof of proposition [ infoh ] that some larger set is finite almost surely for all times , so that the construction boils down to finite volume , entailing proper definition of the red area . by construction ,it is clear that if it is well - defined , red is a good notion of area of potential influence . will denote the red area at time .we set and .this holds only for this section .[ infoh ] if and if we choose as initial cluster , then is well - defined and a.s.e . for some deterministic constant independent of . without loss of generality, we may assume that .indeed , if one takes to be , then for any finite subset of , the event has positive probability .the rough idea of the proof is the following : 1 .we prove that the red area can not be extremely wide .we show that if it is not very wide , it is quite small ( in height ) .we prove that if it is small , it is narrow .we initialize the process with the first step and then iterate steps 2 and step 3 , allowing us to conclude . for ,we set we consider the following model . at time 0 , the cluster is .an edge is said to be decisive if and .the cluster does not change until a clock on a decisive edge rings .when this event occurs , , which was for some random , becomes .the data of is thus just the data of this random .let be a sequence of independent random variables such that follows an exponential law of parameter .let .then , by construction , has the same law as the sequence of the jumping times of the cluster from one state to another . almost surely , eventually , consider . by construction, one has the following estimate : = \sum_{k = f(n ) + 1}^{f(n+1 ) } \frac{1}{2k } \underset{n \to \infty}{\sim } n.\ ] ] setting , we have \underset{\text{indep.}}{= } \sum_{k = f(n ) + 1}^{f(n+1 ) } { \textbf{var}}[\tau_k ] \leq \frac{1}{4}\times\frac{\pi^2}{6}.\ ] ] by chebyshev s inequality and our control on the expectation , for large enough , \leq \frac{\pi^2}{3n^2}.\ ] ] by the borel - cantelli lemma , a.s.e .the result follows .consequently , for some ( explicit ) , a.s.e . . the area is therefore well - defined and is a.s.e . a subset of .[ lem : boot ] let be a sequence of positive real numbers such that a.s.e ., .assume that is eventually larger than .then for some constant , a.s.e . , the colored area is the set the sites that are red or black .it is dominated by the directed first - passage percolation starting from and using the same clocks .let be the cluster of this percolation at time .we know that , a.s.e . , where . 
for and , & \leq & { \mathbb{p}}\left[\exists k \leq 2n , { \textbf{h}}\left({\ensuremath{\mathfrak a}}_{\frac{k+1}{2}}^{m_n}\right ) - { \textbf{h}}\left({\ensuremath{\mathfrak a}}_{k/2}^{m_n}\right ) > a\ln m_n/2\right]\\ & \leq & 2n \max_{k \leq 2n } { \mathbb{p}}\left[{\textbf{h}}\left({\ensuremath{\mathfrak a}}_{\frac{k+1}{2}}^{m_n}\right ) - { \textbf{h}}\left({\ensuremath{\mathfrak a}}_{\frac{k}{2}}^{m_n}\right ) >a\ln m_n/2\right]\\ & \leq & 2ne^{-\text{cst}\cdot a\ln m_n}(2m_n+1)\\ & \leq&2n ( 2m_n+1)^{1-\text{cst}\cdot a}.\end{aligned}\ ] ] ( for the last inequality , see page . ) since , taking large enough implies that the probabilities ] , \geq { \mathbb{p}}[\textbf{d}(q + w_{{\textbf{h}}(a ) } ) > 0 ] \geq \frac{1}2.\ ] ] this ends the proof of the lemma .let be or .we will prove that there exists almost surely such that we then conclude using the following lemma .[ gron ] let ] and that there exists such that then , there exists some depending only on such that , eventually , its proof is postponed to the end of the section .let be such that and let us set .we are looking for an upper bound on ] ,assume that .let be the event that is the site added at time .the probability we want to control is lower than ] . by monotonicity of ,this implies that , almost surely , \leq \frac{2^k}{c\cdot{\textbf{f}}(l)^{\alpha}}.\ ] ] we now use the following exponential bound : let be a filtration .let be an -stopping time .let be a sequence of random variables such that \text { and } x_n\text { is } \mathcal{f}_n\text{-measurable}.\ ] ] let $ ] .let be such that .then , \leq \left(\frac{b}a\right)^a e^{a - b}.\ ] ] applying this to with , , and a constant stopping time , we obtain that the probability that there are at least successful fillings through * p * between times and is lower than .thus , &\leq \frac{(l+1)(l+2)}{2}\cdot 2^m\cdot \left(\frac{e}{8}\right)^m\\ { } & \leq ( 2^{k+1}+2)^2\cdot\left(\frac{e}{4}\right)^{2^{k/2}}. \end{array}\ ] ] since , by the borel - cantelli lemma and lemma [ gron ] , the proposition is established .take such that and take . for where the last line results from the choice of .thus , there exists such that .if , then the implication we have just proved shows that which implies that .since is a non - decreasing sequence , we obtain thus , we can assume that is such that .assume that there exists such that .take a minimal such . by minimality, there exists some minimal between and such that and .thus , and , since , in fact , we have proved that , for , this implies the proposition. 
we can deduce from this a version of proposition [ kesprop ] for the continuous - time model .of course , we set and .[ kestt ] for some constant , almost surely , for every positive , eventually , the quantities and grow at most linearly because continuous - time ddla is stochastically dominated by first - passage percolation .if the lower extremity of an edge is a highest point of the cluster , then the activity of this edge is 1 .consequently , if is the first time when the cluster is of height , then is stochastically dominated by independent exponential random variables of parameter 2 ( there exist at least 2 edges of lower extremity being a highest point of the cluster ) .this entails the at least linear growth of the height .it results from this , the fact that discrete- and continuous - time ddla define the same process and proposition [ kesprop ] that the number of particles in the cluster at time satisfies , for some deterministic constant , almost surely eventually goes to infinity when tends to infinity . ] .this implies that , a.s.e .in this section , we set we call elementary loop in words , measures the probability that a site in is the first site of to be touched by a walk launched from very far . for more information on the harmonic measure ,. there are several equivalent definitions of dla .the setting that will be convenient in this section is the following .the first cluster is .assume that the first clusters have been built and are subsets of .independently of all the choices made so far , choose a point in according to .throw a symmetric random walk starting at and set this process is called diffusion - limited aggregation .is very similar to this process , but not equal to it in distribution . ]let .we consider our evolution temporally : we launch the first particle , look at it step after step until it sticks , before launching the second particle a step is said to be critical if the current particle is at distance 1 from _ and _ is at distance 1 from the current cluster .we wait for a critical step ( we may wait forever ) .conditionally on the fact that such a step exists , with probability , the particle tries immediately after the first critical step to visit all the points of , say clockwise .steps that the particle would take if it was not hindered by the cluster are the ones making it visit clockwise .] since the step is critical and has cardinality 8 , the particle must stick to some particle of the cluster and the cardinality of is increased by .doing so at the first 8 critical steps that occur particle ) , the first critical step of a particle different from the one , and so on up to 8 . ]prevents from being added to the cluster .the fact thus holds for such a proof can not work for the directed version of dla .indeed , take a site with a neighbor belonging to the cluster . even assuming that there are enough particles coming in the neighborhood of , one can not always surround by modifying a finite number of steps : for example , will never be added to the cluster before if one considers a ddla launched from . the screening effect of the particles above it can be very strong , but will never reduce its activity to . with positive probability , the first vertex to be added is .denote by the maximal first coordinate of an element of that belongs to the cluster at time . 
at time , the activity of is at most times the activity of .( to see this inequality , map a directed random walk launched at that takes its first steps to the left to the random walk launched at that merges with as soon as enters . ) thus , conditionally on the fact that is added to the cluster before , the probability that is added to the cluster before is at most . since is positive , proposition [ propnever ] is established .there is an increasing path going from a point of to a point in . the conic structure and the law of large numbers guarantee that the activity of is bounded away from ( say larger than ) as long as ( which we now assume ) . thus , if and if , then will be added at rate at least . indeed , a walk can reach from by using ; then , from , it escapes with probability .consequently , will almost surely take a finite time to increase its value , as long .thus , and fact [ fact : truc ] is established .smythe and j.c .wierman , _ first - passage percolation on the square lattice _ , lecture notes in mathematics 671 , springer - verlag , berlin , 1978 .f. spitzer , _ principles of random walk _ , edition , springer , new york , 1976 .f. johansson viklund , a. sola and a. turner , _ scaling limits of anisotropic hastings - levitov clusters _, annales de linstitut henri poincar ( b ) , probabilits et statistiques , vol . 48 ( 1 ) , p. 235 - 257 , 2012 .
|
in this paper , we define a directed version of the diffusion - limited - aggregation model . we present several equivalent definitions in finite volume and a definition in infinite volume . we obtain bounds on the speed of propagation of information in infinite volume and explore the geometry of the infinite cluster . we also explain how these results fit in a strategy for proving a shape theorem for this model .
|
a mobile ad hoc network ( manet ) can be defined as a fully self - organizing system where mobile nodes freely communicate with each other without any infrastructure or centralized administration . in such networks , traditional routing algorithms like aodv and dsr can not adapt to the highly dynamic topology , while routing algorithms based on opportunistic transmission , like the two - hop relay ( 2hr ) routing scheme first proposed by grossglauser and tse and its variants , are widely applied due to their simplicity and high efficiency . therefore , a critical issue of natural interest is how to thoroughly understand the performance of such networks . in our previous work , we investigated the throughput and capacity of a buffer - limited manet . in this paper , we further extend the network model to a more general scenario and explore the end - to - end delay performance . by now , a lot of work has been done to analyze the packet delay in a class of 2hr manets . neely and modiano studied the end - to - end delay under several routing schemes such as 2hr with or without redundancy , 2hr with feedback and multi - hop relay , and developed a fundamental tradeoff between delay and throughput . closed - form expressions for message delivery delay have also been derived . there also exist many scaling law results for the delay performance in manets under various mobility models , such as the random walk model , the restricted mobility model , the brownian motion model , and hybrid random walk models . recently , liu _ et al . _ explored the packet delay under the -cast relay algorithm , the generalized two - hop relay algorithm , and the probing - based two - hop relay algorithm , respectively . however , it is notable that all these works assumed the buffer size of each node to be infinite in order to make their analysis tractable . actually , this assumption never holds for a realistic manet . in some scenarios , in order to save networking cost , or due to the scarce resources of a terminal node ( small size , low computing capability and so on ) , the buffer space equipped at each node is very limited . thus , these studies are not applicable to , and may not reflect , the real delay performance of a practical manet with limited buffers . as a first step towards this end , this paper explores the packet end - to - end delay performance of a 2hr manet where each node is equipped with a limited relay buffer , which is shared by all other traffic flows to temporarily store and forward their packets . in order to avoid interference between simultaneous transmissions , a group - based transmission scheduling scheme is adopted , and in order to avoid packet loss when the relay buffer of the receiver is blocked , a handshake mechanism is added to the 2hr routing algorithm . the main contributions of this paper are summarized as follows . * a theoretical framework is developed to fully capture the packet arrival and departure processes at both the source node and the relay node . based on this framework , we obtain the packet occupancy distribution in a relay buffer , and further derive the relay - buffer blocking probability ( rbp ) under any given exogenous input rate . * the service rate of a source node can be computed by utilizing the rbp . based on this service rate and queuing theory , we derive the queuing delay of a packet in its source node .
* with the help of rbp and the absorbing markov chain theory , we further derive the packet delivery delay .finally , the packet end - to - end delay can be obtained by incorporating the queuing delay with the delivery delay .the remainder of this paper is organized as follows .section [ section : preliminaries ] introduces the system models , transmission scheduling , routing algorithm and some basic definitions .section [ section : rbp ] provides the theoretical framework to analyze the packet deliver processes and obtain the rbp . based on the computation of rbp ,the packet queuing delay and delivery delay are derived in section [ section : delay ] .finally , section [ section : numerical_results ] provides the numerical results and section [ section : conclusion ] concludes this paper .this section introduces the system models , transmission scheduling , routing algorithm and some basic definitions involved in this paper ._ network model _ : as previous works , we consider a time - slotted and cell - partitioned network model , where the network is partitioned into nonoverlapping cells of equal size and mobile nodes roam from cell to cell according to the independent and identically distributed ( i.i.d ) mobility model . the time - slot has a fixed length and is uniformed to exact one packet transmission .the transmission range of each node is same and can cover a set of cells which have a horizontal and vertical distance of no more than cells away from its own cell , as illustrated in fig .[ fig : cell_partitioned ] . _ traffic model _ : the popular permutation traffic model is adopted .there are in total distinct unicast traffic flows , each node is the source of a traffic flow and meanwhile the destination of another traffic flow . without loss of generality , as shown in , we assume is even and the source - destination pairs are composed as follows : , , , . the exogenous packet arrival at each node is a bernoulli process with rate packets / slot . _ interference model _ : we adopt the famous protocol model to account for the interference between simultaneous transmissions . by applying the protocol model ,when node transmits packets to node , this transmission is successful if and only if : * node is within the transmission range of node . * , for any other concurrent transmitter , where denotes the distance between and , is a guard factor determined by the protocol model . _buffer constraint _ : as the available study on buffer - limited wireless networks , we consider a practical buffer constraint .each node in the manet has two queues , one local queue with unlimited buffer size for storing the self - generated packets , and one relay queue with fixed size for storing the packets coming from all other traffic flows .we adopt this buffer constraint here mainly due to the following reasons .first , in a practical network , each node usually reserves a much larger buffer space for storing its own packets rather than the relay packets .second , even though the local buffer space is not enough when bursty traffic comes , the upper layer can execute congestion control to avoid the loss of local packets .thus , our network model can be served as a well approximation for a realistic manet . as a inherent feature of wireless networks ,the interference between simultaneous transmissions is a critical issue that should be carefully considered .we adopt here the group - based transmission scheduling which has been extensively applied in previous studies . 
as illustrated in fig .[ fig : group - based_scheduling ] , all cells are divided into distinct groups , where any two cells in the same group have a horizontal and vertical distance of some multiple of cells .thus , the manet has groups and each group contains cells .each group becomes active every time slots and each cell of an active group allows one node to conduct packet transmission . by applying our interference model, should be satisfied that on the other hand , in order to allow as many simultaneous transmissions as possible , is determined as notice that in a buffer - limited manet with 2hr for packet delivery , when a source node want to send a packet to a relay whose relay queue is full , then this transmission fails , and leads to packet loss and energy waste . to solve this problem ,a handshake mechanism is introduced into the traditional 2hr algorithm , termed as h2hr . with h2hr , before each source - to - relay ( s - r ) transmission , the source node initiates a handshake with the relay node to confirm its relay - buffer occupancy state , once the relay queue is full , the source node cancels this transmission . at any time slot , for an active cell , it executes the h2hr algorithm as shown in algorithm [ algorithm : h2hr ] . with equal probability , randomly select such a pair to do source - to - destination ( s - d ) transmission . with equal probability , randomly select one node in as the transmitter . with equal probability , randomly select another node within the transmission range of as the receiver .flips an unbiased coin . *the transmitter initiates a handshake with the receiver to check whether the relay queue is full . * * the transmitter conducts a s - r transmission . ** the transmitter remains idle .* the transmitter conducts a relay - to - destination ( r - d ) transmission . remains idle .* relay - buffer blocking probability ( rbp ) * : for the concerned manet with a given exogenous packet arrival rate to each node , the relay - buffer blocking probability of a node is defined as the probability that the relay queue of this node is full . *queuing delay * : the queuing delay of a packet is defined as the interval between the time this packet arrives at its source node and the time it takes to arrive at the head of local queue .* delivery delay * : the delivery delay of a packet is defined as the interval between the time this packet arrives at the head of its local queue and the time it takes to be delivered to the destination node . * end - to - end delay * : the end - to - end delay of a packet is defined as the interval between the time this packet arrives at its source node and the time it takes to be delivered to its destination node .obviously , the end - to - end delay of a packet is the sum of its queuing delay and delivery delay .in this section , we present the theoretical framework which help us fully characterize the complicated packet delivery processes and further compute the pbp . 
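to make the scheduling rules concrete, the following python sketch mimics the per-cell decision of the h2hr algorithm described above. it is only an illustrative reading of algorithm [algorithm:h2hr]: the node class, its fields and the coin-flip structure are our own assumptions rather than part of the authors' simulator, and mobility, cell grouping and interference are assumed to be handled elsewhere.

import random

class Node:
    def __init__(self, node_id, dest_id, relay_buffer_size):
        self.id = node_id
        self.dest = dest_id        # destination of this node's own flow
        self.local_queue = []      # unlimited local queue (stores dest ids)
        self.relay_queue = []      # shared relay queue of capacity B (stores dest ids)
        self.B = relay_buffer_size

    def relay_full(self):
        return len(self.relay_queue) >= self.B

def h2hr_step(cell_nodes, in_range_nodes):
    # one scheduling decision for an active cell (illustrative only)
    if not cell_nodes:
        return "idle"
    # prefer a source-destination (s-d) pair if one exists
    sd_pairs = [(s, d) for s in cell_nodes for d in in_range_nodes
                if d.id == s.dest and s.local_queue]
    if sd_pairs:
        s, _ = random.choice(sd_pairs)
        s.local_queue.pop(0)
        return "s-d"
    # otherwise pick a transmitter in the cell and a receiver within range
    tx = random.choice(cell_nodes)
    others = [n for n in in_range_nodes if n.id != tx.id]
    if not others:
        return "idle"
    rx = random.choice(others)
    if random.random() < 0.5:
        # s-r attempt: handshake first, cancel the transmission if rx's relay buffer is full
        if tx.local_queue and not rx.relay_full():
            rx.relay_queue.append(tx.local_queue.pop(0))
            return "s-r"
        return "idle"
    # r-d attempt: deliver a relayed packet destined for rx, if tx holds one
    for k, dest in enumerate(tx.relay_queue):
        if dest == rx.id:
            tx.relay_queue.pop(k)
            return "r-d"
    return "idle"

the handshake appears only as the relay_full() check before the s-r transmission, which is exactly the point at which packet loss would otherwise occur.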
considering a given time slot and a given active cell , we denote by the probability that there are at least one node within and another node within the transmission range of , and denote by the probability that there are at least one source - destination pair , one node of this pair is within and another one is within the transmission range of .based on the results of , and are determined as , \label{eq : p } \\ & q=\frac{1}{m^{2n}}[m^{2n}-(m^4 - 2l+1)^{n/2 } ] , \label{eq : q}\end{aligned}\ ] ] where .we denote by , and the probabilities that in a time slot a node obtains the opportunity to conduct s - d , s - r and r - d transmission , respectively .similar to , we have the packet delivery processes under h2hr algorithm is illustrated in fig .[ fig : h2hr ] .the local queue can be represented as a bernoulli / bernoulli queue , where in every time slot a new packet will arrive with probability , and a corresponding service rate which is determined as due to the reversibility of bernoulli / bernoulli queue , its output process is also a bernoulli flow with rate . as shown in fig .[ fig : h2hr ] , the ratio of packets transmitted to a relay node is . due to the i.i.d mobility model ,each of the relay nodes will receive this packet with equal probability .on the other hand , for a specific node , the packets from all other flows will arrive its relay queue .then the packet arrival rate at a relay queue can be determined as we denote by that the service rate of relay queue when it contains packets , . according to the results in , we have since the relay queue can not forward and receive a packet at the same time slot , then it can be modeled as a discrete markov chain .we use to denote the limit occupancy distribution on relay queue , then we have where and .when a relay queue contains packets , this queue is full .thus we have notice that given a exogenous input rate , equation ( [ eq : p_b ] ) contains only one unknown quantity . by solving equation ( [ eq : p_b ] ) , wecan then obtain the rbp under any exogenous input rate .with the help of rbp , in this section we further analyze the packet delay performance in a buffer - limited manet .we denote by , and the packet end - to - end delay , queuing delay and delivery delay , respectively .then we have . given the exogenous input rate , the rbp can be obtained by equation ( [ eq : p_b ] ) , further the service rate of local queue ( in the rest of this paper , and are abbreviated as and if there is no ambiguous ) can be determined by formula ( [ eq : mu_s ] ) .then , the average queue length of the local queue ( bernoulli / bernoulli queue ) is given by according to the little s theorem , the average delay of a packet in its local queue is then , the queuing delay is determined as without loss of generality , we focus on a packet which is in the head of a local queue . as illustrated in fig .[ fig : absorbing_markov ] , in the next time slot , packet will be transmitted to its destination node with probability , to a relay node with probability , and stays in the local queue with probability , which forms an absorbing markov chain .we denote by and the average transition time from the transient states and to the absorbing state , respectively .then we have and we denote by the probability that there are packets destined for the same node as is in front of , when is transmitted into a relay queue . 
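the closed-form expressions of eqs. ([eq:p])-([eq:p_b]) do not survive in this extract, so the sketch below only reproduces the structure of the rbp computation: the blocking probability enters the arrival rate seen by a relay queue, the stationary occupancy of that queue returns a new blocking probability, and the whole calculation reduces to a one-dimensional fixed point. the callables lambda_r and mu_r are placeholders to be filled with the paper's expressions, and the product-form occupancy used here is a generic birth-death stand-in rather than the exact discrete-time chain of the text.

def relay_occupancy(lam_r, mu_r, B):
    # generic birth-death stand-in for the relay-queue markov chain:
    # unnormalized weights w_l, then normalize (the paper's occupancy formula may differ)
    w = [1.0]
    for l in range(1, B + 1):
        w.append(w[-1] * lam_r / mu_r(l))
    Z = sum(w)
    return [x / Z for x in w]

def solve_rbp(lam, lambda_r, mu_r, B, tol=1e-10, max_iter=10_000):
    # fixed-point iteration for the relay-buffer blocking probability pi_b:
    #   lam      : exogenous arrival rate at each node (packets/slot)
    #   lambda_r : callable (lam, pi_b) -> arrival rate at a relay queue
    #   mu_r     : callable l -> relay-queue service rate when it holds l packets
    #   B        : relay-buffer size
    pi_b = 0.0
    for _ in range(max_iter):
        pi = relay_occupancy(lambda_r(lam, pi_b), mu_r, B)
        new_pi_b = pi[B]              # probability that the relay queue is full
        if abs(new_pi_b - pi_b) < tol:
            return new_pi_b
        pi_b = new_pi_b
    return pi_b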
notice that in a time slot , a node executes the r - d transmission with probability which is shared by all the traffic flows equally , then we have where denotes the average number of packets in a relay queue which are destined for a same node , under the condition that this relay is not full .we denote by the occupancy distribution on relay queue given that this relay queue is not full. then we have thus , the average queue length of a relay queue given that it is not full is determined as then , is determined as substituting the results of ( [ eq : x_r ] ) , ( [ eq : l_r|nf ] ) and ( [ eq : l_r^1 ] ) into ( [ eq : e_t ] ) , the average packet deliver delay is further determined . finally , the expectation of packet end - to - end delay in the concerned buffer - limited manet is determined as conduct a c++ simulator to simulate the behaviors of manets considered in this paper . in our simulations , we set , and choose two network scenarios of ( case 1 : ) and ( case 2 : ) .the theoretical rbp results are computed by the equation ( [ eq : p_b ] ) . while , to obtain the simulated rbp results , we focus on a specific node and count the number of time slots that its relay - buffer is full over a period of time slots , and then calculate the ratio . fig .[ fig : rbp ] compares the theoretical curves with the simulated results under a variable system load , where , satisfies that and thus is the maximal throughput the manet can support .we can see that for both the two cases , the simulated rbp can match the theoretical curves nicely , indicating that our theoretical framework is highly efficient to capture the packet delivery processes in a buffer - limited manet with h2hr algorithm ., width=288 ] , width=288 ] based on the rbp , we then show the packet delay performance with the system load under the network setting of ( ) .the results of packet queuing delay , delivery delay and end - to - end delay are summarized in fig .[ fig : delay_rho ] .we can see that when is small , the packet queuing delay is small ; as increases , monotonically increases ; when approaches , tends to infinity leading that the packet end - to - end delay is infinite .while , the packet delivery delay performance under the limited - buffer scenario is interesting , which increases first , then decreases .this is mainly due to the reason that the effects of the exogenous input rate on delivery delay are two folds .on one hand , a larger will lead to a longer relay queue length which further leads to a larger delay in a relay queue ; on the other hand , a larger will lead to a higher rbp , which means a lower ratio of packets conducted by s - r transmission , packets in the head of local queue are more likely to wait a direct s - d transmission opportunity , thus the delivery delay decreases .in this paper , we focus on the packet delay performance of a manet under finite buffer scenario .a group - based transmission scheduling is adopted for channel access , while a handshake - based two hop relay algorithm is adopted for packet delivery .for the concerned manet , a theoretical framework has been developed to fully characterize the queuing processes of a packet and obtain the relay - buffer blocking probability .based on this , we has derived the packet queuing delay and delivery delay , respectively .the results show that the packet end - to - end delay performance curve first rises and then declines as the exogenous rate grows , finally rises again and tends to infinity as the exogenous rate approaches the network 
throughput capacity.
j. andrews, s. shakkottai, r. heath, n. jindal, m. haenggi, r. berry, d. guo, m. neely, s. weber, s. jafar, and a. yener, ``rethinking information theory for mobile ad hoc networks,'' _ieee commun. mag._, vol. 46, no. 12, pp. 94-101, 2008.
a. goldsmith, m. effros, r. koetter, m. medard, and l. zheng, ``beyond shannon: the quest for fundamental performance limits of wireless ad hoc networks,'' _ieee commun. mag._, vol. 49, no. 5, pp. 195-205, 2011.
d. ciullo, v. martina, m. garetto, and e. leonardi, ``impact of correlated mobility on delay-throughput performance in mobile ad hoc networks,'' _ieee/acm trans. netw._, vol. 19, no. 6, pp. 1745-1758, 2011.
j. gao, j. liu, x. jiang, o. takahashi, and n. shiratori, ``throughput capacity of manets with group-based scheduling and general transmission range,'' _ieice trans. commun._, vol. 96, no. 7, pp. 1791-1802, 2013.
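as a complement to the analysis, the rbp estimate described in section [section:numerical_results] (the fraction of slots in which a tagged node's relay buffer is full) can be reproduced with a short slot-level monte carlo sketch. the code below is not the authors' c++ simulator: it assumes a transmission range restricted to a node's own cell, approximates the group-based schedule by activating each cell independently with probability 1/alpha^2 per slot, and uses arbitrary illustrative parameter values.

import random
from collections import defaultdict

def simulate_rbp(n=50, m=4, B=5, lam=0.05, slots=100_000, alpha=3, seed=1):
    # slot-level monte carlo estimate of the relay-buffer blocking probability
    rng = random.Random(seed)
    dest = {i: i + 1 if i % 2 == 0 else i - 1 for i in range(n)}   # permutation traffic
    local = {i: 0 for i in range(n)}                 # local-queue lengths (own flow only)
    relay = {i: defaultdict(int) for i in range(n)}  # relay[i][d] = packets held for dest d
    full_slots = 0
    for _ in range(slots):
        # i.i.d. mobility: every node jumps to a uniformly random cell
        cell = {i: rng.randrange(m * m) for i in range(n)}
        # exogenous bernoulli arrivals at the local queues
        for i in range(n):
            if rng.random() < lam:
                local[i] += 1
        members = defaultdict(list)
        for i in range(n):
            members[cell[i]].append(i)
        for c, nodes in members.items():
            if rng.random() > 1.0 / alpha ** 2 or not nodes:
                continue                              # cell not active this slot
            sd = [(s, d) for s in nodes for d in nodes if d == dest[s] and local[s] > 0]
            if sd:
                s, _ = rng.choice(sd)
                local[s] -= 1                         # direct s-d delivery
                continue
            if len(nodes) < 2:
                continue
            tx = rng.choice(nodes)
            rx = rng.choice([x for x in nodes if x != tx])
            if rng.random() < 0.5:
                # s-r with handshake: cancelled if rx's relay buffer is full
                if local[tx] > 0 and sum(relay[rx].values()) < B:
                    local[tx] -= 1
                    relay[rx][dest[tx]] += 1
            else:
                # r-d: deliver one packet destined for rx, if tx holds any
                if relay[tx][rx] > 0:
                    relay[tx][rx] -= 1
        # tag node 0 and count the slots in which its relay buffer is full
        if sum(relay[0].values()) >= B:
            full_slots += 1
    return full_slots / slots

if __name__ == "__main__":
    print("estimated rbp:", simulate_rbp())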
|
although a large body of literature has been dedicated to the delay performance of two-hop relay (2hr) mobile ad hoc networks (manets), these works usually assume that the buffer size of each node is infinite, so they are not applicable to, and may not reflect, the real delay performance of a practical manet with limited buffer. to address this issue, in this paper we explore the packet end-to-end delay in a 2hr manet where each node is equipped with a bounded and shared relay buffer for storing and forwarding the packets of all other flows. the transmission range of each node can be adjusted, a group-based scheduling scheme is adopted to avoid interference between simultaneous transmissions, and a handshake mechanism is added to the 2hr routing algorithm to avoid packet loss. with the help of markov chain theory and queuing theory, we develop a new framework to fully characterize the packet delivery processes and obtain the relay-buffer blocking probability (rbp) under any given exogenous packet input rate. based on the rbp, we compute the packet queuing delay in the source node and the delivery delay, and further derive the end-to-end delay in such a manet with limited buffer. keywords: delay; mobile ad hoc networks (manets); limited buffer; queuing analysis
|
the search of the stable states of biomolecules , such as dna and proteins , by molecular simulations is important to understand their functions and stabilities .however , as the biomolecules have a lot of local minimum - energy states separated by high energy barriers , conventional molecular dynamics ( md ) and monte carlo ( mc ) simulations tend to get trapped in states of local minima . to overcome this difficulty , various sampling and optimization methods for conformations of biomoleculeshave been proposed such as generalized - ensemble algorithms which include the multicanonical algorithm ( muca ) , simulated tempering ( st ) and replica - exchange method ( rem ) .we have also proposed a conformational search method referred to as the parallel simulated annealing using genetic crossover ( psa / gac ) , which is a hybrid algorithm combining both simulated annealing ( sa ) and genetic algorithm ( ga ) . in this method ,parallel simulated annealing simulations are combined with genetic crossover , which is one of the operations of genetic algorithm .moreover , we proposed a method that combines parallel md simulations and genetic crossover with metropolis criterion . in this study , we applied this latest conformational search method using the genetic crossover to trp - cage mini protein , which has 20 residues .the operation of the genetic crossover is combined with the conventional md and rem .the obtained conformations during the simulation are in good agreement with the experimental results .this article is organized as follows . in section 2we explain the present methods . in section 3we present the results .section 4 is devoted to conclusions .we briefly describe our method .we first prepare initial conformations of the system in study , where is the total number of `` individuals '' in genetic algorithm and is usually taken to be an even integer .we then alternately perform the following two steps : 1 . for the individuals , regular canonical mc or md simulations at a fixed temperature carried out simulataneously and independently for a certain mc or md steps . pairs of conformations are selected from `` parental '' group randomly , and the crossover and selection operations are performed . here , the parental group means the latest conformations obtained in step 1 .if we employ mc simulations in step 1 above , we can refer the method to as parallel monte carlo using genetic crossover ( pmc / gac ) and if md simulations , parallel molecular dynamics using genetic crossover ( pmd / gac ) . in step 2, we can employ various kinds of genetic crossover operations . here , we just present a case of the two - point crossover ( see ref .the following procedure is carried out ( see fig . 
[ fig_crossover ] ) : + 1 .consecutive amino acids of length residues in the amino - acid sequence of the conformation are selected randomly for each pair of selected conformations .dihedral angles ( in only backbone or all dihedral angles ) in the selected amino acids are exchanged between the selected pair of conformations .note that the length of consecutive amino - acid residues can , in general , be different for each pair of selected conformations .we need to deal with the produced `` child '' conformations with care .because the produced conformations often have unnatural structures by the crossover operation , they have high potential energy and are unstable .therefore , a relaxation process is introduced before the selection operation .short simulations at the same temperature with restraints on the backbone dihedral angles of only the amino acids are performed so that the corresponding backbone structures of the amino acids will approach the exchanged backbone conformation .the initial conformations for these equilibration simulations are the ones before the exchanges .namely , by these equilibration simulations , the corresponding backbone conformations of the amino acids gradually transform from the ones before the exchanges to the ones after the exchanges .we then perform short equilibration simulations without the restraints .we select the last conformations in the equilibratoin simulations as `` child '' conformations . in the final stage in step 2, the selection operation is performed .we select a superior `` chromosome '' ( conformation ) from the parent - child pair . for this selection operation ,we employ metropolis criterion , which selects the new child conformation from the parent with the following probability : \ }\right ) , \label{eq1}\ ] ] where and stand for the potential energy of the parental conformation and the child conformation of the parent - child pair , respectively . is the inverse temperature , which is defined by ( is the boltzmann constant ) .the sampling method using genetic crossover in the previous subsection can be easily combined with other sampling methods such as generalized - ensemble algorithms .firstly , the conventional mc or md in step 1 above can be replaced by other sampling methods such as muca and st .secondly , the above method can be combined with rem in step 2 above . as an example, we introduce a method that combines genetic crossover and rem .we first prepare initial conformations of the system in study , where is the total number of `` individuals '' ( in genetic algorithm ) or replicas ( in rem ) and is usually taken to be an even integer . while only one temperature value was used in the previous method , we prepare different temperature values here . without loss of generality , we can assume that .we then alternately perform the following two steps : 1 . for the individuals , regular canonical mc or md simulations at the fixed temperature carried out simulataneously and independently for a certain mc or md steps . pairs of conformations at neighboring temperatures are selected from `` parental '' group , and one of the following two operations is performed . 
1. two-point genetic crossover is performed for each pair of parents to produce two children, and the new child conformations are accepted with the probability in eq. ([eq1]).
2. each pair of replicas $i$ and $j$ (with coordinates $q^{[i]}$ and $q^{[j]}$) corresponding to neighboring temperatures $T_i$ and $T_j$, respectively, is exchanged with the following probability: $$w = \min\left(1,\, e^{-\delta}\right), \qquad \delta = \left(\beta_i - \beta_j\right)\left(e(q^{[j]}) - e(q^{[i]})\right), \label{eq2}$$ where $\beta_i = 1/(k_{\rm b} T_i)$ is the inverse temperature.
a. mitsutake, y. sugita, and y. okamoto, _generalized-ensemble algorithms for molecular simulations of biopolymers_, biopolymers 60 (2001), pp. 96-123.
lyubartsev, a.a. martsinovski, s.v. shevkunov, and p.n. vorontsov-velyaminov, _protein structure predictions by parallel simulated annealing molecular dynamics using genetic crossover_, j. chem. phys. 96 (1992), pp. 1776-1783.
t. hiroyasu, m. miki, m. ogura, k. aoi, t. yoshida, and y. okamoto, _atom-level simulations of protein folding_, proceedings of the 7th world multiconference on systemics, cybernetics and informatics (sci 2003) (2003), pp. 117-122.
y. sakae, t. hiroyasu, m. miki, k. ishii, and y. okamoto, _a conformational search method for protein systems using genetic crossover and metropolis criterion_, j. phys.: conf. series 487 (2014), p. 012003.
v. hornak, r. abel, a. okur, b. strockbine, a. roitberg, and c. simmerling, _comparison of multiple amber force fields and development of improved protein backbone parameters_, proteins 65 (2006), pp. 712-725.
d. qiu, p.s. shenkin, f.p. hollinger, and w.c. still, _the gb/sa continuum model for solvation. a fast analytical method for the calculation of approximate born radii_, j. phys. chem. a 101 (1997), pp. 3005-3014.
a. mitsutake, y. sugita, and y. okamoto, _replica-exchange multicanonical and multicanonical replica-exchange monte carlo simulations of peptides. ii. application to a more complex system_, j. chem. (2003), pp.
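returning to the two operations above (two-point crossover with metropolis acceptance, and exchange of neighboring-temperature replicas), a compact python sketch of the bookkeeping is given below. the energy function is a toy stand-in, and the restrained relaxation simulations described in the text are omitted, so this illustrates the selection and exchange logic rather than the full md protocol.

import math
import random

def two_point_crossover(phi_a, phi_b, seg_len, rng):
    # exchange a randomly placed stretch of seg_len consecutive residues
    # between the dihedral-angle arrays of two parent conformations
    n = len(phi_a)
    start = rng.randrange(0, n - seg_len + 1)
    child_a, child_b = phi_a[:], phi_b[:]
    child_a[start:start + seg_len] = phi_b[start:start + seg_len]
    child_b[start:start + seg_len] = phi_a[start:start + seg_len]
    return child_a, child_b

def metropolis_accept(e_parent, e_child, beta, rng):
    # eq. (eq1): accept the child with probability min(1, exp(-beta (e_child - e_parent)))
    return rng.random() < min(1.0, math.exp(-beta * (e_child - e_parent)))

def replica_exchange(conformations, energies, betas, rng):
    # eq. (eq2): attempt swaps between neighboring temperatures
    for i in range(len(betas) - 1):
        delta = (betas[i] - betas[i + 1]) * (energies[i + 1] - energies[i])
        if rng.random() < min(1.0, math.exp(-delta)):
            conformations[i], conformations[i + 1] = conformations[i + 1], conformations[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]

# toy energy favoring dihedrals near -60 degrees (purely illustrative)
def energy(phi):
    return sum((p + 60.0) ** 2 for p in phi) / len(phi)

a caller would supply rng = random.Random(seed) and the dihedral arrays produced by the parallel md runs; in a real application the energies would of course come from the force field, not from the toy function above.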
|
we combined genetic crossover, which is one of the operations of a genetic algorithm, with the replica-exchange method in parallel molecular dynamics simulations. the combination of genetic crossover and replica exchange can search the global conformational space by exchanging the corresponding parts between a pair of conformations of a protein. in this study we applied this method to an α-helical protein, the trp-cage mini protein, which has 20 amino-acid residues. the conformations obtained from the simulations are in good agreement with the experimental results.
|
creating exact copies of unknown quantum states chosen from a non - orthogonal set is impossible , due to the no - cloning theorem .however , it is still possible to make approximate copies of quantum states .the best achievable quality of the copy depends on the dimensionality of the state hilbert space , as well as the distribution of states picked from that space .for example , for a uniform distribution of states picked from a qubit space , the best average overlap ( fidelity ) of the clones with the original is . for a flat distribution over an infinite dimensional spacethe limit is .this paper investigates the experimentally relevant situation , in continuous variables , where the distribution of input states to the cloner is a finite distribution , picked from an infinite hilbert space .in contrast to its counterpart in the single particle regime , cloning of continuous variables has only been investigated over the last few years .gaussian cloning machines are of immediate interest for continuous variables as they represent the optimal way to clone a wide class of experimentally accessible states ; the gaussian states , including coherent and squeezed states .they are so called because they add gaussian distributioned noise in the cloning process .we derive quantum cloning limits for finite distributions of coherent states , and we investigate a method to tailor the standard implementation using a linear amplifier to take advantage of the known input state distribution . we also describe the qualitatively different quantum cloning limits for coherent states with a distribution in the magnitude of their amplitudes but with known phase ; we will refer to this as states `` on a line '' .we also show that a gaussian quantum cloner utilising an optical parametric oscillator , as opposed to a linear amplifier , is the optimum approach in this case .the paper is arranged as follows : we begin in the next section by reviewing the standard cloning limit for coherent states . in sec .[ sec : restrictedgauss ] we examine the cloning of finite width gaussian distributions of coherent states and comment on the connection between this and optimal state estimation . as well we investigate how the `` no - cloning limit '' in teleportation is modified for finite distributions . in sec .[ sec : singlequadcloner ] we consider the case of cloning coherent states on a line , and we conclude in sec .[ sec : concl ] .an optimal ( gaussian ) cloner for coherent states can be constructed from a linear optical amplifier and a beam splitter , as shown in fig .. enters the amplifier amp and is amplified according to the gain .this field is then incident onto a 50:50 beam splitter , the output from which forms the two `` clones '' .vacuum noise is added at both the amplifier and the beam splitter . ]the input field can be described by the annihilation operator and the initial coherent state , where is a complex number representing the coherent amplitude of the state . 
the heisenberg evolution introduced by the linear amplifier transforms this input field into the output field , .the field is then divided on a 50:50 beamsplitter .the output modes are then given by since both modes have the same amplitude and noise statistics , we need only consider the quadrature amplitudes and variance of mode .the quadrature amplitudes and are assuming the input field is in a coherent state , the amplitude and phase variances ( and respectively ) are given by the standard criterion for determining the efficacy of a given cloning scheme is the fidelity of the input state with each of the clones .the fidelity quantifies the overlap of the input state with the clone . in its simplest form , for two pure states ,the fidelity is the modulus squared of the inner product of the two states .when the input is a coherent state , the fidelity is given by the expression where , is the amplitude gain of the coherent amplitude of the clones ( ) with respect to the coherent amplitude of the input state ( ) .unit gain ( ) is the best cloning strategy when the input state is completely arbitrary .this is because the exponential dependence of the fidelity on gain will dominate and lead to low fidelities for large unless the gain is exactly one . with unity gain the fidelity becomes independent of the input state , and is thus only a function of the output variances . picking unit gain by setting and substituting eq .( [ amp ] ) into eq .( [ eq : fidelitysimplified ] ) gives an average fidelity ( defined by ) of . since the fidelity does not depend on the amplitude of the input state at unity gain we have , hence .the optimality of this result was proved by cerf _ et al . _ by considering the generalized uncertainty principle for measurements .when applied to coherent states this principle requires that in any symmetric , simultaneous measurement of the two quadrature amplitudes , sufficient noise is added such that the signal to noise of the two measurement results is reduced by at least a half over what would be obtained by an ideal measurement of one or other of the quadratures .this result implies that the minimum amount of noise that can be added in the cloning process is just enough so that the signal to noise of the quadratures of the clones ( as would be found in an ideal single quadrature measurement ) is reduced to precisely one half of that of the original state .this is just sufficient to prevent the generalized uncertainty principle being violated by performing an ideal measurement of , say , the amplitude quadrature of clone 1 and the phase quadrature of clone 2 .using eq .( [ xpm ] ) it is easy to show that the signal to noise ratios of the quadratures of each clone are equal and given by . to be more explicit ,the input field can be written as . 
in this representationthe initial state is now the vacuum ( giving rise to quantum noise ) and the coherent amplitude , ( now included explicitly in the heisenberg evolution ) , is considered the signal .the output mode can now be written as the signal to noise transfer ratio ( ) of either quadrature of either clone is thus each clone has the minimum noise added to it allowed by quantum mechanics , and thus is optimal .so far we have assumed ( as in all previous discussions of continuous variable cloners ) that the input state distribution is uniform over all quadrature - phase space ; the probability of seeing a given state at the input of the cloner is the same for _ all _ states .however , this implies an infinite distribution which , for practical reasons , is not the case experimentally .therefore , in general , one has some knowledge about the input state distribution .we now consider how this information can be used to improve the output fidelity of the cloner by tailoring the gain to the input state distribution .let us consider a two - dimensional gaussian distributed coherent input state distribution with mean zero and variance : where and are the real and imaginary parts respectively of the input coherent state .such a distribution is optimal for encoding information and is experimentally accessible . using this distribution, we can find the average fidelity by integrating the fidelity for a given state [ eq . ( [ eq : fidelitysimplified ] ) ] weighted by the probability of obtaining that state , , over all .this is described mathematically by we maximise this fidelity over the gain of the amplifier , to obtain as a function only of the variance of the input state distribution . knowing the distribution of input states can now allow us to choose an appropriate amplifier gain to maximise the cloning fidelity . since the minimum value of is 1, is a piecewise continuous function of ; the two pieces of the function being joined at .the average fidelity is given by since we maximise the average fidelity over the gain , is implicitly a function of , and is given by when , and by otherwise .the average fidelity maximised over the amplifier gain is shown as a function of in fig .[ fig : fbarsigma ] .notice that at large , is at the standard cloning limit . in other words , for sufficiently broad input state distributions , the situation is equivalent to having a completely arbitrary input state . as decreases , the fidelity increases , because we now have better knowledge of the likely value of the input state ; approaching unit fidelity as tends to zero .this is an intuitive result , since if then the input state distribution is a two - dimensional delta function in quadrature - phase space and we know with certainty the value of the input state ( i.e. it is the vacuum ) prior to cloning .notice that in eq .( [ t ] ) the minimum allowed noise added to the clones does not depend on the gain of the amplifier .our procedure of tuning the gain to find the maximum fidelity has retained this property and our clones are therefore optimal .we now consider the connection between cloning and optimal state estimation .dual homodyne detection ( or equivalently heterodyne detection ) is known to be the optimal technique for estimating the amplitude of an unknown coherent state drawn from a gaussian ensemble . 
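the explicit gain and fidelity expressions are not reproduced here, so the following numerical sketch only illustrates the logic of the optimisation: average the single-state fidelity over the gaussian input distribution and maximise over the amplitude gain of the clones. it assumes that each clone is a gaussian state with mean amplitude g times the input and symmetric quadrature variance max(g^2, 1/2), in units where the vacuum variance is 1/2, and that sigma^2 is the variance of each quadrature component of the input distribution; these conventions are our own assumptions, chosen so that the quoted limits (fidelity 2/3 for broad distributions, 1 for vanishing width) are reproduced.

import numpy as np

def clone_variance(g):
    # assumed noise model for the amplifier + beam-splitter cloner (vacuum variance = 1/2):
    # quadrature variance of each clone at amplitude gain g, floored at the vacuum level
    return max(g * g, 0.5)

def average_fidelity(g, sigma):
    # gaussian-weighted average of the single-state fidelity
    #   f(alpha) = 1/(v + 1/2) * exp(-(1-g)^2 |alpha|^2 / (v + 1/2));
    # the alpha-integral over the gaussian distribution is done analytically
    v = clone_variance(g)
    c = (1.0 - g) ** 2 / (v + 0.5)
    return (1.0 / (v + 0.5)) / (1.0 + 2.0 * c * sigma ** 2)

def optimal_gain_and_fidelity(sigma, gains=np.linspace(0.0, 1.2, 2401)):
    f = np.array([average_fidelity(g, sigma) for g in gains])
    k = int(np.argmax(f))
    return gains[k], f[k]

if __name__ == "__main__":
    # broad distributions should approach the standard 2/3 limit,
    # narrow ones should approach unit fidelity
    for sigma in (0.01, 0.5, 1.0, 2.0, 10.0):
        g_opt, f_opt = optimal_gain_and_fidelity(sigma)
        print(f"sigma = {sigma:5.2f} -> optimal gain = {g_opt:.3f}, average fidelity = {f_opt:.3f}")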
for our setup, dual homodyne detection of the input state would correspond to setting the amplifier gain to , and detecting the amplitude quadrature of one of the output beams and the phase quadrature of the other .given that we know the standard deviation of the input state distribution , it can be shown that the best estimate of the amplitude of the input state is given by where and are the measured values of the amplitude and phase quadratures respectively . in the limit of broad distributions ( )the best estimate is just given by however , as the distribution narrows it is better to underestimate the value of in accordance with eq .( [ est ] ) . in the limit of become certain that is zero regardless of the measurement outcome .we have observed that the signal to noise transfer between the original state and the clones is not changed by the choice of amplifier gain , thus optimal state estimation must be possible using the clones .some insight into the physics of the particular choice of amplifier gain which produces the optimal clones can be obtained by noticing that for optimal clones , the best estimation of the original state amplitude is determined by measuring the amplitude quadrature of one , and the phase quadrature of the other , and then setting this is true for all distribution sizes down to the point where the cloning amplifier gain , , equals one . for even smaller distributionswe return to the dual homodyne formula . at such smalldistribution sizes the quantum noise dominates .teleportation is the entanglement assisted communication of quantum states through a classical channel .teleportation of continuous variables can be achieved using entanglement of the form which describes a two - mode squeezed vacuum .the strength of the entanglement is characterised by the parameter which is related to the squeezing , or noise reduction , of the quadrature variable correlations of the modes .zero entanglement is characterised by , and maximum entanglement occurs when .various ways to characterise the quality of the teleportation process have been proposed , and have been used to describe experimental demonstrations . in terms of fidelitytwo distinct bounds have been identified for the case of an infinite input distribution of unknown coherent states .the classical limit is .fidelities higher than this value can not be achieved in the absence of entanglement , i.e. when .the no - cloning limit on the other hand requires that the teleported version of the input state is demonstrably superior to that which could be possessed by anyone else . thisis not guaranteed unless the teleported state has an average fidelity .achieving this requires a particular quality of entanglement , , or more than squeezing .we now investigate how this no - cloning limit changes as a function of the distribution of the input states . in a previous paper two of us ( ptc and tcr ) numerically optimised the average fidelity of continuous variable teleportation of a finite gaussian distribution of coherent input states for various levels of entanglement . an analytical expression for this optimised average fidelityis given by where , as before , is the standard deviation of the input state distribution . 
using this result andthat derived earlier for the optimum cloning fidelity , [ eq .( [ eq : ovavfid ] ) ] , we can find the squeezing ( ) required for the teleportation fidelity [ eq .( [ tele ] ) ] to equal the no - cloning limit as a function of the distribution width ( ) .this is given by the result is plotted in fig .[ fig : noclone ] . the maximum squeezing parameter value achieved at .this is the minimum amount of squeezing required for teleportation to beat the no - cloning limit for all values of . below this valueit is possible for the teleportation fidelity to be lower than the no - cloning limit and for teleported states not to be superior to that possessed by another party for some values of .this is demonstrated in fig .[ fig : telefidandcloninglimit ] where .the dashed curve is the no - cloning limit fidelity and the solid curve is the teleportation fidelity as a function of for constant .notice that for the teleportation fidelity drops just below the no - cloning limit .only with can one be sure that one will beat the no - cloning limit .notice that in the limit of large the teleportation fidelity is higher than the no - cloning limit and that at the fidelity equals the no - cloning limit .the lowest constant value can take and still equal the no - cloning limit at both and is , corresponding to the quality of entanglement mentioned above . at lower squeezing parameter valuesthe teleportation fidelity does not achieve the no - cloning limit except for the trivial case of .a somewhat surprising feature is that the quality of entanglement required to reach the no - cloning limit actually increases as the width of the distribution is decreased .this occurs because of the different ways in which the teleporter and the cloner add noise at unity gain .for example for a distribution with a standard deviation , an entanglement of is required , significantly higher than the level of entanglement needed for an infinite distribution .[ fig : noisered ] shows the same plot as fig .[ fig : noclone ] but using the more experimentally familiar parameters of noise reduction ( or squeezing ) of the entanglement , and the variance ( or noise power ) of the distribution both plotted in decibels .these graphs show that the issue of the no - cloning limit for teleportation is rather subtle when the realistic situation of finite distributions of input states is taken into account .we now consider cloning of a rather different distribution of coherent states ; one in which all the coherent amplitudes have the same phase , but have a broad distribution in the absolute value of their amplitudes .effectively the signals are encoded on only one quadrature . in sec .[ sec : stdclonelimit ] we discussed the optimum cloning limit in terms of a restriction imposed by the uncertainty principle between the two conjugate parameters being copied . for information on a single quadrature , where only one of the conjugate observables is being copied, it could be easy to reach the nave conclusion that there will not be such a cloning limit .however , quantum mechanics still places a restriction on the fidelity of such clones because coherent states , even when restricted to a line in phase space , are non - orthogonal . if the input states carry information on one quadrature only , a different cloning process is required .a new cloning limit , with a much higher fidelity emerges in such a scenario and is now discussed .consider the single quadrature gaussian cloner shown in fig .[ opog ] . is the opo gain . 
]it consists of a phase sensitive amplifier , an optical parametric oscillator ( opo ) set to amplify in the real direction , followed by a 50:50 beam splitter also made phase sensitive by the injection of squeezed vacuum noise at the dark port , .the output of the opo is given by where is the parametric gain .after passing through the beam splitter , the variances of the output quadratures are : suppose that the input distribution is now described by the non - symmetric gaussian distribution we assume , restricting the coherent states to the real axis .for simplicity we assume that the distribution `` along the line '' is sufficiently broad , , such that fidelity will be optimized by unity gain operation .unity gain is achieved by setting the gain to .a minimum uncertainty state is assumed for the squeezed input noise , i.e .this gives a fidelity of : which reaches a global maximum when the beam splitter input phase is quadrature squeezed such that , ( so ) .the maximum fidelity of the clone is then .an equivalent result can be achieved by not injecting squeezed vacuum at the bs but instead inserting two independent opos in each beam splitter output arm .the of the individual amplitude quadrature clones is found to be : with optimised fidelity at , the above expression reduces to 0.6125 .this is a greater value than the single quadrature average snr and lies outside the classical regime . unlike the symmetric case discussed in sec .[ sec : stdclonelimit ] , the single quadrature clones are entangled .the snr of the summed amplitude quadratures of the clones , , is independent of the vacuum squeezing parameter and gives .this indicates that overall no noise has been added in the cloning process .we also note that the noise outputs of the phase and amplitude quadratures are very close to the minimum uncertainty product .it seems likely that this is the optimum fidelity attainable for coherent states on a line .the approach is analogous to the standard cloner setup , and it is hard to imagine how the phase sensitive amplification and phase sensitive beam splitter combination could be improved upon given that overall no noise is added .however , the argument is not as straightforward as for a symmetric cloner because the maximization of the fidelity is non - trivial , depending upon the phase and strength of the squeezing injected at the beam splitter .an extensive search of the parameter space revealed no better result , and we conjecture that the fidelity is optimal .we have shown how to tailor the gaussian quantum cloner to optimally clone unknown coherent states picked from finite symmetric gaussian distributions .operating the cloner at a particular level below unity gain maximises the cloning fidelity for such distributions .this maximum fidelity increases monotonically as a function of the distribution width from in the limit of very broad distributions , to in the limit of very narrow distributions .we discussed the relationship between this optimal gain and state estimation , and have shown that the no - cloning limit for teleportation of coherent states changes in a non - trivial way as a function of the width of the input state distribution .the authors would like to thank p. k. lam for helpful discussions .this work was supported by the australian research council .the diagrams in this paper were produced with pyscript . `http://pyscript.sourceforge.net ` .
|
we derive optimal cloning limits for finite gaussian distributions of coherent states , and describe techniques for achieving them . we discuss the relation of these limits to state estimation and the no - cloning limit in teleportation . a qualitatively different cloning limit is derived for a single - quadrature gaussian quantum cloner .
|
we study first the case in which particles jump independently from state to and vice - versa , schematically : with rates that depend on the value of the heterogeneity parameter , .the probability for particle to be in state at time obeys the linear rate equation . in the case of constant rates ,the solution is : , with .the results derived below apply equally if the rates depend on time or on the time that the particle has been in its current state ( if the rate depends on the time that the particle has been on its current state , the steady - state probability of finding the particle at state is with ) . using particle independence and that the moments with respect to realizations of the stochastic process of the random variable are given by , one obtains that the average and variance of the global variable are : &=&\sum_{i=1}^n \left(p_i(t)-p_i(t)^2\right)= n\left(\overline{p(t)}-\overline{p(t)^2}\right),\label{varparticular}\end{aligned}\ ] ] where the overline denotes an average over the population , .if we consider a system where all particles are identical ( i.e. have the same values for the internal parameter ) , and keep the same average value for the global variable at time , the variance would be =n\overline{p(t)}\left(1-\overline{p(t)}\right)\ge \sigma^2[n(t)] ] , with , being the sample mean and variance equal to , =0.23 ] are themselves random variables that , as shown above , depend on the particular realization of the s .the expected values of these quantities are obtained by averaging eqs.([medparticular],[varparticular ] ) over the distribution of the individual parameters : }=n\left(\widehat{p(t)}-\widehat{p(t)^2}\right)\label{varmed},\ ] ] where the hat denotes an average with respect to , .again the variance is smaller than for a system of identical particles with the same mean value , namely , -\widehat{\sigma^2[n(t)]}=n\left(\widehat{p(t)^2}-\widehat{p(t)}^2\right) ] , the variances are taken over the distribution of the s .if we are considering a particular system , the temporal fluctuations ( all the systems considered in this paper are ergodic , so we can think on averages over time or over the realization of the stochastic process interchangeably ) in will come only from the intrinsic stochasticity , and expressions ( [ varparticular],[varmed ] ) are the ones that measure it .expressions ( [ averagedistribindep],[vartotal ] ) are appropriate only if we are considering an ensemble of systems with a distribution of parameters and our different measurements may come from different systems in the ensemble .let us now consider a general system of interacting heterogeneous particles .the stochastic description now starts from a master equation for the -particle probability distribution : \nonumber\\ & & + \sum_{i=1}^{n}(e_i^{-1}-1)\left[(1-s_i)r_i^{+}p(s_1,\dots , s_n)\right]\label{mastereqgen},\end{aligned}\ ] ] with step operators defined as .the transition rates might now depend on the state of any other particle ( this is how interactions enter in the model ) . from eq.([mastereqgen ] ) one can derive for the moments and correlations : .\label{correlationsgeneral}\end{aligned}\ ] ] with and in the second equation ( recall that ) . in general ,if the transition rates depend on the state variables , these equations are not closed since they involve higher order moments , and some approximation method is needed to proceed .systematic expansions in , including van kampen s -expansion , are not applicable , since variables are not extensive . 
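a quick numerical check of the independent-particle result derived above: draw heterogeneous rates, compute the stationary occupation probabilities, and compare the variance of the global variable with that of a homogeneous population having the same mean occupation. the gamma-distributed rates below are an arbitrary illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# heterogeneous rates: the 0 -> 1 rate drawn from a gamma distribution, the 1 -> 0 rate fixed
r_plus = rng.gamma(shape=2.0, scale=0.5, size=n)
r_minus = np.ones(n)
p = r_plus / (r_plus + r_minus)          # stationary occupation probability of each unit

var_het = np.sum(p * (1.0 - p))          # n * (mean(p) - mean(p^2))
var_hom = n * p.mean() * (1.0 - p.mean())
print("analytical:  heterogeneous", var_het, " homogeneous", var_hom)

# monte carlo check: sample independent stationary states and measure the variance of n
samples = (rng.random((5_000, n)) < p).sum(axis=1)
print("monte carlo: heterogeneous", samples.var())

the monte carlo estimate should agree with the heterogeneous analytical value and lie below the homogeneous one, as stated in the text.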
in the following ,we introduce an approximation suitable for the analytical treatment of systems of globally coupled heterogeneous particles .we assume that the -particle correlations with scale with system size as using this ansatz one can close the system of equations ( [ momentsgeneral],[correlationsgeneral ] ) for the mean values and the correlations .this is shown in the supplementary information for general transition rates of the form .while the resulting equations for the average values coincide with the mean - field rate equations usually formulated in a phenomenological way , our formulation allows us to compute the correlations and include , if needed , higher order corrections in a systematic way .assumption ( [ ansatz ] ) can be justified noting that it is consistent with which follows from van kampen s splitting of the global variable , with deterministic and stochastic .details are given in the supplementary information .the global variable is extensive and it is expected to follow van kampen s ansatz in many cases of interest .note , however , that since there is not a closed description for the macroscopic variable , one can not use van kampen s expansion , and our approach extends the implications of this splitting of the macroscopic variable to the correlations of the microscopic state variables . for simplicity ,we have focused on -states systems and assumed a constant number of particles . systems with statesare also expected to follow ansatz ( [ ansatz ] ) , since the scaling of the global variable is not limited to -sates systems .the case of variable , but bounded , number of particles can be included straightforwardly by considering an extra state . the unbounded case can also be considered performing an appropriate limit . if the system has some spatial structure , the ansatz ( [ ansatz ] ) is not expected to be valid , and some decay of the correlations with the distance is expected instead ; this interesting situation is left for future work .we will proceed by applying the presented method to analyze the role of heterogeneity in two models previously considered in the literature that apply to contexts in which the assumption of identical agents can hardly be justified : stock markets and disease spreading .we will focus on the steady - state properties of both models , skipping transient dynamics .kirman s model was proposed to study herding behavior in the context of stock markets and collective dynamics on ant colonies . in the stock market context, agent can be in two possible states ( e.g. ``pessimistic '' -with regard to future market price- and ``optimistic '' ) and it can switch from one to the other through two mechanisms : spontaneous transitions at a rate , and induced transitions at a rate , being the influence " of agent on other agents .the case corresponds to the voter model . in the original formulation ,all agents have the same influence , i.e. .we generalize the model allowing the parameter to vary between agents .in , the effect of heterogeneity was explored numerically , but not in a systematic way .this model is interesting for us because it incorporates in a simple way two basic processes : spontaneous transitions and induced transitions . 
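the model is also simple enough for a direct stochastic simulation; the sketch below uses a gillespie-type update with heterogeneous influence parameters and measures the time-averaged mean and variance of the number of agents in state 1. the 1/n normalisation of the induced rates and all parameter values are assumptions made for illustration, and this is not the code used for the figures.

import numpy as np

def kirman_gillespie(lam, eps=0.01, t_max=2e4, burn_in=2e3, seed=1):
    # continuous-time simulation of the herding model with heterogeneous
    # influences lam[i]; induced rates carry an assumed 1/n normalization
    rng = np.random.default_rng(seed)
    n = len(lam)
    s = rng.integers(0, 2, size=n)               # states of the agents
    t, n_sum, n2_sum, t_acc = 0.0, 0.0, 0.0, 0.0
    while t < t_max:
        ones = np.flatnonzero(s == 1)
        zeros = np.flatnonzero(s == 0)
        l1, l0 = lam[ones].sum(), lam[zeros].sum()
        rate_0to1 = len(zeros) * (eps + l1 / n)  # total rate of 0 -> 1 flips
        rate_1to0 = len(ones) * (eps + l0 / n)   # total rate of 1 -> 0 flips
        total = rate_0to1 + rate_1to0
        dt = rng.exponential(1.0 / total)
        if t > burn_in:                          # time-weighted moments of n_1
            n1 = len(ones)
            n_sum += n1 * dt
            n2_sum += n1 * n1 * dt
            t_acc += dt
        t += dt
        if rng.random() < rate_0to1 / total:
            s[rng.choice(zeros)] = 1
        else:
            s[rng.choice(ones)] = 0
    mean = n_sum / t_acc
    return mean, n2_sum / t_acc - mean ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 100
    lam_het = rng.gamma(shape=1.0, scale=0.5, size=n)   # heterogeneous influences
    lam_hom = np.full(n, lam_het.mean())                # homogeneous reference
    print("heterogeneous (mean, variance):", kirman_gillespie(lam_het))
    print("homogeneous   (mean, variance):", kirman_gillespie(lam_hom))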
as we will see , due to its simplicity , a full analytical treatment is possible that will , in turn , allow us to obtain a deeper insight into the general effect of heterogeneity in systems of interacting particles .the master equation for the process is of the form ( [ mastereqgen ] ) , with rates given by : from ( [ momentsgeneral ] ) the averages and correlations obey : for and .note that , due to the particular form of the rates , these equations do not involve higher - order moments .this is a simplifying feature of this model that allows one to obtain exact expressions .the first equation leads to a steady state value ( a property that comes from the symmetry ) . using the relation =\sum_{i , j}\sigma_{i , j} ] , with =\overline { \lambda^2}-\overline\lambda^2 ] alone does not allow to infer the degree of heterogeneity present in the system , unless one knows from other sources and and it is not possible to conclude whether the observed fluctuations have a contribution due to the heterogeneity of the agents .however , the steady - state correlation function (t)\equiv\langle n(t)n(0)\rangle_\textrm{st}-\langle n\rangle_\textrm{st}^2 ] is obtained integrating eq.([kirmanaverage ] ) and performing the appropriate conditional averages ( see supplementary information ) : (t)=\left(\sigma^2_\text{st}[n]-u\right)e^{-(2\epsilon+\overline{\lambda})t}+ue^{-2\epsilon t } , \label{kirmancorrfunc}\ ] ] with -n/4) ] ) .fitting this expression to data one can obtain ] . in fig.[kirmancorrf ] we show that the numerical simulations indeed support the existence of two exponential decays for the correlation function . ) , solid lines ) .note that when heterogeneity is present ( ) the correlation function departs from purely exponential decay ( displayed as a dashed line ) .data for have been moved up units vertically for better visualization .parameters values are , . are independent random variable distributed according to a gamma with mean and variance , , indicated in the figure. a simple fit of expression ( [ kirmancorrfunc ] ) to the data gives .[ kirmancorrf ] ] interestingly , other ways to introduce heterogeneity in the system have different effects :- if the heterogeneity is introduced in the spontaneous transition rate , , making some particles more prone to spontaneous transitions that others ( but keeping , to isolate effects ) , collective fluctuations again increase with respect to the case of identical particles .-next , we can assume that the rate of induced change is different for different agents , even if all have the same influence .measuring this difference in susceptibility " ( to induced change ) with a parameter , we would have that the rate of induced change in agent is .the effect of heterogeneity in ( keeping again , ) is that the collective fluctuations decrease with the degree of heterogeneity in the susceptibility .-setting some heterogeneous preference for the states among the particles , i.e. making , the spontaneous rate from to of particle , different from , the spontaneous rate from to of the same particle , decreases global fluctuations . in order to vary the preference for one state keeping constant the global intrinsic noise " of this particle ( note that the correlation time of particle , when isolated , is given by ) , we set and generate as i.i.d .random variables with a distribution with support contained in the interval ] to any desired order in . 
in this case , however , the expressions are rather cumbersome and we skip them here .the results are plotted in figure ( [ sismed ] ) , where we compare the approximation to order with results coming from numerical simulations , showing good agreement . hereboth the average value and the variance are modified by the presence of heterogeneity ( the dependence of the average is , however , only in second order in , almost unnoticeable in the figure ) . as in the kirman model ,the size of the fluctuations increase markedly with the amount of heterogeneity in the `` influence '' ( now influence to infecting others ) of the agents . , , , .[ sismed ] ] in this case , other ways to introduce heterogeneity also have different effects .when heterogeneity appears in the recovery rate , the mean number of infected agent increases , with a moderate effect over the variance ( resulting in smaller relative fluctuations ) .heterogeneity in the susceptibility to infection ( which would be introduced with the change , with distributed over the population ) decreases the fluctuations , with little effect over the mean value .heterogeneity in the spontaneous infection rate has almost no effect . in a real situation ,one expects to find heterogeneity simultaneously in several of the parameters defining the model . when heterogeneity is present both in the infectivity and in the susceptibility , the effects of both types of heterogeneity essentially add up , with the size of the fluctuations increasing with the heterogeneity in the infectivity for a given level of heterogeneity in the susceptibility and fluctuations decreasing with the level of heterogeneity in the susceptibility for a given level of heterogeneity in the infectivity .the effects of heterogeneity in the infectivity and in the susceptibility are equivalent to those found in the kirman model , and can be intuitively understood in the same terms .heterogeneity in the recovery rate is similar to assigning an heterogeneous preference for the state ( recovery ) and its effect in the ( relative ) fluctuations is again the same as that in the case of the kirman model .this suggests that the effects of the heterogeneity found are generic and can be useful to understand the behavior of other systems .in this work , we have analyzed the combined effect of stochasticity and heterogeneity in interacting - particle systems .we have presented a formulation of the problem in terms of master equations for the individual units , but extracted conclusions about the fluctuations of collective variables .we have developed an approximation suitable for the analytical study of this general type of systems .we have shown that the heterogeneity can have an ambivalent effect on the fluctuations , enhancing or decreasing them depending on the form of the system and the way heterogeneity is introduced . in the case of independent particles , heterogeneity in the parameters always decreases the size of the global fluctuations .we have also demonstrated that it is possible to obtain precise information about the degree and the form of the heterogeneity present in the system by measuring only global variables and their fluctuations , provided that the underlying dynamical equations are known . 
in this way stochastic modeling allows to obtain information not accessible from a purely deterministic approach .we have also demonstrated that , in some cases , one can account for the heterogeneity of the particles without losing analytical tractability .heterogeneity among the constituent units of a system is a very generic feature , present in many different contexts and this work provides a framework for the systematic study of the effect of heterogeneity in stochastic systems , having thus a wide range of potential applicability . more research in this directionwould be welcomed .we have developed and used analytical tools based on an extension of van kampen s ansatz on the relative weight of the fluctuations compared to the mean value , suitable for systems with particle heterogeneity .we have included the details of this method in the supplementary information . in some cases , and in order to compare with the analytical expressions, we have generated data from numerical simulations using a particular form of gillespie s algorithm that takes into account the heterogeneity in the population .we now explain this algorithm using , for the sake of concreteness , the specific case of the kirman model with distributed susceptibility .the parameters of the system are : the spontaneous transition rate , the influence parameter , the susceptibility parameter of each agent , and the total number of agents . in this case, the influence parameter can be reabsorbed rescaling , so we set without loss of generality. the variables of the system are the state of each agent .we will also use the total number of agents in state , , the total susceptibility of agents in state , , and the average susceptibility . atany given instant , two events can happen:(i ) an agent in state changes to state .this can happen due to a spontaneous transition , at a total rate , or due to an induced transition , at a total rate .(i ) an agent in state changes to state .this can happen due to a spontaneous transition , at a total rate , or due to an induced transition , at a total rate . according to the gillespie method , that considers the continuous - time process , the time at which the next transition will take place is exponentially distributed , with average the inverse of the total rate .the probability that a given transition is realized is proportional to its rate .if the realized transition is a spontaneous one , the agent that actually undergoes it is selected at random ( since , in this case , they all have the same rate ) .if the transition is induced , the agent that undergoes it is selected with probability proportional to its susceptibility .it can be easily seen that this principles lead to an exact ( up to numerical precision ) simulation of sample paths of the stochastic process .the algorithm , then , proceeds as follows : + ( 0 ) evaluate the total number of particles in state , , and the total susceptibility of particles in state , .(1 ) evaluate the total transition rate .(2 ) generate the time for the next reaction , , as an exponential random variable with average . 
this can be done by setting , with a uniform random variable in the range .(3 ) select which reaction takes place .for this , generate a uniform random variable , , in the range .+ -if the transition will be a spontaneous transition from to ; select an agent , , at random among those at state .set , .-if , the transition will be a spontaneous transition form to ; select an agent , , at random among those at state .set , .-if , the transition will be an induced transition from to ; select an agent , , among those at state with probability proportional to the value of its susceptibility parameter . set , .-if , the transition an induced transition from to ; select an agent , , among those at state with probability proportional to the value of its susceptibility parameter . set , .(4 ) set .go to ( 1 ) .we thank e. hernandez - garcia for useful discussions .this work was supported by mineco ( spain ) , comunitat autnoma de les illes balears , feder , and the european commission under project fis2007 - 60327 .is supported by the jaepredoc program of csic .spudich , j. l. , koshland , jr .d. e. non - genetic individuality : chance in the single cell ._ nature _ * 262 , * 467 - 471 ( 1976 ) .snijder , b. , pelkmans , l. origins of regulated cell - to - cell variability .cell biol _ * 12 , * 119 - 125 ( 2011 ) .granovetter , m. threshold models of collective behavior .j. sociol ._ * 83 , * 1420 - 1443 ( 1978 ) .kirman , a. whom or what does the representative individual represent ?. perspect _ * 6 , * 117 - 136 ( 1992 ) .braiman , y. , kennedy , t. a. b. , wiesenfeld , k. and khibnik , a. entrainment of solid - state laser arrays .a _ * 52 , * 1500 - 1506 ( 1995 ) .oliva , r. a. and strogatz , s. h. dynamics of a large array of globally coupled lasers with distributed frequencies .. chaos _ * 11 , * 2359 - 2374 ( 2001 ) .albert , r. and barabsi , a .-statistical mechanics of complex networks . _ rev . mod .phys . _ * 74 , * 47 - 97 ( 2002 ) .boccaletti , s. , latora , v. , moreno , y. , chavez , m. , hwang , d .- u .complex networks : structure and dynamics _ phys .rep . _ * 424 , * 175 - 308 ( 2006 ) .tessone , c. j. , mirasso , c. r. , toral .r. and gunton , j. d. diversity - induced resonance .lett . _ * 97 , * 194101 1 - 4 ( 2006 ) .komin , n. , lacasa , l. and toral , r. critical behavior of a ginzburg landau model with additive quenched noise ._ j. stat ._ * p12008 , * 1 - 19 ( 2010 ) .peyton young , h. innovation diffusion in heterogeneous populations : contagion , social influence , and social learning .rev . _ * 99 , * 1899 - 1924 ( 2009 ) .novozhilov , a. s. epidemiological models with parametric heterogeneity : deterministic theory for closed populations _ math model nat pheno _* 7 , * 147 - 167 ( 2012 ) .masuda , n. , gibert , n. and redner , s. heterogeneous voter models .e _ * 82 , * ( 010103r ) 1 - 4 ( 2010 ) .mobilia , m. and georgiev , i. t. , voting and catalytic processes with inhomogeneities .e _ * 71 , * 046102 1 - 17 ( 2005 ) .xie , j. and sreenivasan , s. social consensus through the influence of committed minorities .e _ * 84 , * 011130 1 - 8 ( 2011 ) .fisher , d. s. critical - behavior of random transverse - field ising sping chains _ phys .b _ * 51 , * 6411 - 6461 ( 1995 ) .young , a. p. ( ed . ) _ spin glasses and random fields _ world scientific , singapore ( 1998 ) .mzard , m. , parisi , g. and virasoro , m. a. _ spin glass theory and beyond _ world scientific , singapore ( 1987 ) .bouchaud , j-.p . and georges , a. 
, anomalous diffusion in disordered media : statistical mechanisms , models and physical applications ._ phys . rep ._ * 195 , * 127 - 293 ( 1990 ) .ben - avraham , d. and havlin , s. _ diffusion and reactions in fractals and disordered systems _ cambridge university press , cambridge ( 2000 ) .dushoff , j. host heterogeneity and disease endemicity : a moment - based approach .theor popul biol * 56 , * 325335 ( 1999 ) .van kampen , n. g. _ stochastic processes in physics and chemistry _ , north - holland , amsterdam ( 2004 ) .kirman , a. ants , rationality , and recruitment ._ q. j. econ . _ * 108 , * 137 - 156 ( 1993 ) .liggett , t. m. _ interacting particle systems _ , springer - verlag , new york ( 1985 ) .alfarano , s. and milakovi , m. network structure and n - dependence in agent - based herding models ._ j. econ .control _ * 33 , * 78 - 92 ( 2009 ) .anderson , r. m. _ population dynamics of infectious diseases : theory and applications _ , chapman and hall , london - new york ( 1982 ) .gillespie , d. t. _ exact stochastic simulation of coupled chemical reactions _ * 81,*(25 ) 2340 - 2361 ( 1977 ) .we consider here the case in which each particle can be in one of ( instead of ) possible states .we will show that the results obtained in the main text for systems also hold in this more general case .we label the states with the subscript , so in this case the variable describing the state of particle can take possible values , ( we start the labeling from to be consistent with the previous case , that would correspond to ) .let the probability that particle , with heterogeneity parameter , be on state .it satisfies the evolution equation : with a general transition matrix ( satisfying ) , that may depend in principle on time and on the time that the particle has been on its current state . to isolate the role of parameter heterogeneity , we assume that the initial condition is the same for all the particles ( or that the initial condition is determined by the value of ) such that the solution is the same for all particles sharing the same value of the parameter. the macroscopic state of the system will be described by the set of variables , that is , the number of particles in each state .the averages and variances of this variables are given by : &=&\sum_{i=1}^n\left [ p(\lambda_i,\alpha , t)-p(\lambda_i,\alpha , t)^2\right].\end{aligned}\ ] ] this variance is again smaller that tat of a system of identical particles with same average , the difference given by : \text{id}-\sigma^2[n_\alpha(t)]=n\overline{p(\alpha , t)^2}-\overline{p(\alpha , t)}^2,\ ] ] a result exactly analogous to the one obtained in the previous case .the heterogeneity among the particles on the probability of occupation of level can be derived from the first moments of the occupation number of the level : }{n}.\ ] ] note that , when focusing on the number of particles on state , the system effectively reduces to a one , with states and no- , so the results of the previous section can be translated directly . 
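a minimal numerical check of the variance formulas above , for independent two - state particles where particle i occupies the state of interest with its own probability p_i : the heterogeneous variance sum_i p_i ( 1 - p_i ) is compared with the variance of an equivalent system of identical particles sharing the same mean probability , and with a direct monte carlo estimate ; the difference should equal n times the population variance of the p_i . the beta distribution used to generate the p_i is an arbitrary illustrative choice .

```python
# numerical check: independent heterogeneous particles fluctuate less, as a whole,
# than identical particles with the same average occupation probability.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 2000
p = rng.beta(2.0, 5.0, size=n_particles)            # heterogeneous probabilities p_i (assumed law)
p_bar = p.mean()

var_het = np.sum(p * (1.0 - p))                     # sum_i p_i (1 - p_i)
var_id = n_particles * p_bar * (1.0 - p_bar)        # identical particles, same mean

# direct monte carlo estimate for the heterogeneous system
occupations = (rng.random((5000, n_particles)) < p).sum(axis=1)

print("analytic, heterogeneous :", var_het)
print("analytic, identical     :", var_id)
print("monte carlo             :", occupations.var())
print("difference, n * var(p)  :", n_particles * p.var())
```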
a different and some times relevant question can be considered when the labeling of the states is such that the order is well defined ( for example each state corresponds to an energy level or a distance from a reference ) .then the average state is meaningful and we can study its statistical properties .below we show that the variance of this mean level is again always smaller if heterogeneity is present .the average state of the system is given by .it is a random variable whose average and variance are given by : &=&\sum_{\alpha,\beta=0}^{m-1}\frac{\alpha\beta}{n^2}(\langle n_\alphan_\beta\rangle-\langle n_\alpha\rangle\langle n_\beta\rangle)=\frac{1}{n^2}\sum_{i=1}^n\left[\sum_{\alpha=0}^{m-1}\alpha^2p(\alpha,\lambda_i)-\sum_{\alpha,\beta=0}^{m-1}\alpha p(\alpha,\lambda_i)\beta p(\beta,\lambda_i)\right]\label{meanlevelhet}.\end{aligned}\ ] ] we have used and ] , i.e. the variance of the mean level is always smaller in a system of heterogeneous particles , the difference with respect to the case of identical ones being : \text{id}-\sigma^2[l]=\frac{1}{n}\left(\overline{g^2}-\overline{g}^2\right)=\frac{1}{n}\sum_{\alpha,\beta=0}^{m-1}\alpha\beta\left[\sum_{i=1}^n\frac{p(\alpha,\lambda_i)p(\beta,\lambda_i)}{n}-\sum_{i , j=1}^n\frac{p(\alpha,\lambda_i)p(\beta,\lambda_j)}{n^2}\right]\geq0.\ ] ] the correction to the variance in this case scales as , but again is of the same order as the variance itself , indicating a non - negligible correction . in this case to derive the heterogeneity of over the population one needs to know the average occupation level of each state and use : .\ ] ] this can be written in terms of the variance of in an equivalent system of identical particles , \text{id} ] .[ [ intuitive - origin - of - the - decrease - of - fluctuations - for - independent - units ] ] intuitive origin of the decrease of fluctuations for independent units ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we have shown that a system of independent heterogeneous particles has smaller fluctuations for the collective variable than an equivalent system of identical ones . the origin of this result is the following ( for simplicity we refer to the case of -state system ) : the average of the global variable is determined by the concentration of the states of the particles around state ( ) . the fluctuations ( measured by the variance ) of the global variableare determined by the stochastic fluctuations of the individual particles alone ( =\sum_i\sigma^2[s_i] ] ) .however , the macroscopic variable is indeed extensive and we can expect that it will follow van kampen s ansatz : .this implies that the -th central moment of will scale as , i.e : now , assuming that for , with independent of i.e. 
the -particle correlations are all or the same order in , so that scales as ( note that there are of the order of terms in the sum ) , we obtain our main ansatz , for .we have only considered terms with in the sum ( [ vkansatzcorr ] ) ; terms with repeated sub - indexes can be expressed as lower order ones .for example , if the index is present times , and the others are all different , we find : +\sigma_{j_1,\dots , j_{m - k+1}}[(1-\langle s_{j_1}\rangle)^k-(-\langle s_{j_1}\rangle)^k]\nonumber\end{aligned}\ ] ] as can be see expanding and keeping in mind that .the number of such terms in the sum ( [ vkansatzcorr ] ) is , so they give smaller contribution that terms with all sub - indexes different .proceeding order by order from , we see that our main ansatz ( [ ansatz ] ) follows from ( [ vkansatzcorr ] ) .we point out that in systems of heterogeneous particles we do not have a closed description for the global , extensive , variable so van kampen s expansion can not be used .instead we derive the implications of van kampen s ansatz over the correlations of the microscopic variables .( [ ansatz ] ) is a simple and convenient expression that in general allows to close the equation for the moments ( [ momentsgeneral],[correlationsgeneral ] ) .often , however it is not necessary , and a weaker condition of the form ( [ vkansatzcorr ] ) , that directly follows from van kampen s ansatz without further assumptions , is sufficient .van kampen s ansatz is generally valid when the macroscopic equations have a single attracting fixed point , when the system displays small fluctuations around the macroscopic state .the general method explained here is expected to be valid under similar conditions .an interesting topic for future research will be whether a system that has a single attracting fixed point in the absence of diversity always maintains this globally stable state when diversity is present , and whether a system that does not posses this globally stable fixed point can acquire it when diversity is added . in the kirman model with distributed influence ,the averages and correlations obey : \label{kirmancorrelations}\end{aligned}\ ] ] with .note that , due to the particular form of the rates , these equations are indeed closed .the first equation leads to a steady state value , which implies ( a property that comes from the symmetry ) .( [ kirmancorrelations ] ) is a linear system of equations for the correlations .the steady state correlations can always be obtained by inverting the matrix that gives the couplings . obtaining a closed expression for ] .we have seen how the application of the ansatz ( [ ansatz ] ) allows one to obtain closed expression for the global average and variance .interestingly , in this particular example , it is possible to include all higher order terms to obtain an exact expression for ( which gives the exact expression for ] , one has to average over the distribution of , which depends on the distribution of the ( we are assuming i.i.d .random variables ) .this averages were obtained numerically , by evaluating expressions ( [ kirman1st ] , [ kirmanexact ] ) over the same realizations of the s that were used in the numerical simulations .one can use the approximation , that works better the larger the and the lower the variance , and that , due to the law of large numbers , is valid in the limit . 
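as a small side note on the numerical averages mentioned above , the parameters in the simulations are drawn from a gamma distribution specified by its mean and variance , which fixes the shape and scale as shape = mean^2 / variance and scale = variance / mean ; the loop below simply illustrates the law - of - large - numbers remark , i.e. that the sample mean of the parameters approaches the population mean as the number of particles grows . the particular mean and variance values are assumptions for illustration only .

```python
# helper sketch: gamma-distributed parameters specified by mean and variance, plus a
# quick check that the sample mean converges to the population mean as n grows.
import numpy as np

def sample_lambdas(mean, var, n, rng):
    shape, scale = mean**2 / var, var / mean    # gamma: mean = shape*scale, var = shape*scale^2
    return rng.gamma(shape, scale, size=n)

rng = np.random.default_rng(1)
mean, var = 1.0, 0.5                            # assumed values
for n in (10, 100, 1000, 10000):
    lam = sample_lambdas(mean, var, n, rng)
    print(n, "sample mean =", round(lam.mean(), 4), " sample var =", round(lam.var(ddof=1), 4))
```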
in fig.2 of the main textwe compare the average of the analytical expression ( [ kirmanexact ] ) with results coming from numerical simulations .we find perfect agreement and see that at first order the dependence of ] ( remember ) , and after some straightforward algebra , we obtain : (t)=(\sigma_{st}^2-c/\overline{\lambda})e^{-(2\epsilon+\overline{\lambda})t}+c/\overline{\lambda}e^{-2\epsilon t}.\label{kst}\ ] ] from ( [ sigmakir ] ) we get , showing that ( [ kst ] ) is equal to the expression displayed in the main text .we start with equation ( [ sigmakir ] ) : }{2(2\epsilon+\overline{\lambda})}.\ ] ] using the rescaled variables , and defining , we obtain : defining now , we arrive to : ,\\ t_{m+1}-g_1&=&t_m+\frac{n}{2}\sum_{n=1}^m\left[\left(\frac{2}{n\overline{\tilde{\lambda}}-1}\right)^n\left(\frac{2\overline{\tilde{\lambda}^{n+1}}}{n\overline{\tilde{\lambda}}-1}+g_1\overline{\tilde{\lambda}^{n}}\right)\right].\end{aligned}\ ] ] if , we see that : going back to the original variables , we finally obtain , with the notation of the main text : which can be rewritten in the form ( [ dexact ] ) , completing the proof .the condition of convergence is : a necessary and sufficient condition for this is .when the parameters are i.i.d .r. v. the probability of this typically approaches as grows .
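to close the supplementary material , the following is a compact sketch of the gillespie - type scheme described in the methods section for the kirman model with distributed susceptibility : exponential waiting times with mean equal to the inverse total rate , event selection with probability proportional to the rates , spontaneous flips assigned to a uniformly chosen agent and induced flips assigned with probability proportional to the susceptibility of the flipping agent . the 1 / n normalisation of the induced rates and the gamma law of the susceptibilities are assumptions made here for concreteness , not a transcription of the exact rates of the model .

```python
# gillespie-type simulation sketch, kirman model with heterogeneous susceptibility.
import numpy as np

def kirman_gillespie(lam, eps, t_max, rng):
    n_agents = lam.size
    state = rng.integers(0, 2, size=n_agents)              # s_i in {0, 1}
    t, times, n_traj = 0.0, [0.0], [int(state.sum())]
    while t < t_max:
        in1 = state == 1
        n1 = int(in1.sum())
        lam1 = lam[in1].sum()                               # total susceptibility, state 1
        lam0 = lam.sum() - lam1                             # total susceptibility, state 0
        rates = np.array([eps * (n_agents - n1),            # spontaneous 0 -> 1
                          eps * n1,                         # spontaneous 1 -> 0
                          lam0 * n1 / n_agents,             # induced 0 -> 1 (assumed 1/n scaling)
                          lam1 * (n_agents - n1) / n_agents])  # induced 1 -> 0
        r_tot = rates.sum()
        if r_tot == 0.0:
            break
        t += rng.exponential(1.0 / r_tot)                   # exponential waiting time
        event = rng.choice(4, p=rates / r_tot)              # which of the four reactions fires
        if event in (0, 2):                                 # some agent flips 0 -> 1
            idx = np.flatnonzero(~in1)
            w = None if event == 0 else lam[idx] / lam[idx].sum()
            state[rng.choice(idx, p=w)] = 1
        else:                                               # some agent flips 1 -> 0
            idx = np.flatnonzero(in1)
            w = None if event == 1 else lam[idx] / lam[idx].sum()
            state[rng.choice(idx, p=w)] = 0
        times.append(t)
        n_traj.append(int(state.sum()))
    return np.array(times), np.array(n_traj)

rng = np.random.default_rng(2)
lam = rng.gamma(2.0, 0.5, size=100)                         # heterogeneous susceptibilities (assumed)
times, n_traj = kirman_gillespie(lam, eps=0.01, t_max=200.0, rng=rng)
print("events simulated:", len(times) - 1, " final n:", n_traj[-1])
```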
|
we study stochastic particle systems made up of heterogeneous units . we introduce a general framework suitable to analytically study this kind of systems and apply it to two particular models of interest in economy and epidemiology . we show that particle heterogeneity can enhance or decrease the size of the collective fluctuations depending on the system , and that it is possible to infer the degree and the form of the heterogeneity distribution in the system by measuring only global variables and their fluctuations . our work shows that , in some cases , heterogeneity among the units composing a systems can be fully taken into account without losing analytical tractability most real systems are made up of heterogeneous units . whether considering a population of cells , a group of people or an array of lasers ( to name just a few examples ) , one never finds two units which behave exactly in the same way . despite this general fact , quantitative modeling most often assumes identical units , since this condition seems necessary for having analytically tractable models . moreover , in the general framework of complexity science , systems very often can be modeled only at a stochastic level , since a complete knowledge of all the variables , the precise dynamics of the units and the interaction with the environment is not available . one way to include system heterogeneity is to consider that the interactions between the units are not homogeneous but mediated by some complex network , an approach that has attracted enormous attention in the last years . an issue that has been less studied , beyond the role of particle heterogeneity in deterministic systems , is the heterogeneity in the behavior of the particles themselves in stochastic models . some exceptions include the recent reference , where the authors analyze the effect of heterogeneous transition rates on consensus times in the voter model , and works considering the effect of a few `` committed '' individuals in this and related models . in the context of statistical physics , the combined effects of stochasticity and heterogeneity have been considered , for example , in random - field ising models and spin glasses or in diffusion in disordered media . we aim here at developing a general framework for the analytical study of stochastic systems made up of heterogeneous units , applicable beyond equilibrium models or hamiltonian systems and suitable for a general class of complex systems of recent interest and at identifying some generic effects of particle heterogeneity on the macroscopic fluctuations . in this work we will show that the combined effect of stochasticity and heterogeneity can give rise to unexpected , non - trivial , results . while , based on nave arguments , one should conclude that global fluctuations increase in heterogeneous systems , we will show that in some systems of stochastic interacting particles fluctuations actually decrease with the degree of heterogeneity . moreover , we will see that it is possible to infer the degree of particle heterogeneity ( or `` diversity '' ) by measuring only global variables . this is an issue of great interest when one has access only to information at the macroscopic , population level , since it allows one to determine if heterogeneity is a relevant ingredient that needs to be included in the modeling . in this way , heterogeneity can be included when its presence is implied by the data and it does not enter as an extra free parameter . 
we will study first the simple case of independent particles ; then we will consider the general case of interacting particles and develop an approximate method of general validity to analytically study these systems ; next , by way of example , this method will be applied to two particular models of interest in economy and epidemiology . our starting point is a stochastic description of a system composed of non - identical units , which we call generically `` particles '' or `` agents '' . each particle is characterized by a constant parameter ( ) ; the value of this parameter differs among the particles and it is the source of heterogeneity considered here . although there are more general ways of including heterogeneity , we will stick to this type of parametric heterogeneity because it is simple yet rather general . for simplicity , we assume that each particle can be in one of two possible states and define as the variable describing the state of particle at time ( the two - state assumption will be relaxed later ) . the collective state of the system is given by the total number of particles in state . sometimes , one does not have access to the individual dynamics and can only measure experimentally the value of . we are interested in the statistical properties of this global variable and in how they depend on the degree of heterogeneity in the system . we will often refer to as the _ macroscopic _ variable and to the s as the _ microscopic _ ones .
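in code , the ingredients just introduced amount to nothing more than a vector of quenched parameters , a binary state vector and the global counter ; the short sketch below only fixes this notation ( the gamma law used for the parameters is an arbitrary illustrative choice ) .

```python
# bare-bones representation of the framework: n particles, a fixed heterogeneity
# parameter per particle, a binary microscopic state s_i, and the macroscopic variable n.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 1000
lam = rng.gamma(2.0, 0.5, size=n_particles)     # quenched parameters, one per particle
s = rng.integers(0, 2, size=n_particles)        # microscopic states s_i in {0, 1}
n = int(s.sum())                                # macroscopic variable n = sum_i s_i
print("n =", n, " mean parameter =", round(lam.mean(), 3))
```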
|
meshless methods have received much attention in recent decades as new tools to overcome the difficulties of mesh generation and mesh refinement in classical mesh - based methods such as the finite element method ( fem ) and the finite volume method ( fvm ) .the classification of numerical methods for solving pdes should always start from the classification of pde problems themselves into _ strong _ , _ weak _ , or _ local weak _ forms .the first is the standard pointwise formulation of differential equations and boundary conditions , the second is the usual weak form dominating all fem techniques , while the third form splits the integrals of the usual global weak form into local integrals over many small subdomains , performing the integration by parts on each local integral .local weak forms are the basis of all variations of the _ meshless local petrov galerkin _ technique ( mlpg ) of s.n .atluri and collaborators .this classification is dependent on the pde problem itself , and independent of numerical methods and the trial spaces used .note that these three formulations of the `` same '' pde and boundary conditions lead to three essentially different mathematical problems that can not be identified and need a different mathematical analysis with respect to existence , uniqueness , and stability of solutions .meshless _ trial spaces _ mainly come via _ moving least squares _ or _ kernels _ like _ radial basis functions_. they can consist of _ global _ or _ local _ functions , but they should always parametrize their trial functions `` _ entirely in terms of nodes _ '' and require no triangulation or meshing .a third classification of pde methods addresses where the discretization lives . _domain type _techniques work in the full global domain , while _ boundary type _methods work with exact solutions of the pde and just have to care for boundary conditions .this is independent of the other two classifications .consequently , the literature should confine the term `` meshless '' to be a feature of _ trial spaces _ , not of pde problems and their various formulations .but many authors reserve the term _ truly meshless _ for meshless methods that either do not require any discretization with a background mesh for calculating integrals or do not require integration at all .these techniques have a great advantage in computational efficiency , because numerical integration is the most time consuming part in all numerical methods based on local or global weak forms .this paper focuses on a truly meshless method in this sense . most of the methods for solving pdes in global weak form , such as the element - free galerkin ( efg ) method , are not _ truly meshless _ because a triangulation is still required for numerical integration .the _ meshless local petrov - galerkin _( mlpg ) method solves pdes in local weak form and uses no global background mesh to evaluate integrals because everything breaks down to some regular , well - shaped and independent sub - domains .thus the mlpg is known as a truly meshless method .we now focus on meshless methods using moving least squares as trial functions .if they solve pdes in global or local weak form , they still suffer from the cost of numerical integration . in these methods ,numerical integrations are traditionally done over mls shape functions and their derivatives .such shape functions are complicated and have no closed form . 
to get accurate results , numerical quadratures with many integration pointsare required .thus the mls subroutines must be called very often , leading to high computational costs .in contrast to this , the stiffness matrix in finite element methods ( fems ) is constructed by integrating over polynomial basis functions which are much cheaper to evaluate .this relaxes the cost of numerical integrations . for an account of the importance of numerical integration within meshless methods ,we refer the reader to . to overcome this shortage within the mlpg based on mls , mirzaei and schaback proposed a new technique , _ direct meshless local petrov - galerkin ( dmlpg ) method _ , which avoids integration over mls shape functions in mlpg and replaces it by the much cheaper integration over polynomials .it ignores shape functions completely .altogether , the method is simpler , faster and often more accurate than the original mlpg method .dmlpg uses a generalized mls ( gmls ) method of which directly approximates boundary conditions and local weak forms as some _ functionals _ , shifting the numerical integration into the mls itself , rather than into an outside loop over calls to mls routines .thus the concept of gmls must be outlined first in section [ sect - gmls ] before we can go over to the dmlpg in section [ sec - dmlpg ] and numerical results for heat conduction problems in section [ sec - num ] .the analysis of heat conduction problems is important in engineering and applied mathematics .analytical solutions of heat equations are restricted to some special cases , simple geometries and specific boundary conditions .hence , numerical methods are unavoidable .finite element methods , finite volume methods , and finite difference methods have been well applied to transient heat analysis over the past few decades .mlpg methods were also developed for heat transfer problems in many cases .for instance , j. sladek et.al . proposed mlpg4 for transient heat conduction analysis in functionally graded materials ( fgms ) using laplace transform techniques .v. sladek et.al . developed a local boundary integral method for transient heat conduction in anisotropic and functionally graded media .both authors and their collaborators employed mlpg5 to analyze the heat conduction in fgms .the aim of this paper is the development of dmlpg methods for heat conduction problems .this is the first time where dmlpg is applied to a time dependent problem . moreover , compared to , we will discuss all dmlpg methods , go into more details and provide explicit formulae for the numerical implementation .dmlpg1/2/4/5 will be proposed , and the reason of ignoring dmlpg3/6 will be discussed .the new methods will be compared with the original mlpg methods in a test problem , and then a problem in fgms will be treated by dmlpg1 . in all application cases ,the dmlpg method turned out to be superior to the standard mlpg technique , and it provides excellent accuracy at low cost .whatever the given pde problem is and how it is discretized , we have to find a function such that linear equations defined by linear _ functionals _ and prescribed real values are to be satisfied . 
note that weak formulations will involve functionals that integrate or a derivative against some test function .the functionals can discretize either the differential equation or some boundary condition .now _ meshless methods _ construct solutions from a _ trial space _ whose functions are parametrized `` _ entirely in terms of nodes _ '' .we let these nodes form a set .theoretically , meshless trial functions can then be written as linear combinations of _ shape functions _ with or without the lagrange conditions as in terms of values at nodes , and this leads to solving the system ( [ eqlkubk ] ) in the form approximately for the nodal values .setting up the coefficient matrix requires the evaluation of all functionals on all shape functions , and this is a tedious procedure if the shape functions are not cheap to evaluate , and it is even more tedious if the functionals consist of integrations of derivatives against test functions .but it is by no means mandatory to use shape functions at this stage at all . if each functional can be well approximated by a formula in terms of nodal values for smooth functions , the system to be solved is without any use of shape functions .there is no trial space , but everything is still written in terms of values at nodes .once the approximate values at nodes are obtained , any multivariate interpolation or approximation method can be used to generate approximate values at other locations .this is a postprocessing step , independent of pde solving .this calls for efficient ways to handle the approximations ( [ eqlambdaapprox ] ) to functionals in terms of nodal values .we employ a generalized version of moving least squares ( mls ) , adapted from , and without using shape functions .the techniques of and allow to calculate coefficients for ( [ eqlambdaapprox ] ) very effectively as follows .we fix and consider just .furthermore , the set will be formally replaced by a much smaller subset that consists only of the nodes that are locally necessary to calculate a good approximation of , but we shall keep and in the notation .this reduction of the node set for the approximation of will ensure sparsity of the final coefficient matrix in ( [ eqalphasys ] ) .now we have to calculate a coefficient vector for ( [ eqlambdaapprox ] ) in case of .we choose a space of polynomials which is large enough to let zero be the only polynomial in that vanishes on .consequently , the dimension of satisfies , and the matrix of values of a basis of has rank . then for any vector of positive weights , the generalized mls solution to ( [ eqlambdaapprox ] ) can be written as where is the diagonal matrix with diagonal and is the vector with values .thus it suffices to evaluate on low order polynomials , and since the coefficient matrix in ( [ awl ] ) is independent of , one can use the same matrix for different as long as does not change locally .this will significantly speed up numerical calculations , if the functional is complicated , e.g. a numerical integration against a test function .note that the mls is just behind the scene , no shape functions occur .but the weights will be defined locally in the same way as in the usual mls , e.g. 
we choose a continuous function with * * and define for as a weight function , if we work locally near a point .in the cartesian coordinate system , the transient temperature field in a heterogeneous isotropic medium is governed by the diffusion equation where and denote the space and time variables , respectively , and is the final time .the initial and boundary conditions are in ( [ govern1])-([neumanncond ] ) , is the temperature field , is the thermal conductivity dependent on the spatial variable , is the mass density and is the specific heat , and stands for the internal heat source generated per unit volume .moreover , is the unit outward normal to the boundary , and are specified values on the dirichlet boundary and neumann boundary where .meshless methods write everything entirely in terms of scattered nodes forming a set located in the spatial domain and its boundary . in the standard mlpg , around each a small subdomain is chosen such that integrations over are comparatively cheap . for instance , is conveniently taken to be the intersection of with a ball of radius or a cube ( or a square in 2d ) centered at with side - length . on these subdomains , the pde including boundary conditions is stated in a localized weak form for an appropriate _ test function _ . applying integration by parts ,this weak equation can be partially symmetrized to become the _ first _ local weak form the _ second _ local weak form , after rearrangement of and integration by parts twice , can be obtained as if the boundary of the local domain hits the boundary of , the mlpg inserts boundary data at the appropriate places in order to care for boundary conditions .since these local weak equations are all affine linear in even after insertion of boundary data , the equations of mlpg are all of the form ( [ eqlkubk ] ) after some rearrangement , employing certain linear functionals . in all cases , the mlpg evaluates these functionals on shape functions , while our dmlpg method will use the gmls approximation of section [ sect - gmls ] without any shape function . however , different choices of test functions lead to the six different well known types of mlpg .the variants mlpg1/5/6 are based on the weak formulation .if is chosen such that the first integral in the right hand side of ( [ eq - lwf - v ] ) vanishes , we have mlpg1 . in this case should vanish on . if the heaviside step function on local domains is used as test function , the second integral disappears and we have a pure local boundary integral form in the right hand side .this is mlpg5 . in mlpg6 ,the trial and test functions come from the same space .mlpg2/3 are based on the local unsymmetric weak formulation .mlpg2 employs dirac s delta function as the test function in each , which leads to a pure collocation method .mlpg3 employs the error function as the test function in each . 
in this method, the test functions can be the same as for the discrete least squares method .the test functions and the trial functions come from the same space in mlpg3 .finally , mlpg4 ( or lbie ) is based on the weak form , and a modified fundamental solution of the corresponding elliptic spatial equation is employed as a test function in each subdomain .we describe these types in more detail later , along with the way we modify them when going from mlpg to dmlpg .independent of which variation of mlpg we go for , the dmlpg has its special ways to handle boundary conditions , and we describe these first .neither lagrange multipliers nor penalty parameters are introduced into the local weak forms , because the dirichlet boundary conditions are imposed directly . for nodes , the values are known from the dirichlet boundary conditions . to connect them properly to nodal values in neighboring points inside the domain or on the neumann boundary , we turn the gmls philosophy upside down and ask for coefficients that allow to reconstruct nodal values at from nodal values at the .this amounts to setting in section [ sect - gmls ] , and we get localized equations for dirichlet boundary points as .\ ] ] note that the coefficients are time independent . in matrix form , ( [ dirichlet - impose ] ) can be written as where is the time dependent vector of nodal values at .these equations are added into the full matrix setup at the appropriate places , and they are in truly meshless form , since they involve only values at nodes and are without numerical integration .note that ( [ eq - lwf - v ] ) has no integrals over the dirichlet boundary , and thus we can impose dirichlet conditions always in the above strong form . for there are two possibilities .we can impose the dirichlet boundary conditions either in the local weak form or in the collocation form .of course the latter is the cheaper one .we now turn to neumann boundary conditions .they can be imposed in the same way as dirichlet boundary conditions by assuming in the gmls approximation .\ ] ] note that the coefficients again are time independent , and we get a linear system like ( [ dirichlet - sys ] ) , but with a vector of nodal values of normal derivatives in the right hand side .this is collocation as in subsection [ subsec - dmlpg2 ] .but it is often more accurate to impose neumann conditions directly into the local weak forms and .we will describe this in more detail in the following subsections .we now turn the different variations of the mlpg method into variations of the dlmpg .these methods are based on the local weak form .this form recasts to after inserting the neumann boundary data from , when the domain of ( [ eq - lwf - v ] ) hits the neumann boundary .all integrals in the top part of can be efficiently approximated by gmls approximation of section [ sect - gmls ] as purely spatial formulas while the two others can always be summed up , the first formula , if applied to time varying functions , has to be modified into and expresses the main pde term not in terms of values at nodes , but rather in terms of time derivatives of values at nodes .again , everything is expressed in terms of values at nodes , and the coefficients are time independent .furthermore , section [ sect - gmls ] shows that the part of the integration runs over low order polynomials , not over any shape functions .the third functional can be omitted if the test function vanishes on .this is dmlpg1 .an example of such a test function is where is the weight 
function in the mls approximation with the radius of the support of the weight function being replaced by the radius of the local domain . in dmlpg5 ,the local test function is the constant .thus the functionals of ( [ eqlamjk ] ) are not needed , and the integrals for take a simple form , if and are simple .dmlpg5 is slightly cheaper than dmlpg1 , because the domain integrals of are replaced by the boundary integrals of .depending on which parts of the functionals are present or not , we finally get a time dependent system of the form where is the time dependent vector of nodal values , collects the time dependent right hand sides with components and , .the -th row of is where ^t , \\ % \label{lambda1-dmlpg1 } \\ \lambda_{2,k}(\calp)= & -\left[\int_{\omega_s^{k}}\kappa \nabla p_1\cdot \nabla v \ , d\omega ,\int_{\omega_s^{k}}\kappa \nabla p_2\cdot \nabla v \ ,d\omega , \ldots , \int_{\omega_s^{k}}\kappa \nabla p_q\cdot \nabla v \, d\omega\right]^t,\\ % \label{lambda2-dmlpg1}\\ \lambda_{3,k}(\calp)= & \left [ \int_{\partial\omega_s^k\setminus \gamma_n } \kappa \frac{\partial p_1}{\partial n}v\ , d \gamma , \int_{\partial\omega_s^k\setminus \gamma_n } \kappa \frac{\partial p_2}{\partial n}v\ , d \gamma , \ldots , \int_{\partial\omega_s^k\setminus \gamma_n } \kappa \frac{\partial p_q}{\partial n}v \ , d \gamma\right]^t.\end{aligned}\ ] ] as we can immediately see , _ numerical integrations are done over low - degree polynomials only , and no shape function is needed at all .this reduces the cost of numerical integration in mlpg methods significantly ._ in this method , the test function on the local domain in is replaced by the test functional , i.e. we have strong collocation of the pde and all boundary conditions . depending on where lies , one can have the functionals connecting to dirichlet , neumann , or pde data .the first form is used on the dirichlet boundary , and leads to ( [ dirichlet - impose ] ) and ( [ dirichlet - sys ] ) .the second applies to points on the neumann boundary and is handled by ( [ neumann - impose ] ) , while the third can occur anywhere in independent of the other possibilities . in all cases ,the gmls method of section [ sect - gmls ] leads to approximations of the form entirely in terms of nodes , where values on nodes on the dirichlet boundary can be replaced by given data .this dmlpg2 technique is a pure collocation method and requires no numerical integration at all .hence it is truly meshless and the cheapest among all versions of dmlpg and mlpg .but it needs higher order derivatives , and thus the order of convergence is reduced by the order of the derivative taken .sometimes dmlpg2 is called _ direct mls collocation ( dmlsc ) method _ .it is worthy to note that the recovery of a functional such as or in ( [ eqallmu ] ) using gmls approximation gives _gmls derivative approximation_. these kinds of derivatives have been comprehensively investigated in and a rigorous error bound was derived for them .sometimes they are called _ diffuse _ or _ uncertain _ derivatives , because they are not derivatives of shape functions , but proves there is nothing diffuse or uncertain about them and they are direct and usually very good numerical approximation of corresponding function derivatives .this method is based on the local weak form and uses the _fundamental solution _ of the corresponding elliptic spatial equation as test function . 
herewe describe it for a two dimensional problem .to reduce the unknown quantities in local weak forms , the concept of _ companion solutions _ was introduced in .the companion solution of a 2d laplace operator is which corresponds to the poisson equation and thus is a _solution vanishing for .dirichlet boundary conditions for dmlpg4 are imposed as in .the resulting local integral equation corresponding to a node located inside the domain or on the neumann part of the boundary is where is a coefficient that depends on where the source point lies .it is on the smooth boundary , and at a corner where the interior angle at the point is .the symbol represents the cauchy principal value ( cpv ) . for interior points we have and cpv integrals are replaced by regular integrals . in this case^t,\ ] ] and , where ^t , \\ \lambda^{(2)}_{2,k}(\calp)&= -\left[\dashint_{\gamma_s^{k } } \frac{\partial v}{\partial n}p_1 \ , d \gamma , \dashint_{\gamma_s^{k } } \frac{\partial v}{\partial n}p_2 \ , d \gamma , \ldots , \dashint_{\gamma_s^{k } } \frac{\partial v}{\partial n}p_q \ , d \gamma \right]^t , \\ \lambda^{(3)}_{2,k}(\calp)&= \left[\int_{\omega_s^{k}}\frac{1}{\kappa}\nabla\kappa\cdot \nabla p_1\ , v \ , d\omega , \int_{\omega_s^{k}}\frac{1}{\kappa}\nabla\kappa\cdot \nabla p_2\ , v \ ,d\omega , \ldots , \int_{\omega_s^{k}}\frac{1}{\kappa}\nabla\kappa\cdot \nabla p_q\ , v \ , d\omega \right]^t.\end{aligned}\ ] ] finally , we have the time - dependent linear system of equations where the -th row of is the components of the right - hand side are this technique leads to weakly singular integrals which must be evaluated by special numerical quadratures . in both mlpg3 and mlpg6 ,the trial and test functions come from the same space .therefore they are galerkin type techniques and should better be called mlg3 and mlg6 .but they annihilate the advantages of dmlpg methods with respect to numerical integration , because the integrands include shape functions .thus we ignore dmlpg3/6 in favour of keeping all benefits of dmlpg methods .note that mlpg3/6 are also rarely used in comparison to the other mlpg methods .to deal with the time variable in meshless methods , some standard methods were proposed in the literature .the laplace transform method , conventional finite difference methods such as forward , central and backward difference schemes are such techniques . a method which employs the mls approximation in both time and space domains ,is another different scheme . in our case the linear system ( [ eqalphasys ] ) turns into the time dependent version ( [ eqfullsys ] ) coupled with that could , for instance , be solved like any other linear first order implicit differential algebraic equations ( dae ) system . invoking an ode solver onit would be an instance of the method of lines . if a conventional time difference scheme such as a crank - nicolson method is employed , if the time step remains unchanged , and if , then a single lu decomposition of the final stiffness matrix and corresponding backward and forward substitutions can be calculated once and for all , and then the final solution vector at the nodes is obtained by a simple matrix vector iteration .the classical mls approximation can be used as a postprocessing step to obtain the solution at any other point .implementation is done using the basis polynomials where is an average mesh - size , and is a fixed evaluation point such as a test point or a gaussian point for integration in weak form based techniques . here is a multi - index and . 
if then .this choice of basis function , instead of , leads to a well - conditioned matrix in the ( g)mls approximation .the effect of this variation on the conditioning has been analytically investigated in .a test problem is first considered to compare the results of mlpg and dmlpg methods .then a heat conduction problem in functionally graded materials ( fgm ) for a finite strip with an exponential spatial variation of material parameters is investigated . in numerical results ,we use the quadratic shifted scaled basis polynomial functions ( ) in ( g)mls approximation for both mlpg and dmlpg methods . moreover , the gaussian weight function where and is used .the parameter should be large enough to ensure the regularity of the moment matrix in ( g)mls approximation .it depends on the degree of polynomials in use .here we put .the constant controls the shape of the weight function and has influence on the stability and accuracy of ( g)mls approximation .there is no optimal value for this parameter at hand .experiments show that lead to more accurate results .all routines were written using matlab and run on a pentium 4 pc with 2.50 gb of memory and a twin core 2.00 ghz cpu .let ^ 2\subset\r^2 ] ) . besides , in fig . [ fig4 ] the numerical and analytical solutions at points very close to the application of thermal shocks are given and compared for sample time sec .insert figures 3 and 4 the discussion above concerns heat conduction in homogeneous materials in a case where analytical solutions can be used for verification .consider now the cases , , , and m , respectively .the variation of temperature with time for the three first -values at position are presented in fig .the results are in good agreement with figure 11 presented in , figure 6 presented in and figure 4 presented in .insert figure 5 in addition , in fig .[ fig6 ] numerical results are depicted for m . for high values of ,the steady state solution is achieved rapidly .insert figure 6 it is found from figs .[ fig5 ] and [ fig6 ] that the temperature increases with an increase in -values . for the final steady state, an analytical solution can be obtained as analytical and numerical results computed at time sec .are presented in fig .numerical results are in good agreement with analytical solutions for the steady state temperatures .insert figure 7the first author was financially supported by the center of excellence for mathematics , university of isfahan .d. mirzaei , m. dehghan , mlpg method for transient heat conduction problem with mls as trial approximation in both time and space domains , cmes computer modeling in engineering & sciences , 72 ( 2011 ) 185 - 210 .j. sladek , v. sladek , ch .hellmich , j. eberhardsteiner , heat conduction analysis of 3-d axisymmetric and anisotropic fgm bodies by meshless local petrov - galerkin method , comput .39 ( 2007 ) 223233 .j. sladek , v .sladek , ch .zhang , transient heat conduction analysis in functionally graded materials by the meshless local boundary integral equation method , computational materials science , 28 ( 2003 ) 494504 .
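as a closing illustration of the gmls recovery of section [ sect - gmls ] , the sketch below computes the coefficient vector of a single functional directly from its action on the shifted and scaled polynomial basis , using the normal - equations form a = w p ( p^t w p )^{-1 } \lambda(p ) together with a gaussian weight ; the precise weight used here and the choice of the laplacian at a point as test functional ( a gmls derivative approximation ) are illustrative assumptions consistent with , but not copied from , the formulas above . the exactness of the recovery on quadratic polynomials is used as a quick check .

```python
# direct gmls recovery sketch: weights a_j such that lambda(u) ~ sum_j a_j u(x_j),
# computed from values of lambda on the local polynomial basis only (no shape functions).
import numpy as np

def poly_basis_2d_deg2(x, z, h):
    # shifted and scaled quadratic basis (1, X, Y, X^2, X*Y, Y^2), with X = (x - z)/h
    X, Y = (x[:, 0] - z[0]) / h, (x[:, 1] - z[1]) / h
    return np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])

def gmls_laplacian_weights(nodes, z, h, c=0.8):
    P = poly_basis_2d_deg2(nodes, z, h)                           # values of the basis at the nodes
    w = np.exp(-np.sum((nodes - z) ** 2, axis=1) / (c * h) ** 2)  # assumed gaussian weight
    # action of the test functional (laplacian at z) on the basis: only X^2 and Y^2 contribute
    lam_p = np.array([0.0, 0.0, 0.0, 2.0 / h**2, 0.0, 2.0 / h**2])
    A = P.T @ (w[:, None] * P)                                    # p^t w p
    return (w[:, None] * P) @ np.linalg.solve(A, lam_p)           # a = w p (p^t w p)^{-1} lambda(p)

rng = np.random.default_rng(4)
z, h = np.array([0.5, 0.5]), 0.1
nodes = z + 1.5 * h * (rng.random((20, 2)) - 0.5) * 2.0           # scattered nodes near z
a = gmls_laplacian_weights(nodes, z, h)
u = lambda pts: pts[:, 0] ** 2 + pts[:, 1] ** 2                   # test function with laplacian 4
print("recovered laplacian:", a @ u(nodes), "(exact value 4)")
```

in dmlpg the same coefficient vector would instead be built for the local weak - form integrals of the basis polynomials , which is exactly where the integration over low - degree polynomials enters .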
|
as an improvement of the _ meshless local petrov galerkin ( mlpg ) _ method , the _ direct meshless local petrov galerkin ( dmlpg ) _ method is applied here to the numerical solution of the transient heat conduction problem . the new technique is based on _ direct _ recoveries of test functionals ( local weak forms ) from values at nodes , without any detour via classical moving least squares ( mls ) shape functions . this leads to a considerably cheaper scheme in which the numerical integrations are done over low - degree polynomials rather than over complicated mls shape functions . this eliminates the main disadvantage of mls - based methods in comparison with finite element methods ( fem ) , namely the cost of numerical integration .
|
appendices ( code listings omitted ) : [ app : hys ] the ` hys_example ` model ; [ app : hys_modspec_code ] modspec ( matlab ) code for ` hys_example ` ; [ app : hys_va_code ] verilog - a code for ` hys_example ` ; [ app : rram_v0_modspec_code ] modspec ( matlab ) code for the rram model ; [ app : rram_v0_va_code ] verilog - a code for the rram model ; ` smoothfunctions.va ` : verilog - a file for smoothing function definitions .
|
existing compact models for memristive devices ( including rram and cbram ) all suffer from issues related to mathematical ill - posedness and/or improper implementation . this limits their value for simulation and design and in some cases , results in qualitatively unphysical predictions . we identify the causes of ill - posedness in these models . we then show how memristive devices in general can be modelled using only continuous / smooth primitives in such a way that they always respect physical bounds for filament length and also feature well - defined and correct dc behaviour . we show how to express these models properly in languages like verilog - a and modspec ( matlab ) . we apply these methods to correct previously published rram and memristor models and make them well posed . the result is a collection of memristor models that may be dubbed `` simulation - ready '' , _ i.e. _ , that feature the right physical characteristics and are suitable for robust and consistent simulation in dc , ac , transient , , analyses . we provide implementations of these models in both modspec / matlab and verilog - a . = = = = [ [ section ] ] in 1971 , leon chua noted that while two - terminal circuit elements relating voltage and current ( _ i.e. _ , resistors ) , voltage and charge ( capacitors ) and current and flux ( inductors ) were well known , no element that directly relates charge and flux seemed to exist . he explored the properties of this hypothetical element and found that its voltage - current characteristics would be those of a resistor , but that if the element were nonlinear , its resistance would change with time and be determined by the history of biasses applied to the device . in other words , the instantaneous resistance of the element would retain some memory of past inputs . chua dubbed this missing element a `` memristor '' , and showed that a telltale characteristic was that its curves would always pass through , regardless of how it was biassed as a function of time. characteristics are `` pinched '' at the origin . ] long after chua s landmark observation , devices with memristive behaviour were found in nature , _ e.g. _ , in the well - publicized nano - crossbar device of stan williams and colleagues , and others as well . it was also realized that many physically observed devices prior to were in fact memristors . physically , present - day memristive nano - devices typically operate by forming and destroying conducting filaments through an insulating material sandwiched between two contacts separated by a small distance . the conducting filaments can be of different types . for example , they can consist of oxide vacancies , by filling which electrons can flow , as in rram ( resistive random access memory ) . in cbram ( conductive bridging ram ) , metal ions that infiltrate the insulator form the conducting filament . in memristors made of si - impregnated silica , conduction occurs via tunnelling between traps . depending on the magnitude and polarity of the voltage applied , the conducting filaments can lengthen or shorten ; it is their length that determines the resistance of the device . basic geometry indicates that the length of the filament(s ) must always be between zero ( _ i.e. _ , there is no filament ) and the distance between the contacts ( _ i.e. 
_ , the filament connects the two contacts ) in other words , the length of the filament(s ) must never be outside the range ] , ] , ] window size is known as the joglekar window : where is a positive integer used to adjust the sharpness of the window . after multiplying window functions , the function used in these models is still smooth and continuous , and the models still in the differential equation format , complying with the model template we have discussed in sec . [ sec : hys ] . as a result , the models are often reported to run reasonably well in transient simulations . however , there are subtle and deeper problems with this approach . the problems can also be illustrated by analyzing the sign and zero - crossings of function . after multiplying by window functions , the zero - crossings of are shown in fig . [ fig : rram_f2 ] ( b ) . the curves consist of three lines : the and lines , and the line . based on the sign of , the left half of the line and the right half of the line consist of unstable dc solutions ; they are unlikely to show up in transient simulations . therefore , when sweeping the voltage between negative and positive values , will move between and . this is the foundation for the model to work in transient simulations . however , based on fig . [ fig : rram_f2 ] ( b ) , the model has several problems in other types of analyses . in dc operating point analysis or dc sweeps , all lines consisting the curves can show up , including those containing unphysical results . for example , when the voltage is zero , any size is a solution ; is not bounded anymore . in homotopy analysis , the intersection of solution lines introduced by the window functions makes the solution curve difficult to track . in particular , it will attempt to track the line where grows without bound . the fact that there is no single continuous solution curve in the state space indicates poor numerical properties of the model in other types of simulation algorithms as well . even in transient analysis , the model wo nt run properly unless we carefully set an initial condition for . if the initial value of is beyond , or if it falls outside this range due to any numerical error , it can start to grow without bound . other window functions are also tried for this approach , _ e.g. _ , biolek and prodromakis windows . but as long as the window function is multiplied to , the picture of dc solutions in fig . [ fig : rram_f2 ] ( b ) stays the same . and it is this introduction of unnecessary dc solutions the modelling artifact that limits the rram model s use in simulation analyses . in our approach , we try to bound variable while keeping the dc solutions in a single continuous curve , illustrated as the curve in fig . [ fig : rram_f2 ] ( c ) . this is inspired by studying the model template ` hys_example ` in sec . [ sec : hys ] . the sign and zero - crossing of for our rram model are closely related to those of the function ( [ eq : hys_f2 ] ) for ` hys_example ` ( shown in fig . [ fig : hys_f2 ] ) . the desired solution curve consists of three parts : curve ` a ` and ` c ` contain the stable solutions ; curve ` b ` contains those that are unstable ( or marginally stable ) . in this way , when sweeping the voltage past zero , variable will start to switch between and . if the sweeping is fast enough , i - v hysteresis will show up . to construct the desired solution curve , we modify the original in ( [ eq : rram_v0_f2 ] ) by adding clipping terms to it . 
our new can be written as where is the original function in ( [ eq : rram_v0_f2 ] ) , and are clipping functions : functions and in ( [ eq : rram_v0_fmin ] ) and ( [ eq : rram_v0_fmax ] ) are smooth versions of step functions : the intuition behind and is to make and when is within ] , ] , $ ] . note that there is in the function . this is to scale the equation for better convergence . we explain this technique in more detail in sec . [ sec : rram_v0_va ] . the code in appendix [ app : rram_v0_modspec_code ] shows how to enter this rram model into mapp . [ sec : rram_v0_va ] having followed the model template discussed in sec . [ sec : hys ] and formulated the rram model in the differential equation format in sec . [ sec : rram_v0_eqn ] , in this section , we discuss the verilog - a model for rram . the verilog - a model is show in appendix [ app : rram_v0_va_code ] . same as in the verilog - a model for ` hys_example ` ( sec . [ sec : hys_va ] ) , we also model the internal state variable in rram as a voltage . we have discussed why this approach results in more robust verilog - a models compared with many alternatives , _ e.g. _ , using `` ` idt ( ) ` '' , implementing time integration inside models , . in this section , we would like to highlight from the provided verilog - a code a few more details in our modelling practices . in the verilog - a code , we can see that is modelled in nano - meters , as opposed to meters . this is not an arbitrary choice ; the intention is to bring the value of this variable to around , at the same scale as other voltages in the circuit . when the simulator solves for an unknown , only a certain accuracy can be achieved , controlled by absolute and relative tolerances . the abstol in most simulator for voltages is set to be . if gap is modelled in meters with nominal values around , it wo nt be solved accurately . apart from the scaling of unknowns , we can also see from the verilog - a code another factor in the implicit equation , scaling down its value . in this rram model , the implicit equation is represented as the kcl at the internal node . the equality in kcl is calculated to a certain accuracy as well often . however , without scaling down , the equation is expressed in nano - meter per second . for rram models , this is a value around . the simulator has to ensure an accuracy of at least 18 digits such that the kcl is satisfied , which is not necessary and often not achievable with double precision . so we scale it by to bring its nominal value to around , just like a regular current in a circuit . note that when explaining the scaling of unknowns and equations , we are using the units nm or nm / s , mainly for readers to grasp the idea more easily . it does nt indicate that certain units are more suitable for modelling than others . the essence of scaling is to make the model work better with simulation tolerances set for unknowns and equations . note that in the verilog - a code , we include the standard ` constants.vams ` file and use physical constants from it . this practice ensures that we are using these constants with their best accuracy ; their values will also be consistent with other models also including ` constants.vams ` . although this is straightforward to understand , it is often neglected in existing models . for example , in the model released in , many constants are used with only two digits of accuracy . a variable named alpha , which can be calculated with 16 digits , is hard - coded to . 
since numerical errors propagate through computations , the best accuracy the model can possibly achieve is limited to two digits , and worse if the inaccurate variables are used in non - linear functions . in the verilog - a code , we have used ` limexp ` , ` smoothstep ` . as discussed earlier , these functions help with convergence greatly and are highly recommended for use in compact models . [ sec : rram_v0_simulation ] in this section , we simulate the rram model in a test circuit with the same schematic as in fig . [ fig : vsrc_hys ] . the transient simulation results are shown in fig . [ fig : rram_v0_tran ] , with the i - v relationship plotted in log scale in fig . [ fig : rram_v0_tran ] ( b ) . the results clearly show pinched hysteresis curves . the model we develop also work in dc and homotopy analyses . - relationship under dc conditions acquired from homotopy analysis are shown in fig . [ fig : rram_v0_homotopy ] . dc sweeps from both directions in this case give the same results since the model does nt have dc hysteresis . the - curve in fig . [ fig : rram_v0_homotopy ] matches our discussion on the solutions in sec . [ sec : rram_v0_eqn ] . note that in the transient results , gap is not perfectly flat at or ; same phenomenon can also be observed in the dc solutions obtained using homotopy . this is because that the clipping functions we use , although fast growing , can not set exact hard limits on the internal unknown . in other words , even when gap is close to or , changing the voltage can still affect gap slightly . this is not a modelling artifact . in fact , this makes the model numerically robust , and at the same time more physical . it maintains the smoothness of equations and reduces the chance for jacobian matrix to become singular in simulation . physically , even when is close to the boundary , changing voltage still causes the device s state to change . the small changes in in this scenario can be interpreted as reflecting the change in device s state , _ e.g. _ , the width of the filament . we conclude that , by making the model equations smooth , we are actually making the model more physical . [ [ section-4 ] ] [ sec : convergence ] a common issue with newly - developed compact models of non - linear devices is that they often do not converge in simulation . in this section , we discuss several techniques in compact modelling that can often improve the convergence of simulation . among these techniques , we focus on the use of spice - compatible limiting functions . we explain the intuition behind this technique and use this intuition to design a limiting function specific to the rram model . in the previous sections , we have already discussed several convergence aiding techniques used in our rram model . one of them is the proper scaling of both unknowns and equations . this improves both the accuracy of solutions and the convergence of simulation . the use of gmin makes sure that the two terminals are always connected with a finite resistance , reducing the chance for the circuit jacobian matrix to become singular during simulation . we have also discussed the use of smooth and safe functions ( ` smoothstep ( ) ` , ` safeexp ( ) ` ) . we highly recommend that compact model developers consider these techniques when they encounter convergence issues with their models . however , the above techniques do not solve all the convergence problems with the rram model . 
in particular , we have observed that the values and derivatives of ( [ eq : rram_v0_f1 ] ) and ( [ eq : rram_v0_f2 ] ) often become very large while the newton raphson ( nr ) iterations are trying different guesses during dc operating point analysis . this is because of the fast - growing sinh functions in the equations . one solution is to use safesinh instead of sinh . the safesinh function uses safeexp / limexp inside to eliminate the fast - growing part with its linearized version , keeping the function values from exploding numerically . although it has some physical justifications , it also has the potential problems of inaccuracy , especially since the exponential relationship is the key to the switching behaviour of rram devices ( sec . [ sec : rram_v0_eqn ] ) . therefore , in this section , we focus on another technique that can keep the fast - growing exp or sinh function intact , but prevent nr from evaluating these functions with large input values . the techniques are known as initialization and limiting ; they were implemented in berkeley spice , for nonlinear devices such as diodes , bjts and mosfets . initialization evaluates these fast - growing nonlinear equations of semiconductor devices with `` good '' voltage values at the first nr iteration ; limiting changes the nr guesses for these voltages in the subsequent iterations , based on both the current guess at each iteration and the value used in the last evaluation . the limiting functions in spice include ` pnjlim ` , ` fetlim ` and ` limvds ` . among them , ` pnjlim ` calculates new p - n junction voltage based on the current nr guess and the last junction voltage being used , in an attempt to avoid evaluating the exp function in the diode equation with large values . this mechanism is applicable to sinh as well . inspired by ` pnjlim ` , we design a ` sinhlim ` that can reduce the chance of numerical exposion for the rram model . [ fig : pnjlim ] [ fig : sinhlim ] ` pnjlim ` calculates the new junction voltage using the mechanism illustrated in fig . [ fig : pnjlim ] . the current nr guess is , which is too large a value for evaluating an exponential function . so ` pnjlim ` calculates the limited version , , in between and . since nr linearized the system equation at in the last nr iteration , and the linearization indicates that the new guess is , what nr actually wants is for the p - n junction to generate the current predicted for . because this prediction is based on the linearization at , the actual current at is apparently far larger than it . therefore , a more sensible choice for the junction voltage should be one that gives out the predicted current . from the above discussion , we can write an equation for the desired : solving from the above equation , we get the core of ` pnjlim ` . from the above formula , the operation of ` pnjlim ` is essentially inverting the diode i - v equation to calculate the desired voltage from the predicted current at . based on the same idea , we can write the limiting function for sinh . as illustrated in fig . [ fig : sinhlim ] , given and the current guess , we can calculate the desired `` current '' ( function value ) , then invert sinh to get the corresponding for function evaluation . such an satisfies which gives out the formulation of ` sinhlim ` : this new limiting function ` sinhlim ` can be easily implemented in any spice - compatible circuit simulator . 
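a compact way to see how this limiting works in code is sketched below . the function follows the inversion idea derived above : the limited argument is the one whose actual sinh value equals the value predicted by the newton raphson linearization at the previously used argument . the threshold parameter crit and the small driver at the end are illustrative choices , not part of the original spice routines , and the hard switch here stands in for the smooth blending used in the actual ` sinhlim ` .

```python
import numpy as np

def sinhlim(x_new, x_old, crit=20.0):
    """Limit the argument of sinh between NR iterations (illustrative sketch).

    If the proposed argument and step are small, use x_new as is.  Otherwise
    return the argument whose actual sinh equals the value that the
    linearization of sinh at x_old predicts for x_new, i.e. solve
        sinh(x_lim) = sinh(x_old) + cosh(x_old) * (x_new - x_old).
    (The version described in this paper is additionally made smooth across
    the threshold; a hard switch is used here for brevity.)
    """
    if abs(x_new) <= crit and abs(x_new - x_old) <= crit:
        return x_new
    predicted = np.sinh(x_old) + np.cosh(x_old) * (x_new - x_old)
    return np.arcsinh(predicted)

# Example: NR proposes a huge jump from x_old = 1 to x_new = 400.
x_old, x_new = 1.0, 400.0
x_lim = sinhlim(x_new, x_old)
print(x_lim)            # a modest value; sinh(x_lim) stays representable
print(np.sinh(x_lim))   # equals the linearized prediction, not sinh(400)
```

the limited argument is then the value that the model actually hands to its sinh evaluation in that iteration , in the same way ` pnjlim ` supplies a limited junction voltage to the diode exponential .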
to demonstrate its effectiveness , we implement a simple two - terminal device with its i - v relationship governed by a sinh function , _ i.e. _ , the device equation is . as sinh is a rapidly - growing function , even a simple circuit with a series connection of a voltage source , a resistor of and this device may not converge if the supply voltage is large . this is because when searching for the solution , plain nr algorithm may try large voltage values as inputs to the model s sinh function , resulting difficulties or failure in convergence . in contrast , spice - compatible nr can use ` sinhlim ` to calculate for use in iterations , preventing using large directly . we run dc operating point analyses on this simple circuit , with nr starting from all - zeros as initial guesses . as shown in table [ tab : sinhiv ] , with the same convergence criteria , the use of ` sinhlim ` improves convergence greatly . . [ tab : sinhiv ] [ cols="^,^,^",options="header " , ] we implement parameterized versions of the ` sinhlim ` function in our rram model to aid convergence ; the code is included in appendix [ app : rram_v0_modspec_code ] . since there are two sinh function used in the rram model , in both and , two limited variables are declared in the model , with two ` sinhlim ` with different parameters used in a vectorized limiting function . many simulators available today are spice - compatible , in the sense that they implement the equivalent limiting technique as in spice . however , we would like to note that the limiting functions available in literature today , 40 years after the introduction of spice , are still limited to only the original ` pnjlim ` , ` fetlim ` and ` limvds ` . the ` sinhlim ` we have developed for rram models , is a new one . moreover , among all these limiting functions , ` sinhlim ` is the only one that is smooth and continuous , making it more robust to use in simulation . [ [ section-5 ] ] [ sec : memristor ] in this section , we apply the modelling techniques and methodology we have developed in previous sections to the modelling of general memristive devices . we use the same model template we have demonstrated in sec . [ sec : hys ] , where specifies the device s i - v relationship , describes the dynamics of the internal unknown . for general memristive devices , there are several equations available for and , from existing models such as the linear and non - linear ion drift models , simmons tunnelling barrier model , team / vteam model , yakopcic s model , . in this section , we examine the reason why they do not work well in simulation , especially in dc analysis . we first summarize the common issues with the and functions used in them , then examine the individual problems of each / function , and list our improvements in table [ tab : f1 ] and table [ tab : f2 ] . as discussed earlier , both , the i - v relationship , and , the internal unknown dynamics , are often highly non - linear and asymmetric _ wrt _ positive and negative voltages ; available and functions often use discontinuous and fast - growing components in them , _ e.g. _ , exponential , sinh functions , power functions with a large exponent , . these components result in difficulty of convergence in simulation . to overcome these difficulties , similar to what we did in sec . [ sec : rram ] for the rram model , we can use smooth and safe functions . the key idea of the design of smooth functions is to combine common elementary functions to approximate the original non - smooth ones . 
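to make the idea concrete , the sketch below gives one common way of building such smooth and safe primitives from elementary functions ; the exact formulas used in mapp or in the verilog - a listings may differ in detail , so these definitions and the parameter values are stand - ins rather than the library implementations .

```python
import numpy as np

def smoothclip(x, k=1e-3):
    # Smooth version of max(x, 0); approaches the hard clip as k -> 0.
    return 0.5 * (x + np.sqrt(x * x + k * k))

def smoothstep(x, k=1e-3):
    # Smooth version of the unit step, rising from 0 to 1 around x = 0.
    return 0.5 * (1.0 + np.tanh(x / k))

def smoothswitch(a, b, cond, k=1e-3):
    # Smooth replacement for "if cond > 0 then b else a".
    s = smoothstep(cond, k)
    return a * (1.0 - s) + b * s

def safeexp(x, x_max=40.0):
    # exp(x) below x_max, continued linearly above it so the value and the
    # slope stay finite instead of overflowing.
    x = np.asarray(x, dtype=float)
    return np.where(x <= x_max,
                    np.exp(np.minimum(x, x_max)),
                    np.exp(x_max) * (1.0 + (x - x_max)))

def safesinh(x, x_max=40.0):
    # sinh built from two safeexp calls, so both tails are linearized.
    return 0.5 * (safeexp(x, x_max) - safeexp(-x, x_max))

def safesqrt(x, k=1e-6):
    # sqrt that never sees a negative argument.
    return np.sqrt(smoothclip(x, k))

print(smoothclip(-0.5), smoothclip(0.5))   # ~0 and ~0.5
print(safeexp(1000.0))                     # large but finite
```

here the parameter k plays the role of the smoothing / safety knob ; smaller values track the original hard functions more closely .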
a parameter common to all these functions , _ aka . _ smoothing factor , is used to control the trade - off between better approximation and more smoothness , which is often synonymous to better convergence . similar ideas apply to safe functions . for the fast - growing functions , their `` safe '' versions limit the maximum slope the functions can reach , then linearize the functions to keep the slopes constant beyond those points . for functions that are not defined for all real inputs , _ e.g. _ , sqrt , log , , their `` safe '' versions clip the inputs using smoothclip such that these functions will never get invalid inputs . specifically , for the available and functions , the ` if - then - else ` statements can be replaced with smoothswitch . the exp and sinh functions can be replaced with safeexp and safesinh . the power functions , _ e.g. _ , pow(a , b ) , can also be replaced with safeexp(b*safelog(a ) ) . we have implemented common smooth and safe functions in mapp . for example , issuing `` help smoothclip '' within mapp will display more information on the usage of smoothclip . for verilog - a , we have implemented these smooth and safe functions as `` ` analog functions ` '' , listed them in a separate file in appendix [ app : smoothfunctions_va_code ] for model developers to use conveniently . the use of smooth and safe functions are more than numerical tricks , and they do not necessarily make models less physical . on the contrary , physical systems are usually smooth . for example , when switching the voltage of a two - terminal device across zero , the current should change continuously and smoothly . therefore , compared with the original ` if - then - else ` statements , the smoothswitch version is likely to be closer to physical reality . the same applies to the safe functions we use in our models . for example , there are no perfect exponential relationships in physical reality . even the growth rate of bacteria , which is often characterized as exponential in time , will saturate eventually . another quantity often modelled using exponential functions is the current through a p - n junction . when the voltage indeed becomes large , the junction does nt really give out next to infinite current . instead , other factors come into play the temperature will become too high that the structure will melt . this is not considered when writing the exponential i - v relationship ; the use of exponential function is not to capture the physics exactly , but more an approximation and simplification of physical reality . so the use of safeexp and safesinh is more than just a means to prevent numerical explosion , but also a fix to the original over - simplified models . improved * + 1 & & can have division - by - zero when . we use then + 2 & & we change exponential function to ` safeexp ( ) ` . + 3 & & we change sinh to ` safesinh ( ) ` , exponential function to ` safeexp ( ) ` . + 4 & & we change sinh to ` safesinh ( ) ` , then smooth the function . + 5 & & we express using : then we change sinh to ` safesinh ( ) ` , exponential function to ` safeexp ( ) ` . + one common problem with existing functions is the range of the internal unknown . we have discussed this problem in sec . [ sec : rram ] in the context of rram device models . the functions available either neglect this issue or use window functions to set the bounds for the internal unknown . from the discussion in sec . 
[ sec : rram ] , using window functions introduces modelling artifacts that limit the usage of the model to only transient simulation . to fix this problem , we apply the same modelling technique using clipping functions in our memristor models . improved * + 1 & linear ion drift model : & no dc hysteresis . does nt ensure . we use the clipping technique to set bounds for . + 2 & nonlinear ion drift model : & no dc hysteresis . does nt ensure . we use the clipping technique to set bounds for . + 3 & simmons tunnelling barrier model : where . & no dc hysteresis . does nt ensure . contains fast - growing functions . we change sinh to ` safesinh ( ) ` , exponential function to ` safeexp ( ) ` , then implement the smooth version of this ` if - then - else ` statement . we use the clipping technique to set bounds for . + 4 & vteam model : & dc hysteresis is modelled by a flat region . we redesign the equation based on fig . [ fig : memristor_f2 ] . where such that when and , it is equivalent to vteam equation in the and regions respectively . we also make the function smooth : and finally , we use the clipping technique to set bounds for . + 5 & yakopcic s model : where and & dc hysteresis is modelled by a flat region . we redesign the equation based on fig . [ fig : memristor_f2 ] . where we also change exponential function to ` safeexp ( ) ` , make the function smooth , then use the clipping technique to set bounds for . + 6 & standford / asu rram model : where & we convert to : then we change sinh to ` safesinh ( ) ` , exponential function to ` safeexp ( ) ` . we also use the clipping technique to set bounds for . + another problem with the available functions is the way they handle dc hysteresis . as discussed earlier , dc hysteresis is observed in forward and backward dc sweeps ; it accounts for the pinched i - v curves when voltage is moving infinitely slow . from the model example ` hys_example ` in sec . [ sec : hys ] , we can conclude that dc hysteresis results from the model s dc solution curve folding backward in voltage , which creates multiple stable solutions of internal state variable at certain voltages . in fact , from the equations of team / vteam model and yakopcic s model , we can see an attempt to model dc hysteresis . however , the way it is done in both these models is to set within a certain voltage range , _ e.g. _ , when voltage is close to 0 . in this way , as long as the voltage is within this range , there are infinitely many solutions for the model , regardless of values of . during transient simulation , will just keep its old value from the previous time point . in dc analysis , if also keeps its old value from the last sweeping point , there can be dc hysteresis . however , since actually has infinitely many solutions within this voltage range , the equation system becomes ill - conditioned . the circuit jacobian matrix can also become singular , since has no control over the value of . homotopy analysis wo nt work with these device models since there is no solution curve to track . even in dc operating point ( op ) analysis , the op can have a random as part of the solution , depending on the initial condition , and if it is not provided , on how the op analysis is implemented . dc sweep results also depend on how dc sweep is written , particularly on the way the old values are used as initial guesses for current steps . may not stay flat . 
] in other words , because of the model is ill - conditioned , the behaviour of the model is specific to the implementation of the analysis and will vary from simulator to simulator . to fix this problem , we modify the available functions such that the solutions form a single curve in state space , as illustrated in fig . [ fig : memristor_f2 ] ( b ) . for each model , this requires different modifications specific to its equations ; we list more detailed descriptions of these modifications in table [ tab : f2 ] . to summarize the problems with existing memristor models and our solutions to them , we fix the nonsmoothness and overflow problems of the existing equations with smooth and safe functions ; we fix the internal state boundry problem with the same clipping function technique we have used for the rram model ; we fix the `` flat '' problem by properly implementing the curve that bends backward for the modelling of dc hysteresis . table [ tab : f1 ] and table [ tab : f2 ] list our approaches in improving the available and functions in more detail . the result is a collection of memristor models , controlled by two variables ( which can be thought of as higher - level model parameters ) , f1_switch and f2_switch . all the combinations of 5 functions and 6 functions constitute 30 compact models for various types of memristors . different and functions describe different underlying physics of the devices , with different levels of accuracy . we would like to note that one particular combination f1_switch = 5 , f2_switch = 6 , is equivalent to the rram model we have discussed in sec . [ sec : rram ] . apart from this combination for rram devices , several other combinations in the general memristor model can also be used for rram devices . for example , when f2_switch = 5 and f2_switch = 4 , our proposed model uses the improved equations from the vteam and yakopcic s models . the range of the dc hysteresis in these models is controlled by two threshold voltages , _ e.g. _ , and for yakopcic s model , and for vteam model . when both these two thresholds are equal to zero , the dc hysteresis disappears , and the models are suitable for rram devices . also , when the two threshold voltages have the same sign , these models can also be used for unipolar memristive devices . they are more general and flexible than the model equations we have discussed in sec . [ sec : rram ] written only for bipolar rram devices . the ideas and techniques underlying these models are likely to also be applicable to new memristive devices and model equations to be developed in the future . [ fig : memristor_osc ] [ fig : memristor_osc_tran ] the modspec and verilog - a files of the proposed general memristor models are listed in appendix [ app : memristor_modspec_code ] and appendix [ app : memristor_va_code ] respectively . they can be used in the same test benches for rrams in sec . [ sec : rram ] . their parameters can also be fitted to generate similar results in fig . [ fig : rram_v0_tran ] and fig . [ fig : rram_v0_homotopy ] . as an extra example , we use f1_switch=2 , f2_switch=5 , corresponding to the improved yakopcic model , and adjust its parameters for a unipolar rram device , connect it with a resistor as shown in fig . [ fig : memristor_osc ] to make an oscillator . then we run both transient simulation and pss analysis with harmonic balance and show their results in fig . [ fig : memristor_osc_tran ] and fig . [ fig : memristor_hb ] . 
these results demonstrate that our model not only runs in dc , transient and homotopy analyses , but also works in pss simulation . [ [ section-6 ] ] [ sec : conclusion ] our study in this paper centers around the compact modelling of memristive devices . memristor models available today do not work well in simulation , especially in dc analysis . their problems come from several main sources . firstly , some models are not in the differential equation format ; they are essentially hybrid models with memory states used for hysteresis . we clarified that the proper modelling of hysteresis should be achieved through the use of an internal state variable and an implicit equation . to make this concept clear , we developed a model template and implemented an example , namely ` hys_example ` , in both modspec and verilog - a . during this process , we examined the common mistakes model developers make when writing internal unknowns and implicit equations in the verilog - a language . we then applied the model template to model rram devices , which led to another common difficulty in memristor modelling , namely enforcing the upper and lower bounds of the internal unknown . we proposed numerical techniques with clipping functions that modify the filament growth equation such that the bounds are respected in simulation , and we discussed the physical justification behind our approach . we then demonstrated that the same techniques can be applied to fix similar problems in many other existing memristor models . as a result , we not only developed a suite of 30 memristor models , all tested to work with many simulation analyses in major simulators , but also took this process as an opportunity to identify and document many good and bad modelling practices . both the resulting models and the techniques used in developing them should be valuable to the compact modelling community .
|
additive manufacturing ( am ) or rapid prototyping technology has received much attention as an innovative manufacturing technique allowing quick fabrication of detailed and complicated objects .recent progress of such technology has improved the manufacturing accuracy and level of detail to a composite or porous material scale ( about 10 m ) of a three - dimensional ( 3d ) structure .thus , am could be applied to the formation of not only macro - scale parts but also new composite or porous materials. such materials would have various effective properties according to the mechanisms of their internal geometries .innovative materials have indeed been developed by integrating mathematical and numerical design methods and am technologies ; e.g. , tissue engineering a bone scaffold having compatibility with human bone in terms of stiffness and permeability and developing materials with a negative poisson s ratio . the greatest advantage of the additive manufacture of innovative materials is fast practical realization . in the conventional development of materials , even though innovative techniques have been developed in the laboratory , issues relating to bulk production and machining need to be overcome to allow practical use . in contrast , a geometry - based material fabricated employing commercial am technology could be directly used in engineering parts . moreover ,functionally graduated characteristics could easily be achieved by spatially varying the internal geometry . a further advantage of am is multi - material fabrication although this function is only realized by photopolymer - type am using commercially available devices . using multiple materials in the internal structure , there are more degrees of freedoms of realizable effective material properties than in the single - material case .effective thermal expansion is a representative example . by forming an internal structure using more than two materials with different positive coefficients of thermal expansion ( ctes ) and voids ,even negative effective thermal expansion could be achieved . with a rise in temperature ,usual thermal expansion occurs within the internal structure of materials .however , the mechanism of the internal structure converts this thermal expansion into an inward deformation of the outer shell and macroscopic negative thermal expansion is then observed .the studies of such internal geometry - based negative thermal expansion materials had been performed by both theoretically and experimentally . by choosing two general materials with known stiffness and cte as base materials ,the cte and stiffness of the composite can be tuned by changing the microstructural shape according to elastic mechanics ; compounds having a negative cte are limited to special cases and the characteristic tuning is an important research topic . however , manufacturability is a major issue relating to internal geometry - based negative - cte materials .first , since the internal structure must be composed of at least two materials having certain shapes , machining and assembly processes are required for each cell. 
this might be critical in mass production .moreover , special techniques such as microfabrication by co - extrusion and reduction sintering or microelectromechanical system fabrication are required to form a small - scale ( mm ) internal structure .am can resolve the above issue since it has the potential to realize multi - material production in almost the same process as that used for a single material by changing the added material according to position .however , only photopolymer - type am is currently available commercially , and the materials of such am have a smaller cte gap than the combination of metals introduced in previous experiments .thus , techniques are required to design an internal geometry that provides effective negative thermal expansion through the combination of materials having a small cte gap .numerical structural optimizations could be powerful tools for such difficult structural design .these optimizations could automatically lead to the optimal structure employing numerical structural analysis and optimization techniques .in particular , topology optimization could achieve the fundamental optimization of the target structure including a change in the number of holes .theoretical studies of internal geometry - based negative thermal expansion based on topology optimization have been conducted . against the background described above, the present study develops an internal geometry - based negative - thermal - expansion material fabricated by multilateral photopolymer am . after measuring physical properties of photopolymers ,the internal geometry is designed to maximize negative thermal expansion employing numerical topology optimization .the effective physical properties of the internal structure are calculated employing numerical homogenization .test pieces composed of the designed internal structure are fabricated by photopolymer am .negative thermal expansion of the internal structure is then experimentally verified by measuring the thermal deformation of the test pieces using a laser scanning dilatometer .the present work investigates the thermal expansion phenomena of a composite porous material composed of an internal structure that has a periodic layout in a plane .we assume the thermal expansion behavior of the material in the internal structure follows the constitutive relation where , , , and are the stress tensor , the elastic tensor , the strain tensor , the cte tensor and the thermal stress tensor and is a temperature change from a reference temperature . in this research , we realize negative thermal expansion by considering the effective ( or macroscopic , averaging ) stiffness and cte of the porous composite .the macroscopic effective physical properties of the periodic structure can be calculated employing numerical homogenization .the effective elastic tensor , cte tensor and thermal stress tensor of the periodic structure composed of a unit cell are calculated as the composite porous internal geometry is designed by optimizing the above effective stiffness and cte .we set a square domain as the base shape of the internal geometry and design it by allocating two types of photopolymers and voids . 
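as a small , self - contained illustration of why the internal geometry , and not just the mixture , matters , the sketch below evaluates two classical one - dimensional homogenization estimates for a two - phase bar ; both are convex combinations of the constituent ctes , so no simple mixture of two positive - cte photopolymers can become negative . the negative effective cte targeted here must therefore come from the bending and hinge mechanisms of the optimized cell . the formulas are the standard voigt ( equal strain ) and reuss ( equal stress ) estimates , not the full periodic homogenization of the equations above , and the material values are placeholders of the same order as typical photopolymer properties , not the measured data .

```python
# Effective CTE of a two-phase bar under the two classical 1-d assumptions.
E1, a1 = 2000.0, 1.0e-4   # stiffer phase (placeholder values, MPa and 1/K)
E2, a2 = 20.0,   2.0e-4   # softer phase with the larger cte (placeholder)

def alpha_series(v1, a1, a2):
    # Phases stacked along the axis (equal stress): plain volume average.
    return v1 * a1 + (1.0 - v1) * a2

def alpha_parallel(v1, E1, a1, E2, a2):
    # Phases side by side (equal strain): stiffness-weighted average.
    v2 = 1.0 - v1
    return (v1 * E1 * a1 + v2 * E2 * a2) / (v1 * E1 + v2 * E2)

for v1 in (0.2, 0.5, 0.8):
    print(v1, alpha_series(v1, a1, a2), alpha_parallel(v1, E1, a1, E2, a2))
# Both estimates always lie between a1 and a2, hence stay positive.
```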
solid isotropic material with penalization ( simp)-based multiphase topology optimization is introduced as the design optimization tool for the internal geometry , which is represented by the layout of three phases , two materials and voids , in the specified domain by defining two artificial density functions and .the elastic modulus and cte are formulated as functions of and , which represent the existence of materials and the kinds of materials respectively .the local young modulus and cte of the design target domain are represented as where , , and are young s modulus and the cte of photopolymers 1 and 2 respectively . in other words , and means that photopolymer 1 exists , and means that photopolymer 2 exists , and means the void exists .since the state can not be identified from intermediate values between 0 and 1 , it should be avoided by contriving the problem and optimizer settings . to realize the isotropic effective cte, symmetry is assumed on the center and diagonal lines of the square design domain .thus , distributions of and are optimized only on the 1/8 domain shown in fig .[ oc ] ( a ) .when only the effective cte is considered in designing a structure having negative thermal expansion , a structure having very low stiffness could result , which would be unsuitable for fabrication and experiment .moreover , the effective thermal stress tensor is preferred over the effective cte both in in terms of the easiness of the calculation of its gradient used in the optimizer .we thus intend to maintain a certain level of effective stiffness while reducing the effective cte by setting the multi - objective function as the steps to the optimization are as follows .we first solve and employing the finite element method ( fem ) and a commercial fem solver comsol multiphysics ( comsol inc .second , the effective physical properties in and and the objective function in are calculated . since the density function is updated by gradient - based algorithms , the first - order gradient of the objective functionis then calculated using the adjoint method .the density functions and are updated by the method of moving asymptotes ( mma ) in the first stage of the optimization . in the latter stage , to obtain a clear shape avoiding intermediate values , the density function is updated using the phase field method . according to the numerical design methodology described above , the photopolymer composite internal geometry is designed .an objet connex 500 ( stratasys ltd . ,usa ) , which is the only commercial photopolymerization manufacturing machine offering multi - material 3d printing , is used to fabricate the test pieces .the machine produces 3d structures by spraying liquid photopolymer onto a build tray in thin layers and exposing the photopolymer to ultraviolet light . in producing a structure from multiple materials , a rigid material ,a rubber - like material and several admixture materials can be used .we measured young s modulus by tensile testing in a temperature - controlled bath at 20 , 30 and 40 and measured cte using a connecting rod dilatometer for several materials between rt and 50 , which is close to 45 , the heat deflection temperature ( astm d648 , 0.45mpa ) of a rigid material . 
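the measured moduli and ctes enter the optimization through the two - density interpolation and the multi - objective function described above . since the exact interpolation formulas are those given by the equations of this section , the snippet below is only a generic simp - style stand - in that shows how such local properties and a weighted objective are typically assembled ; the penalization exponent , the weighting w and the material numbers are illustrative assumptions .

```python
# Generic SIMP-style two-phase interpolation (illustrative stand-in).
# rho1 selects material vs. void, rho2 selects photopolymer 1 vs. 2.
E1, a1 = 2000.0, 1.0e-4      # placeholder properties of photopolymer 1
E2, a2 = 20.0,   2.0e-4      # placeholder properties of photopolymer 2
p = 3                        # penalization exponent

def local_young(rho1, rho2):
    return rho1**p * (rho2 * E1 + (1.0 - rho2) * E2)

def local_cte(rho1, rho2):
    # cte only has meaning where material exists; a simple blend of the two.
    return rho2 * a1 + (1.0 - rho2) * a2

def objective(beta_eff, stiff_eff, w=0.5):
    # Multi-objective mix to be minimized: push the effective thermal stress
    # (beta_eff) down while keeping a measure of effective stiffness up.
    return w * beta_eff - (1.0 - w) * stiff_eff

print(local_young(1.0, 1.0), local_cte(1.0, 1.0))   # pure photopolymer 1
print(local_young(1.0, 0.0), local_cte(1.0, 0.0))   # pure photopolymer 2
print(local_young(0.0, 0.5), local_cte(0.0, 0.5))   # void: zero stiffness
```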
according to the obtained physical properties , we chose verowhiteplus rgd835 and flx9895-dm , which is an admixture material of verowhiteplus and tangoblackplus , because this combination achieves a certain level of stiffness and a certain cte difference in the temperature range between rt and 40 .the measured physical properties are listed in table [ tablee ] and plotted in fig .poisson s ratios of these materials were measured only at 40 and they are both about 0.48 .although ctes of both materials have strong temperature dependency and dispersion within this temperature range , we found a difference in the cte on average , which is a necessary condition for a negative effective cte . with the aim of realizing a negative effective cte at 40 , which is the temperatureat which the cte difference between the materials is a maximum , we set young s modulus and the cte in and as follows according to the measured average physical properties : we employed laser scanning to measure the thermal deformation since it is suitable for relatively large specimens .the testing device was an sl-1600a ( shinagawa refractories co. , ltd . ) , which measures the axial thermal deformation of rod - shaped specimens by laser scanning . by arranging the eight base stls shown in fig .[ oc ] ( b ) in two lines , the stl model of the test piece was constructed .the size of each internal structure was mm and the total size of the test piece was mm .figure [ tp ] shows the outline and close - up pictures of the fabricated test piece . from the close - up picture of the internal geometry, it can be said that almost the same shape as the original stl model was achieved except for small branch - like features .the thermal expansions of the long sides of three test pieces were measured in the temperature range between rt and 50 with a temperature rising rate of 1 / min .the measured strains are plotted in fig .[ results ] .negative thermal expansion was clearly observed for each result between rt and about 45 .the early thermal expansions are linear , and corresponding ctes were calculated as 1.121.18k between rt and 34 . since the variation in ctes is relatively low in such a temperature range according to the bulk cte results shown in fig .[ cte ] , stable differences of ctes of the two materials are guessed for this linear region . however , beyond approximately 34 to 36 , the cte differences decreased and negative thermal expansion reduced or became positive . owing to the variability of the physical properties of photopolymers ,the performances of the fabricated porous composite also varied widely .however , planar negative thermal expansion characteristics could be clearly designed for photopolymer porous composites . 
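the effective cte quoted above is essentially the slope of the measured strain - versus - temperature curve in its linear range . a minimal version of that reduction , with made - up numbers in place of the dilatometer readings , looks as follows ; the real data are those plotted in fig . [ results ] and were fitted only between rt and 34 .

```python
import numpy as np

# Hypothetical (temperature [degC], strain [-]) readings standing in for one
# dilatometer run; values and magnitudes are illustrative only.
T      = np.array([20.0, 25.0, 30.0, 34.0])
strain = np.array([0.0, -5.0e-4, -1.0e-3, -1.4e-3])

# Effective cte = slope of the linear fit of strain against temperature,
# restricted to the range where the response is still linear.
cte_eff = np.polyfit(T, strain, 1)[0]
print(f"effective cte = {cte_eff:.3e} 1/K")   # negative slope -> negative cte
```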
in summary , we fabricated a porous material with planar negative thermal expansion employing multi - material photopolymer am .the internal geometry was designed using topology optimization .the mechanism of the effective negative thermal expansion is inward deformation resulting from the combination of bending and hinge mechanisms .the 2d optimal internal geometry was converted to an stl model and assembled as a test piece .the thermal expansion of the test piece was measured with a laser scanning dilatometer .negative thermal expansion corresponding to less than was certainly observed for each test piece of the experiment .owing to the temperature dependence and variable physical properties of the photopolymer , the control of the performance of the developed materials appears difficult .however , although multi - material am is currently commercially available only in the case of a photopolymer , multi - material metal am is under development . if such am is realized , the design process proposed in this paper could be used for materials that are more stiff and stable than photopolymer . m. castilho , m. dias , u. gbureck , j. groll , p. fernandes , i. pires , b. gouveia , j. rodrigues , and e. vorndran . fabrication of computationally designed scaffolds by low temperature 3d printing . ,5(3):035012 , 2013 . p.zhang , j. toman , y. yu , e. biyikli , m. kirca , m. chmielus , and a. c. to .efficient design - optimization of variable - density hexagonal cellular structure by additive manufacturing : theory and validation ., 137(2):021004 , 2015 . c. a. steeves , s. l. s. lucato , m. he , e. antinucci , j. w. hutchinson , and a. g. evans .concepts for structurally robust materials that combine low thermal expansion with high stiffness . , 55(9):18031822 , 2007 .d. c. hofmann , s. roberts , r. otis , j. kolodziejska , r. p. dillon , j. suh , a. a. shapiro , z. liu , and j. borgonia .developing gradient metal alloys through radial deposition additive manufacturing . , 4 , 2014 .the temperature range is between rt and 50 .rts of measurements no .1 , 2 and 3 are 17.45 , 19.45 and 22.85 respectively .approximation lines are plotted for the data within the temperature range between rt and 34 . ]
|
additive manufacturing ( am ) could be a novel method of fabricating composite and porous materials whose various effective properties are based on the mechanisms of their internal geometries . materials fabricated by am could rapidly be put to industrial use , since they can easily be embedded in the target part employing the same am process used for the bulk material . furthermore , multi - material am has greater potential than the usual single - material am for producing materials with tailored effective properties . negative thermal expansion is a representative effective material property , realized by designing a composite made of two materials with different coefficients of thermal expansion . in this study , we developed a porous composite having planar negative thermal expansion by employing multi - material photopolymer am . after measurement of the physical properties of the bulk photopolymers , the internal geometry was designed by topology optimization , the structural optimization approach best suited to simultaneously minimizing the effective thermal stress and maximizing the stiffness . the designed structure was converted to a three - dimensional stl model , which is a native digital format of am , and assembled as a test piece . the thermal expansions of the specimens were measured using a laser scanning dilatometer . the test pieces clearly showed negative thermal expansion around room temperature .
|
solar polar plumes are bright rays located at coronal holes or polar areas .they can have different heights depending on particular wavelength of the observation .solar polar plumes are visiable from the base until approximately 1.2 if observed in ultraviolet wavelength.(eg .stereo telescope euvi ) .observations in x - rays(eg._hinode _ xrt ) are mainly only for hot gas distributed on the solar surface and bright points of the base of polar plumes . because they locate at coronal hole regions where the fast solar wind is originated ,solar polar plumes have been associated with a possible solar fast wind source .however , there still lacks evidence about this connection in terms of the underneath physical process. have shown plumes have higher electron density than interplumes but approach that of interplume regions as increasing height. have also shown the h i ly line has narrower width of plumes than that of interplumes , corresponding with a lower temperature . has the same results as by analyzing different uv lines(o vi 1032 line width is lower by 10%-15%). have studied intensity ratios of uv spectral lines in the low altitude of the corona .they found an increasing of temperature with respect to heights in the background corona(interplumes),but a similar temperature for lower parts of plumes .the geometry of polar plumes have been studied and argued for a long time .there are mainly 2 kinds of opinions regarding to the shape of it : quasi - cylindrical plume or curtain plume .curtain plumes are denser plasma sheet appearing as radial rays when observed edge on. have been expanded this curtain plume model with microplumes network. quasi - cylindrical geometry of plumes have been more widely accepted .many scholars have been investigated how plumes expand cylindrically .some work have shown that the density structure of coronal holes or polar plumes follow a radial expansion ;while some others concluded a superadial expansion .however , there still lacks a quantitative plume shape model .this is what this paper wants to investigate .stereo observatory was launched at oct.25th 2006 .data collected to study the width of the plume s cross section in this paper are all taken by stereo secchi with the wavelength of 171 .i first assumed plumes have an expanding cylindrical tube shape with a circular cross section .observations have shown that the light intensity of the central axis along a plume reduces as enhanced altitude .the light intensity variation across any plume of some height appears toughly as a gaussian curve .a cross section width or diameter was measured by full width half maximum(fwhm ) principle at 4 different heights from the sun disk center : 1.04,1.10,1.16 and 1.20 and did this for total 31 plumes .some measurements at 1.20 are excluded if they are too fuzzy .because of lacking enough statistics , a polynomial function with 4 parameters was assumed to approximate how the diameter of plumes vary as increasing height .these 4 unknown parameters of the polynomial function were calculated analytically from 4 measured average cross section diameters .notice that this polynomial function model only describes lower part of plumes , particularly lower than 1.2 normally observed by euv .the standard deviation of each calculated average diameter is also shown in table 1 .this standard deviation may be linked with the internal structure of plumes and can not represent purely as statistics error .figure 1 shows how a width was measured by fwhm and where the plot of pixel value versus 
position along a circular circumference was taken from the image . figure [ fig : myfig3 ] contains the graph of the model together with the measured data points ; the data points lie on the model curve by construction , since the model interpolates them exactly .

table 1 : measured average cross - section diameters and their standard deviations at four heights .

  height from sun center    average diameter    standard deviation
  1.04                      0.0614516           0.0171694
  1.10                      0.0703548           0.0213363
  1.16                      0.0841613           0.0242818
  1.20                      0.0865333           0.0200067

the final model obtained is as follows , where is the plume diameter and is the height from the sun center :

[ fig : myfig3 ] diameters of each plume at different heights were measured by hand in this paper . the samples contain just 31 plumes , considering the balance between accuracy and timing . if an automatic recognition computer program were used , a large number of samples could be analyzed to obtain good statistics .

fisher , r. , & guhathakurta , m. , 1995 , , v.447 , p.l39
gabriel , a. , bely - dubau , f. , tison , e. , wilhelm , k. , 2009 , , 700 , 551
giordano , s. , antonucci , e. , benna , c. , et al . , esa sp-404 , sep. , 1997
hassler , d. m. , wilhelm , k. , lemaire , p. , schühle , u. , 1997 , sol . phys . , 175 , 375
kohl , j. l. , esser , r. , gardner , l. d. , et al . , , 162 , 313
raouafi , n.-e . , harvey , j. w. , solanki , s. k. , 2007 , , 658 , 643
wilhelm , k. , marsch , e. , dwivedi , b. n. , 1998 , , 500 , 1023
king , i. r. , 1966 , agu conf . , jul . , 1999
young , p. r. , klimchuk , j. a. , mason , h. e. , 1999 , , 350 , 286
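for reference , the interpolation step described in this section , solving a four - parameter polynomial from the four measured average diameters via a vandermonde system , can be reproduced directly from table 1 . the snippet below is such a reconstruction ; the resulting coefficients are a numerical illustration , not values quoted from the original paper .

```python
import numpy as np

# Measured average cross-section diameters (table 1).
h = np.array([1.04, 1.10, 1.16, 1.20])            # heights from sun center
d = np.array([0.0614516, 0.0703548, 0.0841613, 0.0865333])

# Four unknown coefficients of d(h) = c0 + c1*h + c2*h^2 + c3*h^3,
# obtained exactly by solving the Vandermonde system (4 points, 4 unknowns).
V = np.vander(h, N=4, increasing=True)
c = np.linalg.solve(V, d)
print("coefficients c0..c3:", c)

# The model passes through every data point by construction.
print("residuals:", V @ c - d)
```

the fitted curve necessarily passes through the four data points , which is why the data points lie on the model curve in figure [ fig : myfig3 ] .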
|
solar polar plumes are bright radial rays rooted at the sun's polar areas . they are widely believed to have the structure of an expanding tube . a polynomial function with four parameters was assumed to represent the change of the cross - section diameter as a function of height , and the four unknown parameters were calculated from the measured average widths of a total of 31 plumes at 4 heights ( 1.04 , 1.10 , 1.16 and 1.20 ) .
|
we are motivated by the simulation of active plasma resonance spectroscopy , which is a well established plasma diagnostic technique . to study the model with numerical simulations , we concentrate on an abstract kinetic model , which describes the dynamics of the electrons in the plasma by a boltzmann equation . the boltzmann equation is coupled with the electric field , and we obtain a system of coupled partial differential equations . the paper is outlined as follows . in section [ modell ] we present our mathematical model and a possible reduced model for the further approximations . the functional analytical setting with the higher order differential equations is discussed in section [ higher ] . the splitting schemes are presented in section [ splitt ] . numerical experiments are given in section [ num ] . in section [ concl ] we summarize our results . in the following , a model is presented , motivated by , and . the models consider a fluid dynamical approach to the natural ability of plasmas to resonate near the electron plasma frequency . here we specialize to an abstract kinetic model that describes the dynamics of the electrons in the plasma and allows for this resonance . the boltzmann equation for the electrons is given as and boundary conditions are postulated at the boundaries of ( the plasma ) . in front of the materials we assume complete reflection of the electrons due to the sheath , with and the components parallel and perpendicular to the surface normal vector . is the electric field . we consider the abstract homogeneous cauchy problem in a banach space : where are bounded operators , is the corresponding norm in , and is the induced operator norm . for the transformation we have the following assumptions : [ assum1 ] 1 .) the function is given as : otherwise we solve a non - autonomous equation . 2 .)
we assume that the characteristic polynomial : has solution of complex valued matrices in , given as : the higher order differential equation ( [ diff_1 ] ) can be decoupled with the assumptions [ assum1 ] to the following differential equation : where the analytical solution is given as : and are given via the initial conditions .the solutions can be derived via the characteristics polynomial ( idea of scalar linear differential equations ) and the idea of the superposition of the linear combined solutions .the initial conditions are computed by solving the vandermode matrix , see the ideas in .+ we have to solve : a further simplification can be done to rewrite the integral - differential equation in two first order differential equations .later such a reduction allows us to apply fast iterative splitting methods .the higher order differential equation ( [ diff_1 ] ) can be transformed with the assumptions [ assum1 ] to two first order differential equation : where we have for .the analytical solution are given as : the analytical solution of the first order differential equation ( [ diff_2_1 ] ) and ( [ diff_2_2 ] ) are given by each characteristic polynomial : while the solution is given as with the notations : and therefore the analytical solution is given as ( [ ana_1 ] ) .therefore this is the solution of our integro - differential equation ( [ diff_1 ] ) with the assumptions [ assum1 ] .the operator - splitting methods are used to solve complex models in the geophysical and environmental physics , they are developed and applied in , and .this ideas based in this article are solving simpler equations with respect to receive higher order discretization methods for the remain equations . for this aimwe use the operator - splitting method and decouple the equation as follows described . in the following we concentrate on the iterative - splitting method .the following algorithm is based on the iteration with fixed splitting discretization step - size , namely , on the time interval ] we solve the following sub - problems consecutively for . and . ) where are the number of equations .further is the known split approximation at the time level .the split approximation at the time - level is defined as .let us consider the abstract cauchy problem in a banach space : where is the constant based on the initial conditions , further are given linear operators being generators of the -semi - group and is a given element .then the iteration process ( [ gleich_kap33a])([gleich_kap33b ] ) is convergent and the and the rate of the convergence is of second order .the proof is done in the work of geiser .the algorithm is given as : for each , where is the known split approximation at the previous time level .further is a decomposition of the matrix .we reformulate to an algorithm that deals only with real numbers and rewrite : we have the following algorithm : in the following , we have two iteration processes : * first iteration process : iterates over the decomposition of the matrices . *second iteration process : iterates over the real and imaginary parts .we start with , first we iterate over if or the iteration error over is less then we iterate over .further is the known split approximation at the time level , cf . .further is a decomposition of the matrix .in the following , we present different examples . 
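before the concrete examples , a minimal sketch of the iterative splitting step itself may help . the fragment below advances one time step of du/dt = ( a + b ) u by iterating the two sub - problems described above , with the lagging iterate supplying the frozen coupling term ; the implicit - euler inner solver , the number of sweeps and the small matrices are illustrative choices , not those of the experiments that follow .

```python
import numpy as np

def iterative_splitting_step(A, B, u0, dt, iters=3):
    """One step of iterative operator splitting for du/dt = (A + B) u.

    Each sweep solves
        du_i/dt     = A u_i + B u_{i-1}
        du_{i+1}/dt = A u_i + B u_{i+1}
    on [t, t+dt], with the previous iterate supplying the frozen term.
    Implicit Euler is used for the inner solves (an illustrative choice).
    """
    n = len(u0)
    I = np.eye(n)
    u_prev = u0.copy()                  # u_{i-1}, initialized with u(t)
    for _ in range(iters):
        # solve (I - dt*A) u_i = u0 + dt * B u_prev
        u_i = np.linalg.solve(I - dt * A, u0 + dt * (B @ u_prev))
        # solve (I - dt*B) u_{i+1} = u0 + dt * A u_i
        u_prev = np.linalg.solve(I - dt * B, u0 + dt * (A @ u_i))
    return u_prev

# Tiny demonstration with illustrative 2x2 operators.
A = np.array([[-1.0, 0.2], [0.0, -0.5]])
B = np.array([[-0.3, 0.0], [0.1, -0.8]])
u = np.array([1.0, 1.0])
dt = 0.1
for _ in range(10):
    u = iterative_splitting_step(A, B, u, dt)
print(u)
```

for the complex - valued first order systems used below , the same sweeps are applied to the real and imaginary parts , which is what the one - side and two - side variants in the experiments iterate over .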
we deal with a simpler integro - differential equations : ,\end{aligned}\ ] ] and the transformed second order differential equation is given as : and the operators for the splitting scheme are given as : while is the transposed matrix of .the matrices are given as the figure [ first_1 ] present the numerical errors between the exact and the numerical solution .here we obtain results for one - side and two - side iterative schemes on operators and .the computational results are given in the figure [ first_2 ] , we present the one - side and two - side iterative results .the figure [ first_3 ] present the numerical errors between the exact and the numerical solution for the optimized iterative schemes . herewe obtain results for one - side and two - side iterative schemes on operators and . for the computations , we see the benefit of the optimal iterative schemes , which applied the two iterative steps of the two solutions in one scheme ,see section [ splitt ] .the best results are given by the one - side iterative scheme with respect to the operator .we deal with a simple third order differential equations : , \\ & & c(0 ) = ( 1 , \ldots , 1)^t \in { { \mathbb{c}}}^m , \\ & & c'(0 ) = \frac{1- \sqrt{2}}{3 } a^{1/3 } c(0 ) , \\ & & c''(0 ) = \frac{1}{3 } a^{2/3 }c(0 ) , \end{aligned}\ ] ] , is sufficient smooth ( ) and we have .the transformed first order differential equations are given as : where and are given with respect to the initial conditions and are given as .further the operators for the splitting scheme for the three iterative splitting schemes are given as : the matrix is given as here , we deal with the following splitting schemes : * is computed by a scalar iterative scheme .* are computed by a vectorial iterative scheme ( because of real and imaginary parts ) . for have : where and for and the solution is given as . for have : where and for and the solution is given as . for have : where and for and the solution is given as .the solution is given as : .the computational results for the optimized iterative schemes are given in the figure [ first_4 ] , we present the one - side and two - side iterative results . for the computations , we see the benefit of the optimal iterative schemes . while we deal with real and imaginary parts , it is important to reduce the computational costs .we applied in one scheme the real and imaginary solution , see section [ splitt ] .the best results are given by the one - side iterative scheme with respect to the operator .we present the coupled model for a transport model for deposition species in a plasma environment .we assume the flow field is computed by the plasma model and the transport of the deposition species with a transport - reaction model .based on the physical effects , we deal with higher order differential equations ( scattering parts , reaction parts , etc . ) .we validate a novel splitting schemes , that embedded the real and imaginary parts of the solutions .standard iterative splitting schemes can be extended to such complex iterative splitting schemes .first computations help to understand the important modeling of the plasma environment in a cvd reactor with scattering and higher order time - derivative parts . in future , we work on a general theory of embedding the complex schemes to standard splitting schemes .
|
in this paper we present an extension of standard iterative splitting schemes to multiple splitting schemes for solving higher order differential equations . we are motivated by dynamical systems that occur in the dynamics of the electrons in a plasma , described by a simplified boltzmann equation , and by oscillation problems in spectroscopy , described by wave equations . the motivation arose from the simulation of active plasma resonance spectroscopy , which is used as a plasma diagnostic technique , see , and . * keywords * : kinetic model , neutron transport , dynamics of electrons , transport equation , splitting schemes , semi - group . + * ams subject classifications . * 35k25 , 35k20 , 74s10 , 70g65 .
|
statistical language models estimating the distribution of various natural language phenomena are crucial for many applications . in machine translation, it measures the fluency and well - formness of a translation , and therefore is important for the translation quality , see and etc .common applications of lms include estimating the distribution based on n - gram coverage of words , to predict word and word orders , as in and .the independence assumption for each word is one of the simplifying method widely adopted .however , it does not hold in textual data , and underlying content structures need to be investigated as discussed in .utf8gbsn [ fig - example ] we model the prediction of phrase and phrase orders . by considering all word sequences as phrases ,the dependency inside a phrase is preserved , and the phrase level structure of a sentence can be learned from observations .this can be considered as an n - gram model on the n - gram of words , therefore word based lm is a special case of phrase based lm if only single - word phrases are considered .intuitively our approach has the following advantages : 1 ) _ long distance dependency _ : the phrase based lm can capture the long distance relationship easily . to capture the sentence level dependency ,e.g. between the first and last word of the sentence in table [ fig - example ] , we need a 7-gram word based lm , but only a 3-gram phrase based lm , if we take `` played the basketball '' and `` the day before yesterday '' as phrases .2 ) _ consistent translation unit with phrase based mt _ : some words may acquire meaning only in context , such as day " , or the " in the day before yesterday " in table [ fig - example ] . consideringthe frequent phrases as single units will reduce the entropy of the language model .more importantly , current mt is performed on phrases , which is taken as the translation unit .the translation task is to predict the next phrase , which corresponds to the phrased based lm .3 ) _ fewer independence assumptions in statistical models _ : the sentence probability is computed as the product of the single word probabilities in the word based n - gram lm and the product of the phrase probabilities in the phrase based n - gram lm , given their histories . the less words / phrases in a sentence , the fewer mistakes the lm may contain due to less independence assumption on words / phrases .once the phrase segmentation is fixed , the number of elements via phrase based lm is much less than that via the word based lm .therefore , our approach is less likely to obtain errors due to assumptions .4 ) _ phrase boundaries as additional information _ : we consider different segmentation of phrases in one sentence as a hidden variable , which provides additional constraints to align phrases in translation .therefore , the constraint alignment in the blocks of words can provide more information than the word based lm .[ [ comparison - to - previous - work ] ] comparison to previous work + + + + + + + + + + + + + + + + + + + + + + + + + + + in the dependency or structured lm , phrases corresponding to the grammars are considered , and dependencies are extracted , such as in and in . 
however , in the phrase based smt , even phrases violating the grammar structure may help as a translation unit .for instance , the partial phrase the day before " may appear both in the day before yesterday " and the day before spring " .most importantly , the phrase candidates in our phrase based lm are same as that in the phrase based translation , therefore are more consistent in the whole translation process , as mentioned in item 2 in section 1 .some researchers have proposed their phrase based lm for speech recognition . in and ,new phrases are added to the lexicon with different measure function . in ,a different lm was proposed which derived the phrase probabilities from a language model built at the lexical level .nonetheless , these methods do not consider the dependency between phrases and the re - ordering problem , and therefore are not suitable for the mt application .we are given a sentence as a sequence of words , where is the sentence length . in the word based lm , the probability of a sentence to denote general probability distributions with ( almost ) no specific assumptions .in contrast , for model - based probability distributions , we use the generic symbol .]is defined as the product of the probabilities of each word given its previous words : the positions of phrase boundaries on a word sequence is indicated by and , where , and is the number of phrases in the sentence .we use to indicate that the -th phrase segmentation is placed after the word and in front of word , where . is a boundary on the left side of the first word , which is defined as , and is always placed after the last word and therefore equals .an example is illustrated in table [ fig - example ] .the english sentence ( ) contains seven words ( ) , where denotes john " , etc . the first phrase segmentation boundary is placed after the first word , and the second boundary is after the third word ( ) and so on .the phrase sequence in this sentence have a different order than that in its translation , on the phrase level .hence , the phrase based lm advances the word based lm in learning the phrase re - ordering .[ [ model - description ] ] ( 1 ) model description + + + + + + + + + + + + + + + + + + + + + given a sequence of words and its phrase segmentation boundaries , a sentence can also be represented in the form of a sequence of phrases , and each individual phrase is defined as in phrase based lm , we consider the phrase segmentation as hidden variable and the equation [ eq - wordlm ] can be extended as follows : [ [ sentence - probability ] ] ( 2 ) sentence probability + + + + + + + + + + + + + + + + + + + + + + + + for the segmentation prior probability , we assume a uniform distribution for simplicity , i.e. , where the number of different , i.e. 
if not considering the maximum phrase or phrase n - gram length ; to compute the , we consider either two approaches : * sum model ( baum - welch ) + we consider all segmentation candidates .equation [ eq - plm ] is defined as * max model ( viterbi ) + the sentence probability formula of the second model is defined as in practice we select the segmentation that maximizes the perplexity of the sentence instead of the probability to consider the length normalization .[ [ perplexity ] ] ( 3 ) perplexity + + + + + + + + + + + + + + sentence perplexity and text perplexity in the sum model use the same definition as that in the word based lm .sentence perplexity in the max model is defined as ^{-1/j}\nonumber\ ] ] .[ [ parameter - estimation ] ] ( 4 ) parameter estimation + + + + + + + + + + + + + + + + + + + + + + + + we apply maximum likelihood to estimate probabilities in both sum model and max model : where is the frequency of a phrase .the uni - gram phrase probability is , and is the frequency of all single phrases , in the training text .since we generate exponential number of phrases to the sentence length , the number of parameters is huge .therefore , we set the maximum n - gram length on the phrase level ( note not the phrase length ) as in experiments .[ [ smoothing ] ] ( 5 ) smoothing + + + + + + + + + + + + + for the unseen events , we perform good - turing smoothing as commonly done in word based lms . moreover , we interpolate between the phrase probability and the product of single word probabilities in a phrase using a convex optimization: where phrase is made up of words .the idea of this interpolation is to make the probability of a phrase consisting of of words smooth with a -word unigram probability after normalization . in our experiments, we set for convenience .[ [ algorithm - of - calculating - phrase - n - gram - counts ] ] ( 6 ) algorithm of calculating phrase n - gram counts + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the training task is to calculate n - gram counts on the phrase level in equation [ eq - count1 ] . given a training corpus , where there are sentences ( ) , our goal is to to compute , for all phrasen - grams that the number of phrases is no greater than .therefore , for each sentence , we should find out every -gram phrases that .we do dynamic programming to collect the phrase n - grams in one sentence : where is the auxiliary function denoting the multiset of all phrase n - grams or unigram ending at position ( ) . denotes the starting word position of the last phrase in the multiset .the is a multiset , and means to append the element to each element in the multiset . denotes the union of multisets . after appending , we consider all that is no less than and no greater than .the phrase counts is the sum of all phrase n - grams from all sentences , with each sentence , and is the number of elements in a multiset : is an ongoing work , and we performed preliminary experiments on the iwslt task , then evaluated the lm performance by measuring the lm perplexity and the mt translation performance . 
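as a concrete illustration of the two models defined above , the sketch below scores a sentence by brute force over all 2^(j-1) segmentations under the uniform segmentation prior : the sum model accumulates the probability of every segmentation , while the max model keeps the best single segmentation ( the length - normalized selection used in practice is omitted ) . the phrase probability is supplied as a callback , and the toy model at the end is a placeholder rather than the estimator of the paper ; the exhaustive enumeration also makes clear why the experiments that follow restrict the sentence length .

```python
import math
from itertools import combinations

def sentence_scores(words, phrase_logprob, max_order=3):
    """Log-probability of a sentence under the sum and the max phrase based LM.

    phrase_logprob(phrase, history) -> log p(phrase | history), where history
    is the tuple of up to max_order - 1 preceding phrases.  A uniform prior
    1 / 2**(J-1) is placed over the segmentations.
    """
    J = len(words)
    log_prior = -(J - 1) * math.log(2.0)
    seg_logprobs = []
    for r in range(J):                                   # enumerate segmentations
        for cuts in combinations(range(1, J), r):
            bounds = (0,) + cuts + (J,)
            phrases = [tuple(words[a:b]) for a, b in zip(bounds, bounds[1:])]
            lp = sum(phrase_logprob(ph, tuple(phrases[max(0, k - max_order + 1):k]))
                     for k, ph in enumerate(phrases))
            seg_logprobs.append(lp)
    m = max(seg_logprobs)
    log_sum = log_prior + m + math.log(sum(math.exp(lp - m) for lp in seg_logprobs))
    log_max = log_prior + m                              # best single segmentation
    return log_sum, log_max

toy = lambda phrase, history: math.log(0.1)              # placeholder phrase model
print(sentence_scores("the day before yesterday".split(), toy))
```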
because of the computational requirements , we only employed sentences which contain no more than 15 words in the training corpus and no more than 10 words in the test corpora ( dev2010 , on tst2010 and on tst2011 ) , as shown in table [ tab - data ] . we took the word based lm in equation [ eq - wordlm ] as the baseline method ( base ) . we calculated the perplexities of tst2011 with different n - gram orders using both the sum model and the max model , with and without smoothing ( s. ) as in section 2 . table [ tab - ppl ] shows that the perplexities of our approaches are all lower than those of the baseline . for mt , we selected the single best translation output based on the lm perplexity of the 100-best translation candidates , using different lms as shown in table [ tab - bleu ] . the max model along with smoothing outperforms the baseline method on all three test sets , with bleu score increases of 0.3% on dev2010 , 0.45% on tst2010 , and 0.22% on tst2011 , respectively . table [ tab - outputs ] shows two examples from tst2010 , where we can see that our max model generates better selection results than the baseline method in these cases . we showed preliminary results that a phrase based lm can improve the performance of mt systems as well as the lm perplexity . we presented two phrase based models which consider phrases as the basic components of a sentence and perform exhaustive search . our future work will focus on efficiency for a larger data track as well as improvements to the smoothing methods .
|
we consider phrase based language models ( lms ) , which generalize the commonly used word level models . similar concepts of phrase based lms appear in speech recognition , but these are rather specialized and thus less suitable for machine translation ( mt ) . in contrast to the dependency lm , we first introduce exhaustive phrase - based lms tailored for mt use . preliminary experimental results show that our approach outperforms word based lms with respect to perplexity and translation quality .
|
during the last decades , considerable efforts have been made to elucidate the genetic basis of rare and common human diseases .the discovery of so - called _ disease genes _ , whose disruption causes congenital or acquired disease , is indeed important both towards diagnosis and towards new therapies , through the elucidation of the biological bases of diseases .traditional approaches to discover disease genes first identify chromosomal regions likely to contain the gene of interest , e.g. , by linkage analysis or study of chromosomal aberrations in dna samples from large case - control populations .the regions identified , however , often contain tens to hundreds of candidate genes . finding the causal gene(s ) among these candidatesis then an expensive and time - consuming process , which requires extensive laboratory experiments .progresses in sequencing , microarray or proteomics technologies have also facilitated the discovery of genes whose structure or activity are modified in disease samples , on a full genome scale .however , again , these approaches routinely identify long lists of candidate disease genes among which only one or a few are truly the causative agents of the disease process , and further biological investigations are required to identify them . in both cases , it is therefore important to select the most promising genes to be further studied among the candidates , i.e. , to _ prioritize _ them from the most likely to be a disease gene to the less likely . + gene prioritization is typically based on prior information we have about the genes , e.g. , their biological functions , patterns of expression in different conditions , or interactions with other genes , and follows a `` guilt - by - association '' strategy : the most promising candidates genes are those which share similarity with the disease of interest , or with other genes known to be associated to the disease .the availability of complete genome sequences and the wealth of large - scale biological data sets now provide an unprecedented opportunity to speed up the gene hunting process .integrating a variety of heterogeneous information stored in various databases and in the literature to obtain a good final ranking of hundreds of candidate genes is , however , a difficult task for human experts .unsurprisingly many computational approaches have been proposed to perform this task automatically via statistical and data mining approaches .while some previous works attempt to identify promising candidate genes without prior knowledge of any other disease gene , e.g. , by matching the functional annotations of candidate genes to the disease or phenotype under investigation , many successful approaches assume that some disease genes are already known and try to detect candidate genes which share similarities with known disease genes for the phenotype under investigation or for related phenotypes .these methods vary in the algorithm they implement and in the data they use to perform gene prioritization . 
for example , endeavour and related work use state - of - the - art machine learning techniques to integrate heterogeneous information and rank the candidate genes by decreasing similarity to known disease genes , while prince uses label propagation over a protein - protein interaction ( ppi ) network and is able to borrow information from known disease genes of related diseases to find new disease genes .we refer the reader to for a recent review of gene prioritization tools available on the web .+ here we propose prodige , a new method for prioritization of disease genes based on the guilt - by - association concept .prodige assumes that a set of gene - disease associations is already known to infer new ones , and brings three main novelties compared to existing methods .first , prodige implements a novel machine learning paradigm to score candidate genes .while existing methods like those of score independently the different candidate genes in terms of similarity to known disease genes , prodige exploits the relative similarity of both known and candidate disease genes to jointly score and rank all candidates .this is done by formulating the disease gene prioritization problem as an instance of the problem known as _ learning from positive and unlabeled examples _ ( pu learning ) in the machine learning community , which is known to be a powerful paradigm when a set of candidates has to be ranked in terms of similarity to a set of positive data .second , in order to rank candidate genes for a disease of interest , prodige borrows information not only from genes known to be associated to the disease , but also from genes known to play a role in diseases or phenotypes related to the disease of interest .this again differs from which treat diseases independently from each other .it allows us , in particular , to rank genes even for _ orphan diseases _ , with no known gene , by relying only on known disease genes of related diseases . in the machine learning jargon, we implement a _ multi - task _ strategy to share information between different diseases , and weight the sharing of information by the phenotypic similarity of diseases .third , prodige performs heterogeneous data integration to combine a variety of information about the genes in the scoring function , including sequence features , expression levels in different conditions , ppi interactions or presence in the scientific literature .we use the powerful framework of _ kernel methods _ for data integration , akin to the work of .this differs from approaches like that of , which are limited to scoring over a gene or protein network .+ we test prodige on real data extracted from the omim database .it is able to rank the correct disease gene in the top 5% of the candidate genes for 69% of the diseases with at least one other known causal gene , and for 67% of the diseases when no other disease genes is known , outperforming state - of - the - art methods like endeavour and prince .we first assess the ability of prodige to retrieve new disease genes for diseases with already a few known disease genes , without sharing information across different diseases . 
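before turning to that assessment , a minimal sketch of the scoring step may help fix ideas . one standard recipe for learning from positive and unlabeled examples is a biased two - class svm : the known disease genes are positives , every remaining gene is treated as a down - weighted negative , and candidates are ranked by the decision value . this is only an approximation of the formulation actually used in prodige ; the function name , weights and toy kernel below are our own .

```python
import numpy as np
from sklearn.svm import SVC

def pu_rank_genes(K, positive_idx, pos_weight=10.0, C=1.0):
    """Rank candidate genes for one disease by PU learning with a biased SVM.

    K            : (n_genes, n_genes) precomputed gene kernel matrix.
    positive_idx : indices of the genes already known to cause the disease.
    Every other gene is used as a down-weighted negative; candidates are
    returned sorted by decreasing decision value.
    """
    n = K.shape[0]
    y = np.zeros(n, dtype=int)
    y[positive_idx] = 1
    clf = SVC(kernel="precomputed", C=C, class_weight={1: pos_weight, 0: 1.0})
    clf.fit(K, y)
    scores = clf.decision_function(K)
    candidates = np.setdiff1d(np.arange(n), positive_idx)
    return candidates[np.argsort(-scores[candidates])]

# toy usage with a random positive semi-definite kernel
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
print(pu_rank_genes(X @ X.T, positive_idx=[0, 1, 2])[:5])
```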
as a gold standard we extracted all known disease - gene associations from the omim database , and we borrowed from nine sources of information about the genes , including expression profiles in various experiments , functional annotations , known protein - protein interactions ( ppi ) , transcriptional motifs , protein domain activity and literature data .each source of information was encoded in a kernel functions , which assesses pairwise similarities between each pair of genes according to each source of information .we compare two ways to perform data integration : first by simply averaging the nine kernel functions , and second by letting prodige optimize itself the relative contribution of each source of information when the model is estimated , through a multiple kernel learning ( mkl ) approach .we compare both variants with the best model of , namely , the mkl1class model which differs from prodige in this case only in the machine learning paradigm implemented : while prodige learns a model from positive and unlabeled examples , mkl1class learns it only from positive examples .we tested these three algorithm in a leave - one - out cross - validation ( loocv ) setting .in short , for each disease , each known disease gene was removed in turn , a model was trained on using the remaining disease genes as positive examples , and all 19540 genes in our database were ranked ; we then recorded the rank of the positive gene that was removed in this list .we focused on the 285 diseases in our dataset having at least 2 known disease genes , because all three methods require at least one known disease gene for training , and for the purpose of loocv we need in addition one known disease gene removed from the training set . + figure 1 presents the cumulative distribution function ( cdf ) of the rank of the left - out positive gene , i.e. , the number of genes that were ranked in the top genes of the list as a function of , for each method .note that the rank is always between ( best prediction ) and , where is the number of genes known to be associated to the disease of interest .the right panel zooms on the beginning of this curve which corresponds to the distribution of small values of the rank .we see clearly that both prodige variants outperform mkl1class in the sense that they consistently recover the hidden positive gene at a better rank in the list .a wilcoxon signed rank test confirms these visual conclusions at level with p - values and , respectively , for the average and mkl variants of prodige .this illustrates the benefits of formulating the gene ranking problem as a pu learning problem , and not as a 1-class learning one , since apart from this formulation both mkl1class and prodige1 use very similar learning engines , based on svm and mkl .+ both prodige1 variants recover roughly one third of correct gene - disease associations in the top genes among almost , i.e. , in the top . however , we found no significant difference between the mean and mkl variants of prodige in this setting ( p - value=0.619 ) .this means that in this case , assigning equal weights to all data sources works as well as trying to optimize these weights by mkl . 
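for completeness , the cross - validation loop just described can be written in a few lines : for each disease , each known gene is held out in turn , the model is retrained on the remaining positives , and the rank of the held - out gene among all candidates is recorded ; the empirical cdf of these ranks is the recall curve shown in the figures . the ranking method is passed in as a callback , so the same loop serves any of the methods compared here ( the toy ranker below is a placeholder ) .

```python
import numpy as np

def loocv_ranks(associations, rank_genes):
    """Leave-one-out ranks of held-out disease genes.

    associations : dict mapping a disease to the list of its known gene indices.
    rank_genes   : callable(disease, training_gene_indices) returning candidate
                   gene indices ordered from most to least promising.
    """
    ranks = []
    for disease, genes in associations.items():
        if len(genes) < 2:                   # keep at least one positive for training
            continue
        for held_out in genes:
            train = [g for g in genes if g != held_out]
            ranked = np.asarray(rank_genes(disease, train))
            ranks.append(int(np.where(ranked == held_out)[0][0]) + 1)
    return np.array(ranks)

def recall_at(ranks, k):
    """Fraction of held-out disease genes ranked within the top k candidates."""
    return float(np.mean(ranks <= k))

rng = np.random.default_rng(0)
toy_assoc = {"d1": [3, 7], "d2": [1, 4, 9]}
toy_ranker = lambda d, train: rng.permutation(np.setdiff1d(np.arange(20), train))
ranks = loocv_ranks(toy_assoc, toy_ranker)
print(ranks, recall_at(ranks, 5))
```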
supported by this result and by the fact that mkl is much more time - consuming than a svm with the mean kernel , we decided to restrict our experiments to the mean kernel in the following experiments .+ in a second run of experiments , we assess the performance of prodige when it is allowed to share informations across diseases .we tested three variants of prodige , as explained in material and methods : prodige2 , which uniformly shares information across all diseases without using particular informations about the diseases , prodige3 , which weights the sharing of informations across diseases by a phenotypic similarity between the diseases , and prodige4 , a variant of prodige3 which additionally controls the sharing of information between diseases that would have very similar phenotypic description but which remain different diseases .all variants are based on the same methodological backbone , namely , the use of a multitask learning strategy , and only differ in a function used to control the sharing of information .we limit ourselves to the 1873 diseases in the disease - gene association dataset which were also in the phenotypic similarity matrix that we used .this corresponds to a total of 2544 associations between these diseases and 1698 genes .we compare these variants to prince , a method recently proposed to rank genes by sharing information across diseases through label propagation on a ppi network .+ figure 2 shows the cdf curves for the four methods . comparing areas under the global curve , i.e. , the average rank of the left - out disease gene in loocv, the four methods can be ranked in the following order : prodige4 ( 1682 ) prodige3 ( 1817 ) prodige2 ( 2246 ) prince ( 3065 ) .the fact that prodige3 and prodige4 outperform prodige2 confirms the benefits of exploiting prior knowledge we have about the disease phenotypes to weight the sharing of information across diseases , instead of following a generic strategy for multitask learning .the fact that prodige4 outperforms prodige3 is not surprising and illustrates the fact that the diseases are not fully characterized by the phenotypic description we use .zooming to the beginning of the curves ( right picture ) , we see that the relative order between the methods is conserved except for prince which outperforms prodige2 in that case .in fact , prodige2 has a very low performance compared to all other methods for low ranks , confirming that the generic multitask strategy should not be pursued in practice if phenotypic information is available . + the fact that prodige3 and prodige4 outperform prince for all rank values confirm the competitiveness of our approach . on the other hand ,the comparison with prince is not completely fair since prodige exploits a variety of data sources about the genes , while prince only uses a ppi network . in order to clarifywhether the improvement of prodige over prince is due to a larger amount of data used , to the learning algorithm , or to both , we ran prodige3 with only the kernel derived from the ppi network which we call prodige - ppi in figure 2 . 
in that case , both prodige and prince use exactly the same information to rank genes .we see on the left picture that this variant is overall comparable to prince ( no significant difference between prince and prodige - ppi with a wilcoxon paired signed rank test ) , confirming that the main benefit of prodige over prince comes from data integration .interestingly though , at the beginning of the curve ( right picture ) , prodige - ppi is far above prince , and even behaves comparably to the best method prodige4 .since prodige - ppi and prince use exactly the same input data , this means that the better performance of prodige - ppi for low ranks comes from the learning method based on pu learning with svm , as opposed to label propagation over the ppi network .+ to better visualize the differences between the different variants of prodige , the scatter plots in figure 3 compare directly the ranks obtained by the different variants for each of the 2544 left - out associations . note that smaller ranks are better than large ones , since the goal is to be ranked as close as possible to the top of the list .on the left panel , we compare prodige3 to prodige4 .we see that many points are below the diagonal , meaning that adding a dirac kernel to the phenotype kernel ( prodige4 ) generally improves the performance as compared to using a phenotype kernel ( prodige3 ) alone . on the right panel ,the prodige2 is compared to the prodige3 .we see that the points are more concentrated above the diagonal , but with large variability on both sides of the diagonal .this indicates a clear advantage in favor of the phenotype kernel compared to the generic multitask kernel , although the differences are quite fluctuant .+ in order to check whether sharing information across diseases is beneficial , we restrict ourselves to diseases with phenotypic informations and with at least two known associated genes in the omim database .this way , we are able to share information across diseases and , at the same time , to run methods that do not share information because we ensure that there is at least one training gene in the loocv procedure .this leaves us with 265 diseases , corresponding to 936 associations .+ figure 4 shows the cdf curves of the rank for the various methods on these data , including the two methods mkl1class and prodige1 ( with the mean kernel for data integration ) , which do not share information across diseases , and prodige 2 , 3 , 4 and prince , which do share information .interestingly , we observe different retrieval behaviors on these curves , depending on the part of the curve we are interested in . on the one hand , if we look at the curves globally , prodige 4 and 3 perform very well , having high area under the cdf curve , i.e. , a low average rank ( respectively 1529 and 1770 ) . prince and mkl1class have the worse average ranks ( respectively 3220 and 3351 ) . a systematic test of differences between the methods , using a wilcoxon paired signed rank test over the ranks for each pair of methods , is summarized in figure 5 . 
in this picture, an arrow indicates that a method is significantly better than another at level .this confirms that prodige 4 is the best method , significantly better than all other ones except prodige 1 .three variants of prodige are significantly better than prince and mkl1class .+ on the other hand , in the context of gene prioritization , it is useful to focus on the beginning of the curve and not on the full cdf curves .indeed , only the top of the list is likely to deserve any serious biological investigation .therefore we present a zoom of the cdf curve in panel ( b ) of figure 4 .we see there that the local methods prodige1 and mkl1class present a sharper increase at the beginning of the curve than the global methods , meaning that they yield more often truly disease genes near the very top of the list than other methods . additionally , we observe that prodige1 is in fact the best method when we focus on the proportion of disease genes correctly identified in up to the top 350 among 19540 , i.e. , in up to the top 1.8% of the list .these results are further confirmed by the quantitative values in table [ tab : precision ] , which show the recall ( i.e. , cdf value ) as a function of the rank .prodige 1 , which does not share information across diseases , is the best when we only focus at the very top of the list ( up to the top 1.8% ) , while prodige 4 , which shares information , is then the best method when we go deeper in the list .+ .*recall of different methods at different rank levels , for diseases with at least one known disease gene . *the recall at rank level is the percentage of disease genes that were correctly ranked in the top candidate genes in the loocv procedure , where the number of candidate genes is near .top 1 and top 10 ( first two columns ) correspond respectively to the recall at the first and first ten genes among 19540 , while top x% ( last three columns ) refer to the recall at the first x% genes among 19540 . [ cols="<,^,^,^,^,^",options="header " , ] when heterogeneous sources of information for genes are available , the two strategies proposed in section [ sec : prodigemkl ] can be easily combined with each of the four prodige variants , since each particular gene kernel translates into a particular disease - gene kernel through . in the experiments below, we only implement the mkl approach for prodige1 to compare it to the mean kernel strategy .for other variants of prodige , we restrict ourselves to the simplest strategy where the different information sources are fused through kernel averaging .+ we assess the performance of various gene prioritization methods by leave - one - out cross - validation ( loocv ) on the dataset of known disease - gene association extracted from the omim database .given the list of all disease - gene associations in omim , we remove each pair in turn from the training set , train the scoring function from the remaining positive pairs , rank all genes not associated to in the training set by decreasing score , and check how well is ranked in the list .note that in this setting , we implicitly assume that the candidate genes for a disease are all genes not known to be associated to the disease , i.e. , . in the loocvsetting , each time a pair is removed from the training set , the ranking is then performed on .we monitor the success of the prioritization by the rank of among candidate genes in . 
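before continuing with how these ranks are summarized , the kernel construction mentioned above can be made explicit . a common multitask construction takes the kernel between two ( gene , disease ) pairs to be the product of a gene kernel and a disease kernel ; an all - ones disease kernel corresponds to uniform sharing ( prodige2 - like ) , a phenotypic similarity matrix to prodige3 , and adding an identity ( dirac ) term to prodige4 . the sketch below assumes this product form and also applies the standard unit - diagonal ( cosine ) normalization to the gene kernel ; the exact weighting used in prodige may differ .

```python
import numpy as np

def normalize_unit_diagonal(K):
    """Cosine-normalize a kernel so that K[i, i] = 1 for every gene."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def disease_kernel(n_diseases, phenotype_sim=None, dirac_weight=0.0):
    """Disease kernel encoding how information is shared across diseases:
    all-ones (uniform sharing), or a phenotypic similarity matrix, optionally
    plus an identity term that keeps each disease partly specific."""
    if phenotype_sim is None:
        K = np.ones((n_diseases, n_diseases))
    else:
        K = np.asarray(phenotype_sim, dtype=float)
    return K + dirac_weight * np.eye(n_diseases)

def pair_kernel(K_gene, K_disease, gene_idx, disease_idx):
    """Kernel between (gene, disease) pairs: product of the two base kernels."""
    return (K_gene[np.ix_(gene_idx, gene_idx)]
            * K_disease[np.ix_(disease_idx, disease_idx)])

# three training pairs: genes 0, 1, 2 associated with diseases 0, 0, 1
Kg = normalize_unit_diagonal(np.array([[2., 1., 0.], [1., 2., 1.], [0., 1., 2.]]))
Kd = disease_kernel(2, phenotype_sim=np.array([[1., .3], [.3, 1.]]), dirac_weight=1.)
print(pair_kernel(Kg, Kd, gene_idx=[0, 1, 2], disease_idx=[0, 0, 1]))
```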
since we are doing a loocv procedure , the rank of the left - out sample is directly related to the classical area under the receiver operating characteristics curve ( auc ) , via the formula .therefore , an easy way to visualize the performance of a gene prioritization method is to plot the empirical cumulative distribution function ( cdf ) of the ranks obtained for all associations in the training set in the loocv procedure . for a given value of the rank ,the cdf at level is defined as the proportion of associations for which gene ranked among the top in the prioritization list for disease , which can also be called the _ recall _ as a function of .+ we compare prodige to two state - of - the - art gene prioritization methods .first we consider the 1-svm l2-mkl from , which extends and outperforms the endeavour method , and which we denote mkl1class below .this method performs one - class svm while optimizing the linear combination of gene kernels with a mkl approach in the same time .we downloaded a matlab implementation of all functions from the supplementary information website of .we used as input the same 9 kernels as for prodige , and we set the regularization parameter of the algorithm , as done by .second , we compare prodige to the prince method introduced by , which is designed to share information across the diseases .prior information consists in gene labels that are a function of their relatedness to the query disease .they are higher for genes known to be directly related to the query disease , high but at a lesser extent for genes related to a disease which is very similar to the query , smaller for genes related to a disease that bears little similarity to the query and zero for genes not related to any disease .prince propagates these labels on a ppi network and produces gene scores that vary smoothly over the network .we used the same ppi network for prince as the one used by prodige .the first type of data required by prodige is the description of the set of human genes .we borrowed the dataset of , based on ensembl v39 and which contains multiple data sources .we removed genes whose i d had a `` retired '' status in ensembl v59 , leaving us with 19540 genes .these genes are described by microarray expression profiles from and ( ma1 , ma2 ) , expressed sequence tag data ( est ) , functional annotation ( go ) , pathway membership ( kegg ) , protein - protein interactions from the human protein reference database ( ppi ) , transcriptional motifs ( motif ) , protein domain activity from interpro ( ipr ) and literature data ( text ) . for ppi data which consists in a graph of interactions , a diffusion kernel with parameter 1was computed to obtain a kernel for genes .all other data sources provide a vectorial representation of a gene .the inner product between these vectors defines the kernel we create from each data source .all kernels are normalized to unit diagonal to ensure that kernel values are comparable between different data sources , using the formula : second , to define the phenotype kernel between diseases we borrow the phenotypic similarity measure of . the measure they proposeis obtained by automatic text mining .a disease is described in the omim database by a text record . in particular , its description contains terms from the mesh ( medical subject headings ) vocabulary . 
the similarity between two diseases by comparing the mesh terms content of their respective record in omim .we downloaded the similarity matrix for 5080 diseases from the mimminer webpage .finally , we collected disease - gene associations from the omim database , downloaded on august 8th , 2010 .we obtained 3222 disease - gene associations involving 2606 disorders and 2182 genes .we are grateful to lon - charles tranchevent , shi yu and yves moreau for providing the gene datasets , and to roded sharan and oded magger for making their matlab implementation of prince available to us .this work was supported by anr grants anr-07-blan-0311 - 03 and anr-09-blan-0051 - 04 .linghu b , snitkin e , hu z , xia y , delisi c ( 2009 ) genome - wide prioritization of disease genes and identification of disease - disease associations from an integrated human functional linkage network .genome biol 10 : r91 .hwang t , kuang r ( 2010 ) a heterogeneous label propagation algorithm for disease gene discovery . in : proceedings of the siam international conference on data mining , sdm 2010 , april 29 - may 1 , 2010 , columbus , ohio , usa . pp. 583594 .liu b , lee ws , yu ps , li x ( 2002 ) partially supervised classification of text documents . in : icml 02 : proceedings of the nineteenth international conference on machine learning .san francisco , ca , usa : morgan kaufmann publishers inc . , pp .387394 .calvo b , lpez - bigas n , furney s , larraaga p , lozano j ( 2007 ) a partially supervised classification approach to dominant and recessive human disease gene prediction .comput methods programs biomed 85 : 229237 .kondor ri , lafferty j ( 2002 ) diffusion kernels on graphs and other discrete input . in : proceedings of the nineteenth international conference on machine learning .san francisco , ca , usa : morgan kaufmann publishers inc . , pp . 315322 .
|
elucidating the genetic basis of human diseases is a central goal of genetics and molecular biology . while traditional linkage analysis and modern high - throughput techniques often provide long lists of tens or hundreds of disease gene candidates , the identification of disease genes among the candidates remains time - consuming and expensive . efficient computational methods are therefore needed to prioritize genes within the list of candidates , by exploiting the wealth of information available about the genes in various databases . here we propose prodige , a novel algorithm for prioritization of disease genes . prodige implements a novel machine learning strategy based on learning from positive and unlabeled examples , which allows it to integrate various sources of information about the genes , to share information about known disease genes across diseases , and to perform genome - wide searches for new disease genes . experiments on real data show that prodige outperforms state - of - the - art methods for the prioritization of genes in human diseases .
|
the fate of the progeny of two seemingly identical cells can be markedly distinct .well studied examples include the immune system and hematopoietic system , for which the extent of clonal expansion and differentiation has been shown to vary greatly between cells of the same phenotype . fate and expression heterogeneity at the single - cell levelare also apparent in other systems including the brain and cancers . whether this heterogeneity is due to the stochastic nature of cellular decision making ,reflects limitations in phenotyping , is caused by external events , or a mixture of effects , is a subject of active study . as addressing this pivotal question through population - level analysis is not possible , tools have been developed that facilitate monitoring single cells and their offspring across generations .long - term fluorescence microscopy represents the most direct approach to assess fate heterogeneity at the single - cell level .studies employing that technique are numerous , and have revealed many significant features .filming and tracking of cell families _ in vitro _ remains technically challenging , is labor intensive , and only partially automatable . despite significant advances in the field , continuous tracking _ in vivo _ is confined to certain tissues , and time windows of up to twelve hours for slowly migrating cells .a radically different approach to long - term clonal monitoring is to mark single cells with unique dna tags via retroviral transduction , a technique known as cellular barcoding .as tags are heritable , clonally related cells can be identified via dna sequencing . by tagging multi - potent cells of the hematopoietic system and adoptively transferring them into irradiated mice , the contribution of single stem cells to hematopoiesishas been quantified . amongst other discoveries ,this has revealed heterogeneity in the collection of distinct cell types produced from apparently equi - potent progenitors .current barcoding techniques are unsuitable for tagging cells _ in vivo _ , and typically require _ ex vivo _ barcoding followed by adoptive cell transfer .this restricts its scope to cell types such as naive lymphocytes and cancer cells , as well as hematopoietic stem and progenitors which require perturbation of the new host , usually irradiation , to enable them to engraft .ideally , a cellular barcoding system would inducibly mark cells in their native environment , would be non - toxic , permanent and heritable , barcodes would be easy to read with a high - throughput technique , and the system would enable labeling large numbers of cells with unique barcodes .two recently published studies address some of these points .sun et al . employed a dox inducible form of the sleeping beauty transposase to genetically tag stem cells _ in situ _ , and followed clonal dynamics during native hematopoiesis in mice .there tags are the random insertion site of an artificial transposon , which upon withdrawal of dox is relatively stable .a second _ in situ _ cellular barcoding system based on site - specific dna recombination with the rci invertase has also been implemented .inspired by the brainbow mouse , this system induces a random barcode by stochastically shuffling a synthetic cassette pre - integrated into the genome of a cell .the authors predicted high code diversity from relatively small constructs ( approx . 
kb ) and demonstrated feasibility of random barcode generation in escherichia coli .each of those approaches elegantly overcome shortcomings of previous systems by generating largely unique tags without significant perturbation to the system of interest , but some difficulties remain . for barcode readout, the sleeping beauty system requires whole - genome amplification technology and three - arm - ligation - mediated pcr to efficiently amplify unknown insertion sites .furthermore , the random location of the transposon may impact behavior of some barcoded clones and lead to biased data. moreover , some background transposon mobilization was detected , subverting the stability of the barcodes .the rci invertase based system remains to be implemented outside bacteria . as with the sleeping beauty transposase ,the method requires tight temporal control over rci expression to make codes permanent .here we consider the cre lox system as a driver to induce _ in situ _ large numbers of distinct , permanent , randomly determined barcodes from a series of tightly spaced lox sites .in contrast to the brainbow construct , which relies on overlapping pairs of incompatible lox sites recombining randomly to one of several stable dna sequence configurations , our design exploits constraints on the distance between lox sites that arise during dna loop formation , a prerequisite for site - specific recombination .this known feature has not previously been exploited , but is a crucial design element for obtaining high barcode diversity . employing repeated usage of the same lox site , code diversityis solely restricted by cassette size and not , as in the brainbow construct , by the relatively small set of non - interacting lox sites . for a design without distance constraints ,the maximal diversity of stable barcodes creatable with the cre lox system is of order , where is the number of lox sites , but with distance constraints we establish that optimal barcode diversities of order are possible . boosting this scaling with the four incompatible lox sites that have been reported in the literature enables distinct codes of about 600 bp each from a genetic construct as small as 2.5 kb . in combination with the creer system ,this is sufficient to inducibly barcode label all naive cd8 t cells in a mouse or all nucleated cells in the bone marrow .desirable features are inherently part of the lox barcode cassette design , including : short and stable barcodes ; a single barcode per cell ; and robust read - out . before introducing the lox barcode cassette ,we revisit cre lox biology .cre is a bacteriophage pl recombinase that catalyzes site - specific recombination between lox sites .a lox site is a 34 bp sequence composed of two 13 bp palindromic flanking regions and an asymmetric 8 bp core region ( fig .[ fig : fig1 ] a ) . for recombination to occur , four cre proteins bind to the four palindromic regions of two lox sites and form a synaptic complex .a first pair of strand exchanges leads to a holliday junction intermediate .isomerization of the intermediate then allows a second pair of strand exchanges , and formation of the final recombinant product .the dna cleavage site is situated in the asymmetric core region .if the lox sites are on the same chromosome , their interaction requires formation of a dna loop . if they have the same orientation ( direct repeats ) , recombination results in excision of the intervening sequence . 
if lox sites are in the opposite orientation to each other ( inverted repeats ) , the sequence between the sites is inverted , becoming its reverse complement ( fig .[ fig : fig1 ] b ) . due to compatibility with eukaryotes ,the cre lox system has become an essential tool in genetic engineering and a large array of transgenic mouse models with inducible cell - type specific expression of cre have been created . in _ in vitro _trials with cre mediated lox reactions , a sharp decrease in recombination efficiency has been observed when the sequence separating two lox sites is less than 94 bp .recombination is still detectable at low levels at 82 bp , but not at 80 bp where dna stiffness appears to prevent dna loop formation , and as a consequence lox site interaction . for the distinct , but similar , flp / fr systemthis minimal distance was established to be smaller _ in vivo _ , with interactions possible at 74bp .the existence of a minimal distance is one of the key features that we exploit to make random barcodes stable , but in our proposed design it will only prove necessary for it to be greater than 44 bp . in complete generality , a lox barcode cassette is a series of lox sites interlaced with distinguishable dna code elements of size bp each .on cre expression , code elements change orientation and position , or are excised . through cre mediated excision , the number of elements eventually decreases until reaching a stable number ( fig .[ fig : fig1 ] c ) .sequences that have attained a stable number of code elements form size - stable barcodes .a cassette s code diversity is the number of size - stable barcodes that can be generated from the cassette via site - specific recombination .our main result is a robust lox cassette design that provably maximizes code diversity .the design is robust to both sequencing errors and to the minimal interaction distance between lox sites .the analysis that leads us to the design is provided in the optimal design section .the identification of code element sequences that avoid misclassification due to sequencing read errors then follows .finally , probabilistic aspects of code generation from an optimal barcode cassette are explored via monte carlo simulation .lox cassettes with code elements of size 4 bp , higher order lox interactions , and the impact of transient cre activation , are considered in the discussion .the optimal design will prove to have the orientation of both the outmost , and any two consecutive , lox sites inverted ( fig .[ fig : fig1 ] c ) .code elements between lox sites are of size longer than four bp , but shorter than 24 bp .the lower limit ensures that elements can be chosen sufficiently distinctly to correct two sequencing errors per element . due to the minimal lox interaction distance, the upper limit ensures that barcodes with three code elements are size - stable . the barcode diversity for this cassette design with code elements under constitutive cre expression will , as established in the optimal design section , transpire to be which is maximal for code elements that are larger than four base pairs . a good compromise between cassette , robustness to sequencing errors and barcode diversity is given by an alternating lox cassette with 13 elements of length 7 bp each as shown in fig .[ fig : fig1 ] c. 
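before detailing this cassette further , the recombination rules above are easy to prototype . in the sketch below a cassette is a list alternating lox sites ( each with an orientation ) and code elements ( a label plus an orientation ) . at every step a productive pair of lox sites is chosen at random and recombined : excision if the pair is direct , inversion of the intervening stretch if it is inverted . a pair is taken to be productive when at least three elements separate the sites , since with 7 bp elements and 34 bp lox sites two elements span only 48 bp , below the 82 bp minimum . the representation , function names and stopping rule are our own illustrative choices , and codes that differ only by a final inversion through the flanking sites are counted separately here .

```python
import random
from collections import Counter

def build_cassette(n_elements=13):
    """Alternating cassette: lox orientations alternate, so the flanking sites
    end up inverted when the number of elements is odd."""
    seq = []
    for i in range(n_elements):
        seq.append(("lox", 1 if i % 2 == 0 else -1))
        seq.append(("elt", i + 1, 1))
    seq.append(("lox", 1 if n_elements % 2 == 0 else -1))
    return seq

def recombine_once(seq, min_gap, rng):
    """Apply one Cre event between a randomly chosen productive pair of lox sites."""
    pos = [k for k, x in enumerate(seq) if x[0] == "lox"]
    pairs = [(p, q) for a, p in enumerate(pos) for q in pos[a + 1:]
             if sum(1 for x in seq[p + 1:q] if x[0] == "elt") >= min_gap]
    p, q = rng.choice(pairs)
    if seq[p][1] == seq[q][1]:                    # direct repeats: excision
        return seq[:p + 1] + seq[q + 1:]
    flip = lambda x: ("lox", -x[1]) if x[0] == "lox" else ("elt", x[1], -x[2])
    return seq[:p + 1] + [flip(x) for x in reversed(seq[p + 1:q])] + seq[q:]  # inversion

def final_code(n_elements=13, min_gap=3, rng=None):
    """Recombine until at most min_gap elements remain (size-stable here)."""
    rng = rng or random.Random()
    seq = build_cassette(n_elements)
    while sum(1 for x in seq if x[0] == "elt") > min_gap:
        seq = recombine_once(seq, min_gap, rng)
    return tuple(x[1:] for x in seq if x[0] == "elt")   # (label, orientation) per element

rng = random.Random(1)
codes = Counter(final_code(rng=rng) for _ in range(2000))
print(len(codes), codes.most_common(3))                 # heterogeneous code frequencies
```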
the cassette is initially 567 bp long and generates a code diversity of barcodes .after excisions and inversions , size - stable barcodes are composed of either a single element or three elements , with lengths 75 bp and 157 bp respectively , including remaining non - interacting lox sites . concatenating four such cassettes with poorly - interacting lox variants ( e.g. loxp , lox2272 , lox5171 and m2 , fig .[ fig : fig1 ] d ) yields a bp construct with a size - stable code diversity of . to implement cre lox barcoding in the mouse, one could cross mice generated from embryonic stem cells that have been transduced with the concatenated lox barcoding cassettes described above onto a tamoxifen inducible cell - type specific creer expressing background .an experiment is initiated by administrating tamoxifen to the animal , which activates cre and induces generation of a barcode ( bp ) in each cell where cre becomes active .some time after activation , cells of interest are harvested and sorted for specific phenotypes , and sequenced using a next generation sequencing platform that produces read - lengths bp .cells originating from the same progenitor carry the same barcode and this information can then used for scientific inference . to identify the frequent barcodes that are to be discarded in the analysis ( see the barcode distribution is heterogeneous section ) , in a control experiment large numbers of cells would be harvested shortly after tamoxifen administration and sequenced .a simple upper bound on the barcode diversity of elements from a cassette initially containing elements is the number of possible outcomes when choosing from elements in arbitrary order and orientation : although loose , it will become clear that it captures the dominant growth , , indicating the importance of in generating barcode diversity and motivating a closer look at how cassette designs influence it . for what follows, we introduce some terminology : a cassette is alternating if the orientation of any two consecutive lox sites is inverted ( fig .[ fig : fig1 ] c ) ; outermost lox sites are termed flanking lox sites ; and flanking sites are direct or inverted if they have the same or opposite orientation , respectively .cre recombination requires a minimal distance between the interacting lox sites . in what followswe assume that the minimal distance for lox interaction is 82 bp , but our results will be robust for any minimal interaction distance greater than 44 bp . to understand how a minimal lox - lox interaction distance and cassette design determine size - stable barcodes and code diversity , we start with the simplest case , a barcode with a single code element ( fig . [fig : fig2 ] a ) .if the code element is less than 82 bp , the barcode is size - stable irrespective of the orientation of its flanking sites .if the element is larger than 82 bp , the code is only size - stable if the flanking sites are inverted as excision will remove the element . for a barcode with two elements , the sequence between the flanking sites contains an additional element and a lox site ( 34 bp ) , giving a sequence of bp .if the flanking sites have the same orientation , the barcode is size - stable if bp , hence if bp .if they are in opposite orientation , excisions can only occur if flanking sites interact with the middle lox site , and bp is sufficient for stability ( fig .[ fig : fig2 ] b ) . 
for given , in generalif there exists a barcode of size with direct flanking sites , a barcode with elements is possible that has inverted flanking sites .thus and the orientation of the flanking sites are critical features that determine the maximum . in fig .[ fig : fig2 ] c , the stability of barcodes with is shown as a function of for a cassette with inverted flanking sites .the stability depends on a critical distance , i.e. , the largest distance between two lox sites in the barcode that is , or can be brought into , the same orientation via recombination . as shown ,barcodes of size three and four become unstable if bp and bp , respectively , while barcodes of size five or greater are always unstable .orientation of a cassette s flanking sites is immutable under recombination. therefore cassettes with direct and inverted flanking sites generate barcodes with direct and inverted flanking sites only .having seen that maximal code diversity grows as , and that having inverted flanking sites relative to direct ones increases the maximum size of barcodes by one , it follows that the diversity for cassettes with inverted flanking sites is of the order .inverted flanking sites are thus superior in terms of code diversity and are an essential design decision .optimality regarding the size of the elements , , is more intricate . for ,the maximum size of barcodes is four elements , and according to the formula above , their diversity grows as .the stability of barcodes with four elements is , however , sensitive to the minimal distance estimate ( the gray interval in fig .[ fig : fig2 ] c ) . in addition, the short length of code elements limits error correction , a point revisited later .thus we focus on cassettes in the regime bp , which generate error - robust barcodes of up to size three and a code diversity that is insensitive to the reported minimal lox interaction distance . for the orientation of the remaining lox sites we prove , via a two - step strategy , that the alternating design produces maximal code diversity .first we derive a refined upper bound for the diversity that takes into account the structure of the lox cassette , but ignores constraints imposed by the recombination process .we then show that alternating lox cassettes with inverted flanking sites and elements are unconstrained in terms of barcode generation via sequential recombination events , thus achieving this upper bound. during cre induced recombination , cre proteins cleave the core region of the interacting lox sites asymmetrically .the sequences between subsequent cleavage sites are not affected by cre and represent the fundamental building blocks of the lox barcode cassette .each block contains a code element and half a lox site on each side . depending on the orientation of the lox sites ,there are four possible types of blocks ( fig .[ fig : fig2 ] d ) .three colours have been used to code these : red , green and blue . 
by definition ,the reverse complement of a block is of the same colour class .in contrast to blue blocks , red and green blocks have their lox cores cleaved in a way such that their flanking lox sites are unchanged after inversion , while the intervening sequence is reverse - complemented .blocks are similar to the concept of units in , introduced to derive expressions for the total number of sequences , stable or unstable , generated from a lox cassette where all sites can interact .their analysis implies and a code diversity of order .quite distinctly , here we focus on enumerating size - stable sequences that arise in the regime bp with code diversities of order .stable codes are necessarily made of blocks from the initial cassette , and as shown in fig .[ fig : fig2 ] e , their composition in terms of block colors is prescribed .letting , , and be the number of red , green , and blue blocks in the initial cassette with elements , an upper bound on the number of possible barcodes of size with red , green and blue blocks is the number of possible outcomes when choosing , and from , and elements in arbitrary order : where and .the additional factor arises as there are two valid orientations of every code element of a red and green block after recombination .conditioned on , , and , to derive an upper bound for a cassette s diversity , we add the numbers for the four possible stable barcode configurations of , , and ( fig . [fig : fig2 ] e ) , taking into account that certain configurations appear more than once ( e.g. the configurations with one red and two blue blocks appears three times ) . using the expression above for each of the four configurations , for bp bp , and cassettes with inverted flanking sites pointing at each other (the opposite case is similar ) this yields , by construction , , and since , substituting the respective terms leads to an expression that is a function of and alone . for given , this reduces the task of finding the optimal cassette design to an explicitly solvable one - dimensional optimization problem : for , the global maximum is achieved at the boundary .this implies , and a global upper diversity bound of , of order .it is easily verified that is only possible if the cassette design is alternating and n is odd , which implies the flanking sites are inverted . for an alternating cassette design , achieving the code diversityupper bound requires complete freedom in code generation via recombination events . by construction , we show that this is the case if .consider an alternating cassette with five elements and bp , and recombination events that do not alter the size of the cassette ( i.e. , inversions ) .first note that red blocks in position three and five can move into the first position via a single recombination event .furthermore , a red block in position one can be inverted by first moving to position three , then to five , and back again . a straight - forward recipe to createan arbitrary code made of a single red block is then to : i ) move the block into the first position ( if required ) ; ii ) change its orientation ( if required ) ; and finally iii ) excise the remaining blocks . similarly , to generate an arbitrary code composed of a red and a green block from an alternating cassette with six elements, we can perform steps i ) and ii ) .then we apply the same procedure to the green blocks , leaving the first block untouched .this results in the first two blocks of the cassette being the desired code . 
to generate the size - stable code , elements that are not part of the code are excised .finally , for a cassette with seven elements , sequentially following the recipe given above , the first three blocks can be populated such that they match any possible code before excising the remaining blocks .this shows that any possible code of size one to three can be created via lox recombination if the cassette is alternating , , bp , and flanking sites are inverted . under constitutive cre expression , barcodes with three elementscan still undergo inversions via the flanking sites , which reduces their code diversity by a factor of two .the code diversity is therefore that given in eq . .that barcodes generated from a lox cassette are pre - defined in terms of sequence and position in the genome represents an advantage over barcoding systems that rely on insertion site analysis for barcode readout . if codes - reading was error - free , choosing code elements of a particular color ( fig .[ fig : fig2 ] d ) from a set of sequences that differ at least by one bp pair in both orientations would be sufficient .the maximum number of such elements is and for even or odd , respectively , which is large even for small . in order to be perfectly robust to errors via nearest - neighbor match , all pairs of elements of a given color need to differ by a hamming distance of at least bp .the size of the sets of elements that meet this condition quickly decreases with increasing ( see fig . [fig : fig3 ] a for numerical estimates ) . to ensure correction of two sequencing errorsrequires bp . assuming that sequencing errors arise independently and error rates are identical for all bases , the number of read sequencing errors in a code element of size is binomial with the error probability per bp .any element that has or less errors will be classified correctly by nearest - neighbor matching .the probability of more than errors gives an upper bound for the expected proportion of misclassified code elements .[ fig : fig3 ] b shows this for elements of size bp as a function of the minimal distance and the read error rates for next - generation sequencing platforms .different symbols indicate different sequence data . even for low - fidelity platforms like pacific bioscience single molecule real time sequencing ,a minimal distance of five bp results in less than ten misclassified elements per million .in this section we explore stochastic features of the optimal design , specifically the probabilities to generate each of the final codes and the number of recombination events that are needed to create size - stable codes . for the analysis , we make two assumptions : first , all interactions with lox sites that are at least 82 bp apart are equally likely ; second , recombination events occur sequentially and independently. size - stable barcodes of a lox cassette are randomly generated and not all codes are equally likely .although an analytical expression for the probability mass function of final codes is not available , stochastic simulations enable us to study properties of practical importance such as the probability of generating a code more than once . ensuring this probability is low is important in practice because progeny of two cells that independently generate the same code will be confounded as pertaining to the same clone .[ fig : fig3 ] c shows the generation probability for each of the 1022 codes from a cassette with 13 elements . 
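the misclassification bound described above is straightforward to evaluate : with independent , uniform read errors the number of errors in an element of length l is binomial ( l , epsilon ) , and nearest - neighbor matching fails only when more than floor ( ( d - 1 ) / 2 ) errors occur , where d is the minimal pairwise hamming distance . the sketch below uses illustrative per - base error rates rather than the platform - specific values behind the figure .

```python
from scipy.stats import binom

def misread_bound(elt_len, per_base_error, min_hamming):
    """Upper bound on the probability that a code element is misclassified:
    P(more errors than nearest-neighbour matching can correct)."""
    correctable = (min_hamming - 1) // 2
    return binom.sf(correctable, elt_len, per_base_error)

# 7 bp elements with pairwise distance >= 5 bp (corrects two errors);
# the per-base error rates below are illustrative only
for err in (0.001, 0.005, 0.01, 0.05):
    print(f"per-base error {err:.3f} -> misclassification bound {misread_bound(7, err, 5):.2e}")
```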
to produce this plot , barcodes were monte carlo generated _ in silico _ via sequential recombination of the initial cassette .the number of times a specific code appeared was recorded , normalized and sorted . while some codes are relatively frequent , most are rare . in fig .[ fig : fig3 ] d , the average number of recombination events ( inversions : blue , excision : black ) is plotted as a function of barcode probability .the number of inversions and barcode probability are negatively correlated , an indication that rare codes undergo , on average , more inversions .the number of excisions is close to two for all codes .ideally , each cell is tagged with a unique barcode .as with all existing barcoding techniques however , 100% unique barcodes can not be guaranteed .what influences the expected number of unique barcodes is the code diversity , , the probability of code , where , and , the total number of codes that are generated .using analysis of the generalized birthday party problem , the expected proportion of unique codes is where the numerically convenient approximation on the right hand side arises from a taylor expansion around and is appropriate if .relatively large s negatively affect the expected proportion of unique codes . for heterogeneous barcode distributions ,a natural strategy is to discard most frequent codes from the analysis .barcodes that are included in the final analysis are called informative . using the approximation eq . , in fig .[ fig : fig3 ] e we computed the maximum number of cells that can initially be barcoded versus the number of cells that generate an informative code , for one to three sequential cassettes ( indicated by the numbers 1 , 2 , 3 ) , with the requirement that no more than 1% of informative codes are generated more than once .the color represents the percentage of discarded codes relative to the total code diversity .this parameter can be adjusted to meet the needs of a given experiment .e.g. , for three concatenated cassettes with 13 elements each , informative codes that are 99% unique can be generated by inducing barcodes in either cells and including most codes or inducing barcodes in cells and discarding most codes from the analysis .these results show that by discarding frequent codes from the read - out , large numbers of clones can be confidently tracked , indicating this _ in situ _ barcoding is suitable for high - throughput lineage tracing experiments .if cre is expressed for long enough , lox cassettes will eventually become size - stable . the time this will take correlates with the number of recombination events that separate a stable barcode from its initial cassette .below , we estimate this quantity using the theory of absorbing markov chains . in a cassette with elements ,there are lox sites .the number of lox pairs that are flanking elements is .lox pairs that have less than three elements between them do not interact as they are separated by less than the minimal distance .pairs of lox sites that have three or more elements between them are termed productive . for number of productive pairs is , and the number of productive pairs , where recombination leads to excision , i.e. where an even number of elements separates the two sites , is for odd .the probability that a productive pair excises exactly elements is given by the ratio of productive pairs that are separated by elements to the total number of productive pairs , i.e. 
for even , otherwise it is zero . the number of productive pairs where recombination leads to inversion is ( for is odd ) and the probability that interaction of a productive pair leads to an inversion is equations - allow the formulation of size - stable barcodes as a discrete - time absorbing markov chain . the number of elements in the cassette corresponds to its state , and eq . and eq . give the transition probabilities from to , and from to elements respectively . there are transient and absorbing states . absorbing states are cassettes that have either three , two , one , or zero elements . absorbing markov models are well understood , and a wealth of theoretical predictions regarding their properties are available . the fundamental matrix of this markov chain is where is an identity matrix , and is the transition matrix corresponding to the transient states . the expected number of recombination events , starting with a cassette of elements , until reaching a final code is the entry of the vector , where c is a column vector all of whose entries are 1 . in fig . [ fig : fig3 ] f , the average number of recombination events from the initial cassette to final code is shown as a function of the cassette length . although code diversity grows as , the number of recombination events needed for code generation increases linearly in . when we identify the optimal lox barcode cassette , we focus on code elements in the regime . these have maximal size - stable barcodes of three elements that are insensitive to over - estimation of the minimal lox interaction distance . for , size - stable barcodes of four elements are possible and their maximal code diversity grows as . these are stable , however , only if the minimal interaction distance between two lox sites is greater than bp , a distance at which interactions have been shown to still be possible _ in vivo _ in the similar flp / frt system . most interesting is the case bp , which permits correction of one sequencing error with six code elements that are bp apart in both orientations ( see gray bars in fig . [ fig : fig3 ] a ) . the upper diversity bound is derived along the same lines as for bp ( see fig . [ fig : fig4 ] e for possible stable codes ) , which gives . to maximize usage of the 6 code elements , we start with a cassette that has six red , five green and six blue blocks , i.e. . this gives an upper diversity bound of 36996 barcodes . as confirmed by simulations , this upper bound is attained by a cassette with inverted flanking sites in which the first 11 lox sites are alternating , and the remaining sites , except the last , are oriented in the same direction as the first lox site ( fig . [ fig : fig4 ] f ) . under constitutive cre expression , barcodes with four elements can still undergo inversions , and the effective code diversity is 19,716 . careful measurements will be needed to determine whether lox sites at a distance of 80 bp still interact . if they do not , the cassette shown in fig . [ fig : fig4 ] f with bp represents an interesting alternative to the design described in the main text , as with fewer elements it reaches higher code diversity , but at the cost of less robustness to sequencing error and hence lower barcode readout fidelity . single recombination events always involve exactly two lox sites . however , nothing except dna flexibility prevents several pairs of lox sites from interacting simultaneously . the rate at which pairs of lox sites bind depends on the number of lox sites and the kinetic rates of lox - lox complexes .
_ in vitro _ , the latter appear stable and , with the potentially large number of lox sites in the barcode cassettes , make simultaneous interactions a plausible possibility . higher order lox interactions lead to previously unreported , and in certain cases novel , recombination products ( fig . [ fig : fig4 ] c ) . for example , simultaneous interactions of two overlapping pairs of lox sites oriented in the same direction do not result in excision , but in a reordering of the sequences between the sites . similarly , if pairs are inverted , simultaneous recombinations do not invert but excise the sequence between the outermost sites . for the alternating cassette and , multiple concurrent lox interactions do not generate additional codes as the upper code diversity bound is already attained . therefore our results on lox barcode design and code elements remain unchanged in the presence of higher order lox interactions . what changes is the distribution over barcodes , which flattens in the tail if more than one lox pair recombines at a time ( fig . [ fig : fig4 ] d ) . code diversity strongly depends on the number of elements in size - stable barcodes . if cre is expressed constitutively , size - stable barcodes with code elements of size bp have a maximum of three elements . another possibility is to create transient cre activity rather than constitutive expression . a well - tested system that provides temporal control over cre activity is tamoxifen - inducible creer . in the presence of tamoxifen , the fusion protein creer , which is normally located in the cytoplasm , is transported into the nucleus , where it can bind to lox sites and induce recombination . depending on the duration of cre activation and its efficiency , stable sequences with more than three elements are likely to be generated from a lox barcode cassette . although most of these sequences are stable only in the absence of cre , in this section we make no distinction between these and the size - stable barcodes defined earlier . [ fig : fig4 ] a shows barcode probabilities after activation of creer in cells with an optimal lox cassette of size 13 . the number of recombination events induced by transient creer activity is assumed poisson with mean one . about distinct barcodes are generated , and 30% of these appear only once . although promising in terms of code diversity , it should be noted that potential drawbacks of this approach are the length of the barcodes ( leading to more involved code sequencing ) , leakiness of creer into the nucleus in non - induced cells , and the relatively long half - life of tamoxifen . existing cellular barcoding approaches have already led to significant biological discoveries and so new approaches that overcome their shortcomings are inherently desirable . here we have established that using cre lox , it would be feasible to create an _ in situ _ , triggerable barcoding system with sufficient diversity to label a whole mouse , and propose this as a system for experimental implementation . nolan - stevaux o , tedesco d , ragan s , makhanov m , chenchik a , ruefli - brasse a , et al . measurement of cancer cell growth heterogeneity through lentiviral barcoding identifies clonal dominance as a characteristic of in vivo tumor engraftment . 2013;8(6):e67316 . klauke k , broekhuis mjc , weersing e , dethmers - ausema a , ritsema m , gonzález mv , et al . tracing dynamics and clonal heterogeneity of cbx7-induced leukemic stem cells by cellular barcoding . stem cell reports .
2015;4(1):7489 .gomes fl , zhang g , carbonell f , correa ja , harris wa , simons bd , et al .reconstruction of rat retinal progenitor cell lineages in vitro reveals a surprising degree of stochasticity in cell fate decisions . development .2011;138(2):227235 .ringrose l , chabanis s , angrand po , woodroofe c , stewart af .quantitative comparison of dna looping in vitro and in vivo : chromatin increases effective dna flexibility at short distances . the embo journal . 1999;18(23):66306641 .bp palindromic cre binding sites and an bp core ( original loxp sequence shown ) .cleavage sites in the core are indicated by arrows . *b * ) cre mediated site - specific excision and inversion of a sequence with a minimum of bp between two lox sites on the same chromosome . if lox sites are oriented in the same direction , recombination excises the sequence , while if they are oriented in opposite direction the sequence is inverted ( i.e. , the reverse complement ) .* c * ) an alternating lox cassette with 13 elements of size 7 bp . to illustrate how barcodes are generated ,two excision and one inversion event are shown , creating a size - stable barcode with three random elements .pairs of interacting lox sites are indicated by a , b , and c. elements affected by recombination have colored background .the barcode with three elements is size - stable as lox sites oriented in the same direction ( arrows ) are closer than the minimal lox interaction distance , precluding further excision . *d * ) four concatenated alternating lox cassettes of 13 elements each with poorly - interacting lox site variants result in a code diversity greater than .[ fig : fig1 ] ] to elements for a lox barcode cassette with inverted flanking sites .if the critical distance surpasses the minimal distance , stable codes ( green ) become unstable ( gray ) .barcodes of size three and four are unstable if and respectively , while codes of size five are always unstable .the gray interval illustrates potential uncertainty in the estimate of the minimal interaction distance . *d * ) sequences between lox cleavage sites represent the fundamental building blocks of the barcode cassette .there are two with inverted lox repeats ( red , green ) and two direct lox repeats ( blue ) types of blocks . in the example , code elements are of size 7 bp and n denotes an arbitrary base .* e * ) for a cassette with inverted flanking sites pointing at each other and , four block compositions are possible ( ) : two for barcodes of size three ( three and one ) , one for barcodes of size two ( two ) and one for barcodes of size one ( one ).[fig : fig2 ] ] bp . *b * ) upper bound for the expected proportion of misclassified elements as a function of empirical dna sequencing read error rates for common sequencing platforms ( illumina , ion torrent , pacific biosciences ) and different sequence data ( p. falciparum ( ) , e. coli ( ) , r. spha .( ) , h. sapiens ( ) ) .the minimal distance that separates the elements is 1 bp ( solid ) , 3 bp ( dotted ) , and 5 bp ( dashed ) .* c * ) ranked probabilities of the 1022 size - stable barcodes from a cassette with 13 elements generated under constitutive cre expression .a few codes are relatively frequent , but the majority are rare . 
*d * ) scatter - plot showing barcode probabilities against the average number of excisions ( black ) and the number of inversions ( blue ) that generate size - stable barcodes from a 13 element optimal cassette . * e * ) number of cells in which a barcode can be induced versus the number of cells that produce informative codes , for one to three sequential cassettes , without exceeding 1% repeated occurrences in the informative codes . the color represents the percentage of discarded codes relative to the total code diversity , which can be adjusted to experimental conditions post acquisition . * f * ) although code diversity grows as , the expected number of recombination events that are needed to generate a size - stable code increases linearly in . [ fig : fig3 ] ] bp . * b * ) cassette with 17 elements and bp that attains an effective code diversity of 19,716 barcodes if the minimal lox interaction distance is greater than bp . * c * ) if two or more pairs of lox sites recombine simultaneously , unexpected recombination products can occur . * d * ) estimated barcode distribution if two lox pairs can interact simultaneously ( blue ) . the distribution becomes flatter at the lower end , implying that rare codes are more likely than if recombination events only occur sequentially ( black ) . * e * ) mimicking a short cre activation pulse in a population of a million cells carrying a 13 element lox cassette , the number of recombination events is assumed poisson distributed with mean . after the pulse many barcodes have not experienced any excisions . * f * ) code abundance after the pulse . almost distinct barcodes are generated , with 30% being generated once . [ fig : fig4 ] ]
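As a computational companion to the absorbing-Markov-chain analysis above (cf. Fig. 3f), the sketch below shows the fundamental-matrix calculation of the expected number of recombination events to absorption. The transition matrix in the example is a generic placeholder; in the barcode setting its entries would come from the excision and inversion probabilities derived in the text.

```python
import numpy as np

def expected_steps_to_absorption(P, transient_states):
    """Expected number of steps from each transient state to absorption,
    via the fundamental matrix N = (I - Q)^{-1} of an absorbing Markov
    chain with one-step transition matrix P (rows sum to 1)."""
    idx = np.asarray(transient_states)
    Q = P[np.ix_(idx, idx)]                    # transient-to-transient block
    N = np.linalg.inv(np.eye(len(idx)) - Q)    # fundamental matrix
    return N @ np.ones(len(idx))               # row sums: expected steps

# Placeholder 4-state chain: states 0 and 1 transient, states 2 and 3 absorbing.
# For the cassette, states would be cassette sizes and absorbing states the
# size-stable barcodes.
P = np.array([
    [0.2, 0.3, 0.5, 0.0],
    [0.1, 0.3, 0.2, 0.4],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
print(expected_steps_to_absorption(P, transient_states=[0, 1]))
```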
|
cellular barcoding is a significant , recently developed , biotechnology tool that enables the familial identification of progeny of individual cells in vivo . most existing approaches rely on ex vivo viral transduction of cells with barcodes , followed by adoptive transfer into an animal , which works well for some systems , but precludes barcoding cells in their native environment , such as those inside solid tissues . with a view to overcoming this limitation , we propose a new design for a genetic barcoding construct based on the cre lox system that induces randomly created stable barcodes in cells in situ by exploiting inherent sequence distance constraints during site - specific recombination . leveraging this previously unused feature , we identify the cassette with maximal code diversity . this proves to be orders of magnitude higher than what is attainable with previously considered cre lox barcoding approaches and is well suited for its intended applications as it exceeds the number of lymphocytes or hematopoietic progenitor cells in mice . moreover , it can be built using established technology . * keywords : * cell fate tracking ; cellular barcoding ; cre lox system ; dna stochastic programme ; combinatorial explosion .
|
tolerance analysis is the branch of mechanical design dedicated to studying the impact of the manufacturing tolerances on the functional constraints of any mechanical system .minkowski sums of polytopes are useful to model the cumulative stack - up of the pieces and thus , to check whether the final assembly respects such constraints or not , see and .we are aware of the algorithms presented in , , and but we believe that neither the list of all edges nor facets are mandatory to perform the operation .so we only rely on the set of vertices to describe both polytope operands . in a first part we deal with a `` natural way '' to solve this problem based on the use of the convex hulls .then we introduce an algorithm able to take advantage of the properties of the sums of polytopes to speed - up the process .we finally conclude with optimization hints and a geometric interpretation .given two sets and , let be the minkowski sum of and a polytope is defined as the convex hull of a finite set of points , called the -representation , or as the bounded intersection of a finite set of half - spaces , called the -representation .the minkowski - weyl theorem states that both definitions are equivalent .in this paper we deal with -polytopes i.e. defined as the convex hull of a finite number of points .we note , and the list of vertices of the polytopes , and .we call the list of _ minkowski vertices_. we note and .let and be two -polytopes and , their respective lists of vertices .let and where and . we recall that in , we see that the vertex of , as a face , can be written as the minkowski sum of a face from and a face from . for obvious reasons of dimension , is necessarily the sum of a vertex of and a vertex of .moreover , in the same article , fukuda shows that its decomposition is unique .reciprocally let and be vertices from polytopes and such that is unique .let and such as with and because the decomposition of in elements from and is unique .given that and are two vertices , we have and which implies . asa consequence is a vertex of .let and be two -polytopes and , their lists of vertices , let . we know that because a minkowski vertex has to be the sum of vertices from and so .the reciprocal is obvious as as is a convex set . at this stepan algorithm removing all points which are not vertices of from could be applied to compute .the basic idea is the following : if we can build a hyperplane separating from the other points of then we have a minkowski vertex , otherwise is not an extreme point of the polytope .the process trying to split the cloud of points is illustrated in * figure * [ vsum ] .to perform such a task , a popular technique given in solves the following linear programming system . in the case of summing polytopes , testing whether the point is a minkowski vertex or not , means finding from a system of inequalities : if we define the matrix then the corresponding method is detailed in * algorithm * [ algbrut ] .now we would like to find a way to reduce the size of the main matrix as it is function of the product . -representation : list of vertices -representation : list of vertices compute with , in this section we want to use the basic property [ basicprop ] characterizing a minkowski vertex. then the algorithm computes , as done before , all sums of pairs and checks whether there exists a pair with , such as .if it is the case then , otherwise . 
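Before turning to the reduced formulation, here is a minimal sketch of the vertex test just described, stated as the equivalent convex-combination feasibility problem: a point of the sum fails to be a vertex exactly when it lies in the convex hull of the remaining points. The two triangles in the example are arbitrary illustrations; note that a sum point with a non-unique decomposition is automatically rejected, consistent with the characterization of Minkowski vertices above.

```python
import numpy as np
from scipy.optimize import linprog

def is_vertex(points, k):
    """Return True if points[k] is an extreme point of the convex hull of
    `points`.  points[k] is not a vertex exactly when it can be written as a
    convex combination of the other points, a linear feasibility problem."""
    P = np.asarray(points, dtype=float)
    others = np.delete(P, k, axis=0)           # the points p_i, i != k
    m, _ = others.shape
    # Find lambda >= 0 with  others^T @ lambda = points[k]  and  sum(lambda) = 1.
    A_eq = np.vstack([others.T, np.ones((1, m))])
    b_eq = np.concatenate([P[k], [1.0]])
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m, method="highs")
    return not res.success                     # infeasible => extreme point

# Toy example: sum all vertex pairs of two triangles, then filter.
A = np.array([[0, 0], [1, 0], [0, 1]])
B = np.array([[0, 0], [2, 1], [1, 2]])
sums = np.array([a + b for a in A for b in B])
vertices = [tuple(s) for i, s in enumerate(sums) if is_vertex(sums, i)]
print(vertices)
```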
with and with and .we get the following system : that is to say with matrices and under the hypothesis of positivity for both vectors and : we are not in the case of the linear feasibility problem as there is at least one obvious solution : the question is to know whether it is unique or not .this first solution is a vertex of a polyhedron in that verifies equality constraints with positive coefficients .the algorithm tries to build another solution making use of linear programming techniques .we can note that the polyhedron is in fact a polytope because it is bounded .the reason is that , by hypothesis , the set in of convex combinations of the vertices is bounded as it defines the polytope . same thing for in .so in the set of points verifying both constraints simultaneously is bounded too .so we can write it in a more general form : where only the second member is function of and .it gives the linear programming system : thanks to this system we have now the basic property the algorithm relies on : there exists only one pair to reach the maximum as and the decomposition of is unique it is also interesting to note that when the maximum has been reached : -representation : list of vertices -representation : list of vertices compute with and the current state of the art runs linear programming algorithms and thus is solvable in polynomial time .we presented the data such that the matrix is invariant and the parametrization is stored in both the second member and the objective function , so one can take advantage of this structure to save computation time .a straight idea could be using the classical sensitivity analysis techniques to test whether is a minkowski vertex or not from the previous steps , instead of restarting the computations from scratch at each iteration .let s switch now to the geometric interpretation , given , let s consider the cone generated by all the edges attached to and pointing towards its neighbour vertices . after translating its apex to the origin , we call this cone and we call the cone created by the same technique with the vertex in the polytope .the method tries to build a pair , if it exists , with , such that .let s introduce the variable , and the straight line .so the question about being or not a minkowski vertex can be presented this way : the existence of a straight line inside the reunion of the cones is equivalent to the existence of a pair such that which is equivalent to the fact that is not a minkowski vertex .this is illustrated in * figure * [ vsum2 ] .the property becomes obvious when we understand that if exists in then and are symmetric with respect to the origin .once a straight line has been found inside the reunion of two cones , we can test this inclusion with the same straight line for another pair of cones , here is the geometric interpretation of an improved version of the algorithm making use of what has been computed in the previous steps .we can resume the property writing it as an intersection introducing the cone being the symmetric of with respect to the origin .in this paper , our algorithm goes beyond the scope of simply finding the vertices of a cloud of points .that s why we have characterized the minkowski vertices . however , among all the properties , some of them are not easily exploitable in an algorithm . in all the caseswe have worked directly in the polytopes and , i.e. 
in the primal spaces and only with the polytopes -descriptions . other approaches use dual objects such as normal fans and dual cones . references can be found in , and but they need more than the -description for the polytopes they handle . this can be problematic as obtaining the double description can turn out to be impossible in high dimensions , see where fukuda uses both vertices and edges . reference works in a dual space where it intersects dual cones attached to the vertices , and it can be considered as the dual version of property [ primcone ] where the intersection is computed with primal cones . it actually implements weibel's approach described in . such a method has been recently extended to any dimension for -polytopes in . we would like to thank prof . pierre calka from the lmrs at rouen university for his precious help in writing this article . vincent delos and denis teissandier , `` minkowski sum of -polytopes in '' , proceedings of the 4th annual international conference on computational mathematics , computational geometry and statistics , singapore , 2015
|
minkowski sums are of theoretical interest and have applications in fields related to industrial backgrounds . in this paper we focus on the specific case of summing polytopes as we want to solve the tolerance analysis problem described in . our approach is based on the use of linear programming and is solvable in polynomial time . the algorithm we developped can be implemented and parallelized in a very easy way . * keywords : * computational geometry , polytope , minkowski sum , linear programming , convex hull .
|
what wins a race , acceleration or top speed ? in a long race , it s top speed ; in a short race , acceleration . in the evolutionary race to increase population size , an organism s `` top speed '' ( long - term growth rate ) depends on its _ robustness _ : a property of its underlying neutral network which determines the fraction of mutations that are selectively neutral . in a changing environment , however , the race is short. then what matters is not top speed " but `` acceleration '' ( how quickly a population achieves its long - term growth rate ) .we find that over shorter times , or in a changing environment , the most successful organisms are those which are able to reach their top growth rate quickly , even if they ultimately grow with a lower rate ._ robustness_. ever since kimura s initial work on neutral mutations in evolution , the role of robustness in determining population fate has been the subject of intense research . _ in vitro _studies on rna and protein evolution , analyses of molecular codes , and mounting evidence from _ in silico _ rna evolution have highlighted that robustness plays an important role in an organism s capacity to survive and , strikingly , to adapt .recent work suggests these observations also apply to non - biological models of self - assembly and programmable hardware .a detailed understanding of the relation between fitness and robustness in the long - time limit was put forward over a decade ago .this quantified how robustness affects an organisms long - term growth rate and led to the realisation that robustness could sometimes be more important than fitness itself , accounting for the qualitative distinction between survival of the fittest " and survival of the flattest " . _regularity_. we show that an organism s acceleration " depends on its _ regularity _ : a new property of its neutral network which determines how quickly a population reaches its steady - state growth rate . starting from a non - equilibrium distribution of population , regularity adds up the population loss due to deleterious mutations in excess of the population s long - term rate of loss . for some neutral networks ,this excess loss in the run - up to steady state can decimate the population size .we demonstrate this effect for small neutral networks and , by evolving the hammerhead ribozyme _ in silico _ , for large ones . in this paper we do the following six things , corresponding to the subsequent six sections : 1 .we re - derive the infinite - time population growth rate in terms of the fitness and robustness : .we show that is set by the principal eigenvalue of its neutral network .both were first done in .we derive the finite - time population size in terms of growth rate and regularity : .we show is set by the principal eigenvector of its neutral network .we calculate the critical mutation rate which separates the regime where higher fitness wins from the one where higher robustness wins : .we verify this crossover for simple neutral networks .we calculate the critical time which separates the regime where higher regularity wins from the one where higher growth rate wins : .we verify this crossover for simple and complex neutral networks .we provide numerical and analytic evidence that regularity and robustness are uncorrelated . we construct neutral networks with low and high , and high and low . 
6 .we confirm that regularity is subject to selection on short time scales by simulating the evolution of the hammerhead rna in competition with a mutant phenotype ._ mutation graphs ._ we study a generalized genome ( the set of all genotypes ) of length and alphabet size .we associate with the genome a mutation graph , where each of the genotypes corresponds to a vertex and two vertices are connected by an edge if the genotypes differ by a single mutation .each vertex is connected to edges . in much of this paperwe take for simplicity but our results extend to alphabets of arbitrary size .we colour the vertices according to phenotype , where all genotypes in a phenotype have the same colour and belong to the same neutral network ( fig . 1 ) ._ mutational flux ._ mutation induces a population flux across neighbouring genotypes .if the mutation rate per nucleotide is , and the genome length is , the mutational flux is for .it is the fraction of a population that mutates per generation. some of this mutational flux will also cross phenotypic boundaries when neighbouring genotypes lie in two different phenotypes .the rest of the flux is neutral ._ genotype robustness ._ the genotype robustness is the probability of a mutation being neutral , that is , the fraction of edges leaving genotype which do not lead to a different phenotype .it is the number of neutral edges divided by the total number of edges , _ phenotype robustness ._ consider a neutral network ( possibly composed of disjoint clusters ) , of size ( there are genotypes in ) and whose genotypes have fitness .let be the total population on .it is distributed over genotypes in according to .the normalized population is distributed according to and is the fraction of the population on genotype . in the large - time limit, the population is distributed according to a unique distribution , which we abbreviate .we define phenotype robustness to be the large - time popuation - weighted average of the genotype robustness : it is the fraction of the population flux that is neutral .note that , where is the mean value of .this is because the population tends to concentrate in the network interior , away from surface genotypes with low ._ fitness ._ the fitness is the raw reproductive rate of the phenotype .after generations , the total population of a neutral network will have changed by a factor of , in the absence of mutations ._ structure factor ._ at large time , at every generation , a fraction of the population mutates off the neutral network . if we assume that neighbouring phenotypes have negligible fitness , then at each generation the population is multiplied by , in addition to its inherent fitness , where which we call the _ structure factor_. it depends on the shape of the neutral network . while may cause the population to increase or decrease , can only decrease it or leave it as is. _ growth rate ._ the overall factor by which a mutating population changes over the span of a generation is the product of its fitness and its structure factor , where we call the _ growth rate_. the growth rate is that fitness that can be usefully employed to increase the population and not spent replenishing population lost to deleterious mutations incurred at the boundary ; .for a single neutral network taken in isolation , it is rather than which will determine whether and to what extent the population will expand or diminish .= 4 on the 3-cube ( =3 , =2 ) , in order of robustness . 
for a given neutral network ,the area of a vertex is proportional to the fraction of population on it , in the large - time limit . ]_ dirac notation_. for the rest of this section and the next we use dirac , or bra - ket , notation , standard in quantum mechanics .a vector is denoted by and its transpose by .the inner and outer products of and are denoted by and .let the genotype selection vector ^t$ ] ; it selects that component of a vector which projects onto genotype ._ mutation matrix_. the action of mutation on the population distribution over a single generation can be expressed by the mutation matrix : here is the adjacency matrix of the neutral network : if nodes and share an edge and otherwise .the first term is the probability that no mutation occurs and the second the probability of mutating from to .being symmetric , can be diagonalised by its eigenvectors : where the satisfy and ._ population vector_. the population vector gives the size and distribution of the population .its component is the population on , where .the population vector is obtained by transforming an initial vector by and multiplying it by : since , all terms decay exponentially with respect to the first for . in the large time limitthe sum is dominated by the first term , whose eigenvalue is largest : where we have defined the phenotype robustness to be the quantity measures how well the shape of the neutral network can reduce the rate of deleterious mutation acting on the population as a whole .since the eigenvalues depend only on the adjacency matrix , the steady state population distribution depends only on the shape of and on neither the mutation rate nor the fitness .here we introduce a new property of neutral networks , _ regularity _ , which measures the reduction in population size whilst a population evolves out of equilibrium towards steady - state starting from a uniform distribution .we show that the regularity can have a dramatic effect on population size in the short term , long before steady state is reached ._ population size ._ in the previous section we calculated the population_ growth rate _ at infinite time ( steady state ) by computing . herewe explicitly calculate the population _ size _ at finite time by computing substituting from ( [ x_t_large_time_limit ] ) into the above yields the population size depends on the initial distribution . _simulation_. in fig . 3we plot the population size as a function of time for two neutral networks , ( top ) and ( bottom ) . in both, the initial condition is a single adaptive mutant : the population is confined to a single genotype ( grey lines ) . in , the long - term population distribution is very non - uniform .adaptive mutants starting from barren genotypes fare poorly , with long - term population sizes the size of the best performing genotypes . in ,the long - term population distribution is comparably uniform , and the average adaptive mutant fares much better .we explain this phenomenon in terms of the regularity below . in the run - up to steady state ,a uniform population distribution becomes more and more robust .accordingly , more of the population is depleted at early generations than at later ones .the difference between the depletion rates at finite time and at infinite time is the _excess depletion_. the regularity is the geometric reduction in population size due to the cumulative excess depletion , starting from a uniform population distribution . 
in other words , is the ratio of the long - term population size that develops when is a uniform distribution and when is the principal eigenvector .= 4 on the 3-cube ( =3 , =2 ) , in order of regularity . for a given neutral network ,the area of a vertex is proportional to the fraction of population on it , in the large time limit . ]the uniform distribution is . replacing with in ( [ population_size ] ) yields the steady - state distribution is . substituting it into ( [ population_size ] ) yields taking the ratio of ( [ uniform_initial ] ) and ( [ steady_state_initial ] ) ,we obtain for the regularity for an alternative but equivalent definition of regularity , imagine instead that the neutral network is discovered by a single adaptive mutant , with the initial condition .assuming all genotypes in the phenotype are equally likely to be the port of entry , " and recalling that the uniform distribution is the sum over all single adaptive mutants , we see that the fate of is the mean of the fates of , averaged over all in . finally , we can define the regularity in terms of the population distribution .recall that squaring both sides and summing over we find , having used the identity .then the regularity is proportional to the mean square of the population distribution at large time .since the mean square of a normalised distribution is maximised when all the weight is on a single point , and minimised when the distribution is uniform , we observe that satisfies the bounds the left relation is an inequality because it is not possible for a connected neutral network of size 2 or more to have an eigenvector confined to a single genotype .note that for large neutral networks the effect of regularity can be dramatic since can be very small . ) and with growth rate .the top network has low regularity ( ) ; the bottom has high ( ) .grey lines show the evolution of population size , starting from unit size at a single genotype .the population growth rate at steady state is unity , but the population size at steady state varies radically .the red line is the mean population size ; this is equivalent to the population size starting from a uniform initial distribution . at large times, the red line is the regularity . for the irregular network ( top ), the population can diminish by the factor . ]in this section we quantify the transition from fitness dominance to robustness dominance as a function of mutation rate . after a long time when the population is in steady state, it reproduces according to the growth rate where the robustness and is the principal eigenvalue of the network adjacency matrix . _ crossover . _ equation ( [ free_fitness2 ] ) predicts the onset of so - called survival of the flattest " for sufficiently large mutation rates .for example , two neutral networks and with and can show a crossover whereby the more fit network wins at low mutation rate , but the more robust network wins for large .the crossover occurs when , from which we see that the exchange rate between fitness and robustness is fixing all but and rearranging , the critical mutation rate at which we observe a cross - over from survival of the fittest " to survival of the flattest " is for the more fit network wins despite being less robust ; for the more robust wins despite being less fit . _ simulation ._ we illustrate this effect in fig . 
4 , where we simulated five different neutral networks of size 6 drawn from the 5-cube ( ) .because the five network fitnesses are not in the same rank order as their robustnesses , they show a number of crossovers as mutation rate increases ._ comment . _note that , the mutation rate per genotype , plays the role of an effective temperature , governing the transition from rewarding fitness to rewarding flatness .this is analogous to the term in classical thermodynamics , there is temperature and is entropy . at ,robustness has no impact on the dynamics , and ; fitness alone is rewarded . of neutral networkswhose fitness rankings are in a different order to their robustness rankings can show cross over from survival of fittest " , to survival of flattest " . in this case at high mutation rates the positive effect of increased robustness can outweigh the negative effect of a reduced fitness . ]in this section we quantify the transition from regularity dominance to growth rate dominance as a function of time .the population at steady state , starting from a uniform distribution ( equally , averaged over all single adaptive mutants ) is where the regularity and is the principal eigenvector of the network adjacency matrix ._ crossover . _ equation ( [ shezzle ] ) predicts the transition to survival of the most regular " at sufficiently short times .again , two neutral networks and with and can show a crossover whereby the more regular network wins at small ( finite ) time , but the higher growth rate wins for large .the crossover occurs when , and therefore the exchange rate between growth rate and regularity is fixing everything but and rearranging , the critical time at which we observe a cross - over from dominance of most regular " to dominance of highest growth rate " is for the more regular network has a larger population , while for the highest growth rate network dominates ._ simulation . _we again illustrate this effect in fig . 5 , where we simulated five different neutral networks of size 6 drawn from the 5-cube ( ) .because the five network growth rates are not in the same rank order as their regularities , they show a number of crossovers as time increases ._ comment . _ where fitness and robustness combined to give a full picture of growth rate , now growth rate and regularity combine to give a full picture of population size .these effects refer to the _ mean _ size of an evolving population averaged over all single adaptive mutants . as shown in fig . 3, the variance around this mean can be large and the population of the less regular network can be reduced by up to which can be very small indeed . in fig .4 . over short time more regular networks ( orange , red ) can win out over irregular ones ( blue , green ) despite being at a selective disadvantage . ]our simulation in fig . 3 and otherslike it suggest that regular neutral networks , like robust ones , are highly connected. therefore it might seem that regularity is largely determined by robustness .we show in this section , however , that for all but the smallest values of , regularity is not correlated with robustness , and for most values of , there exist neutral networks with a broad range of . _ simulation ._ we studied the relation between robustness and regularity by enumerating all neutral networks for , and sampling neutral networks for and ; in all cases .( the number of neutral networks grows rapidly with , and calculating the principal eigenvectors for each gets more expensive . 
) the networks were constructed as follows : ( i ) with a uniform probability , each of the genotypes was selected to be part of the neutral network ; ( ii ) the uniform probability was slightly increased , and we repeated .the results of the simulation are shown in fig .6 , in a scatter plot of - space . for ( red points ), varies from 0 to 1 , but only varies from 0.77 to 1 ; there are no very irregular subgraphs of a 4-cube . for ( blue points ), dips further down to 0.56 , while for ( grey points ) , dips to 0.42 and more of the - plane is filled .( in actuality , more of the plane is filled than shown due to under - sampling at the frontiers . ) in each case , as continues to increase , more and more of the unit plane is accessible , and for most there is a wide range of ._ tadpoles ._ we can show for large that one can find neutral networks that span almost all of - space .a tadpole network is a -dimensional hypercube with a path ( a graph with no branches ) appended to one corner . for any tadpole , the principal eigenvalue , since for a -cube and for any subgraph of , .then by ( [ r_eigen_def ] ) the robustness of a tadpole satisfies .whereas is dominated by the head of a tadpole , is dominated by the tail , since the population decays exponentially into the tail .this means that as one increases tail size for a given head size the regularity . making the head of the tadpole larger increases both and , but increasing the tail only reduces , leaving almost unchanged . by choosing a head size such that and , wecan then reduce by increasing the size of the tail while having almost no effect on , thereby forming almost any two values of - required .in this section , by simulating the evolution of two rna ribozymes , we provide further evidence that evolution can select for regularity at short time scales .( red points ) , and 10,000 randomly sampled neutral networks for ( blue points ) and ( grey points ) .as increases , more of the - unit square becomes accessible . for three specific grey data points ,the corresponding neutral networks are illustrated . ] _ two phenotypes ._ we consider two rna secondary structures in fig .the first is the hammerhead ribozyme ( ham ) ; the second is a mutant phenotype ( mut ) with a considerably different secondary structure .both are formed from an rna of length .we assign both phenotypes the same fitness . in order to visualise the neutral networks of ham and mut, we constrain mutation so that it can only occur at randomly chosen hotspots " on the sequence ( black squares ) . each network is a slice of its full neutral network along of the possible dimensions .vertex area is proportional to the steady - state population distribution , . of the allowed sequences ( genotypes ) , to ham ( fig . 7 , left ) and fold to mut ( fig . 7 , right ) ; the rest fold to various different secondary structures , not shown .all folding was performed using the viennarna package .ham has lower robustness than mut ( 0.33 vs 0.46 ) and therefore lower growth rate , but a higher regularity ( 0.97 vs 0.67 ) . __ we simulated the evolution of a population of sequences , equally split between ham and mut ( ) . for each run ,the ham population all began on the same genotype , randomly chosen from its neutral network ; the same applies to mut and its network . 
at every generation , for each sequence a point mutation occurred somewhere along the chain with probability , and the sequence survived if its phenotype was preserved , but died if it was not .then the total population was renormalized by randomly selecting sequences with replacement . _ at steady state . _ after a long time when steady state is reached , the story is simple : of the ham population advanced to the next generation , whereas of the mut population did so .the selective advantage conferred by the higher robustness meant that mut always won and ham always lost over 100 runs ( fig .8 , left ) ._ before steady state ._ at shorter time scales , the story is more subtle .although mut is more robust , it is less regular , and of the time the mut population suffered an early invasion from ham ( fig . 8 ,ham ultimately reproduces with a lower rate than mut , but _ on average it gets to its top rate more quickly_. in the mut network ( fig . 7 , right ), there are genotypes located on the left peninsula of the network whose components .if the mut population begins on one of these genotypes , it is likely to suffer an early invasion by ham . for our finite population ,this invasion is sometimes large enough for mut to go extinct due to drift. allowed sequences that fold to the hammerhead ribozyme ( ham ) secondary structure .( right ) the same but for the 141 sequences that fold to a mutant ( mut ) secondary structure .vertex area is proportional to the long - time population distribution , .mut is more robust than ham , but less regular . ] and . ( left ) at steady state , the more robust mut wins every time out of 100 runs .( right ) at shorter times , the more regular ham wins 10 times out of 100 runs .mut ultimately reproduces with a higher rate than ham , but on average ham gets to its top rate more quickly . ]the six main results of this paper are listed at the end of the introduction . herewe present some unifying observations , suggestions for experiment , and generalisations to other fields ._ fitness , growth rate and size . _the progression from fitness to growth rate to population size reflects both their chronology of discovery and their hierarchical relationship ; this is illustrated in fig .fitness alone does not capture population growth rate .robustness , a property of neutral network shape , combines with fitness to include the effect of deleterious mutations , giving the growth rate .similarly , growth rate alone does not capture population size .regularity , a different property of neutral network shape , combines with growth rate to include the effect of higher depletion rates early on , giving the size .both growth rate and size exhibit crossovers : from to as a function of mutation rate , and from to as a function of time , respectively . _ smooth shapes are very rare in the wild , " _ wrote mandelbrot , but extremely important in the ivory tower and the factory . " the robustness and regularity characterise the shape of a graph , one via the graph s principal eigenvalue , the other its principal eigenvector . 
butour intuition is valid for smooth , often euclidean , shapes , not the jagged , high - dimensional shapes of neutral networks , which are themselves subgraphs of hamming graphs .we have presented evidence that and are largely uncorrelated , but their behaviour is far from intuitive , and their precise relationship remains an open question .we speculate that simple models of evolutionary population dynamics , such as the evolution along the 1-d fitness gradient explored in , are likely to exhibit fundamentally different behaviours from the same dynamics on neutral networks . _ experimental implications . _selection for mutational robustness at high mutation rates has been observed in both sub - viral pathogens and clonal bacterial populations .recent experimental work has also shown that selection for second order effects , in this case evolvability , is observed in populations of _ e. coli _ .our work suggests that , in addition to selection for robustness , populations experiencing high mutation rates in a changing environment could be subject to selection for regularity .experiments such as those performed in , adjusted so that the environment is periodically altered , could directly test for the selection of regularity . over short periodswe anticipate that successful organisms would be selected on their ability to reach their top growth rates quickly , rather than on their top growth rates themselves ._ benefit of regularity in other fields ._ for many systems in a changing environment , the ability to achieve its top performance quickly may prevail over just how good its top performance is .for example , in society , a person s innate talent may be less important than the speed with which he acquires new habits or learns new skills . in industry , companies which can quickly produce acceptable versions of desirable productsmay consistently outperform those which eventually produce great versions . for living systems , where the environment is constantly changing, the ability to quickly adopt the most robust population distribution may be an essential attribute of a champion evolver .
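To make the two graph-level quantities used throughout concrete, here is a minimal sketch of their computation from the adjacency matrix of a neutral network. The robustness value follows from the population-weighted definition evaluated at the steady-state distribution, which reduces to the principal eigenvalue divided by (K-1)L; the regularity value follows from the ratio definition, assuming unit initial total population in both the uniform and the steady-state start. The toy network is an arbitrary illustration.

```python
import numpy as np

def robustness_and_regularity(A, K, L):
    """Phenotype robustness and regularity of a neutral network with
    adjacency matrix A, viewed as a subgraph of the Hamming graph with
    alphabet size K and genome length L."""
    A = np.asarray(A, dtype=float)
    S = A.shape[0]                      # number of genotypes in the network
    evals, evecs = np.linalg.eigh(A)    # A is symmetric; eigenvalues ascending
    lam1 = evals[-1]
    e1 = np.abs(evecs[:, -1])           # Perron vector, unit 2-norm, nonnegative
    robustness = lam1 / ((K - 1) * L)   # neutral fraction of the mutational flux
    # Ratio of long-term population sizes: uniform start vs. steady-state start,
    # both with unit initial total population (the ratio definition of regularity).
    regularity = e1.sum() ** 2 / S
    return robustness, regularity

# Toy example: a 4-genotype path inside the 3-cube (K=2, L=3), e.g. 000-001-011-111.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(robustness_and_regularity(A, K=2, L=3))
```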
|
we study the relative importance of top - speed " ( long - term growth rate ) and acceleration " ( how quickly the long - term growth rate can be reached ) in the evolutionary race to increase population size . we observe that fitness alone does not capture growth rate : robustness , a property of neutral network shape , combines with fitness to include the effect of deleterious mutations , giving growth rate . similarly , we show that growth rate alone does not capture population size : regularity , a different property of neutral network shape , combines with growth rate to include the effect of higher depletion rates early on , giving size . whereas robustness is a function of the principal eigenvalue of the neutral network adjacency matrix , regularity is a function of the principal eigenvector . we show that robustness is not correlated with regularity , and observe _ in silico _ the selection for regularity by evolving rna ribozymes . despite having smaller growth rates , the more regular ribozymes have the biggest populations .
|
compressed sensing ( or compressive sampling ) is a novel and emerging technology with a variety of applications in imaging , data compression , and communications . in compressed sensing, one can recover sparse signals of high dimension from few measurements that were believed to be incomplete .mathematically , measuring an -dimensional signal with a measurement matrix produces a -dimensional vector in compressed sensing , where . in recovering , it seems impossible to solve linear equations with indeterminates by basic linear algebra .however , imposing an additional requirement that is -sparse or the number of nonzero entries in is at most , one can recover exactly with high probability by solving the -minimization problem , which is computationally tractable .many research activities have been triggered on theory and practice of compressed sensing since donoho , and candes , romberg , and tao published their marvelous theoretical works .the efforts revealed that a measurement matrix plays a crucial role in recovery of -sparse signals .in particular , candes and tao presented the _ restricted isometry property ( rip ) _ , a sufficient condition for the matrix to guarantee sparse recovery .a _ random _matrix has been widely studied for the rip , where the entries are generated by a probability distribution such as the gaussian or bernoulli process , or from randomly chosen partial fourier ensembles .although a random matrix has many theoretical benefits , it has the drawbacks of high complexity , large storage , and low efficiency in its practical implementation . as an alternative , we may consider a _matrix , where well known codes and sequences have been employed for the construction , e.g. , chirp sequences , kerdock and delsarte - goethals codes , second order reed - muller codes , and dual bch codes .other techniques for deterministic construction , based on finite fields , representations , and cyclic difference sets , can also be found in , respectively . although it is difficult to check the rip and the theoretical recovery bounds are worse than that of a random matrix , the deterministic matrices guarantee reliable recovery performance in a statistical sense , allowing low cost implementation . to enjoy the benefits of deterministic construction ,this correspondence presents how to construct a measurement matrix for compressed sensing via _ additive character sequences_. we construct the matrix by employing additive character sequences with small alphabets as its column vectors .the weil bound is then used to show that the matrix has asymptotically optimal coherence for , and to present a sufficient condition on the sparsity level for unique sparse recovery .the rip of the matrix is also analyzed through the eigenvalue statistics of the gram matrices as in . using additive character sequences with small alphabets ,the matrix can be efficiently implemented by linear feedback shift registers . through numerical experiments ,we observe that the deterministic compressed sensing matrix guarantees reliable and noise - resilient matching pursuit recovery performance for sparse signals .the following notations will be used throughout this correspondence . 1 . is a primitive -th root of unity , where . is the finite field with elements and denotes the multiplicative group of .3 . ] is an element of .4 . 
let be prime , and and be positive integers with .a _ trace _ function is a linear mapping from onto defined by where the addition is computed modulo .let be prime and a positive integer .we define an _ additive character _ of as where for .the weil bound gives an upper bound on the magnitude of additive character sums .we introduce the bound as described in .[prop : weil ] let ] and any . the restricted isometry property ( rip ) presents a sufficient condition for a measurement matrix to guarantee unique sparse recovery .[ def : rip ] the restricted isometry constant of a matrix is defined as the smallest number such that holds for all -sparse vectors , where with .we say that obeys the rip of order if is reasonably small , not close to . in fact , the rip requires that all subsets of columns taken from the measurement matrix should be _ nearly orthogonal _ . indeed, candes asserted that if , a unique -sparse solution is guaranteed by -minimization , which is however a hard combinatorial problem .a tractable approach for sparse recovery is to solve the -minimization , i.e. , to find a solution of subject to , where .in addition , greedy algorithms have been also proposed for sparse signal recovery , including matching pursuit ( mp ) , orthogonal matching pursuit ( omp ) , and cosamp . in particular ,if a measurement matrix is deterministic , we may exploit its structure to develop a reconstruction algorithm for sparse signal recovery , providing fast processing and low complexity . in compressed sensing ,a deterministic matrix is associated with two geometric quantities , _ coherence _ and _ redundancy _the coherence is defined by where denotes a column vector of with , and is its conjugate transpose . in fact , the coherence is a measure of mutual orthogonality among the columns , and the small coherence is desired for good sparse recovery . in general , the coherence is lower bounded by which is called the _ welch bound _ .the redundancy , on the other hand , is defined as , where denotes the spectral norm of , or the largest singular value of .we have , where the equality holds if and only if is a _ tight _ frame . for unique sparse recovery ,it is desired that should be a tight frame with the smallest redundancy .[ cst : mat_add ] let be an odd prime , and and be positive integers where .let and . set a column index to where .for each , , let where and is a primitive element in .for a positive integer , let be distinct integers such that and for each , .then , we construct a compressed sensing matrix where each entry is given by where . in construction [ cst :mat_add ] , if and s are successive odd integers , then each column vector of is equivalent to a codeword of the dual of the extended binary bch code , which has been studied in for compressed sensing . in , xu also presented a similar construction by defining an additive character with large alphabet as where , which is a _ generalization _ of chirp sensing codes . using the weil bound on additive character sums , we determine the coherence of .[ th : coh_add ] in the matrix from construction [ cst : mat_add ] , the coherence is given by where if , the coherence is asymptotically optimal , achieving the equality of the welch bound ._ consider the column indices of and , where . 
according to ( [ eq : b_u ] ) ,let or , and or , respectively .similarly , from ( [ eq : phi_add ] ) , let if , or otherwise .then , the inner product of a pair of columns in is given by in ( [ eq : add_f ] ) , if , then is a nonzero polynomial in ] with reasonably small , the condition numbers should be as small as possible for unique sparse recovery .from this point of view , we observe from figure [ fig : eigen ] that our additive character sensing matrix shows better statistics of condition numbers than the gaussian matrix .this convinces us that in construction [ cst : mat_add2 ] is suitable for compressed sensing in a statistical sense . in construction [ cst :mat_add ] , excluding the first element of , each column of is a _ pseudo - random sequence _ where each element is represented as a combination of trace functions which is modulated by an exponential function .precisely , the pseudo - random sequence is .since a sequence of a trace function is generated by a linear feedback shift register ( lfsr ) , is generated by a combination of different lfsrs where each lfsr has at most registers .generating each column with lfsrs , we can efficiently implement the sensing matrix with low complexity . for more details on a trace function and its lfsr implementation ,see .as an example , figure [ fig : lfsr_add ] illustrates an lfsr implementation generating a sequence , for the matrix in construction [ cst : mat_add2 ] . in the example, we take and , and define the finite field by a primitive polynomial that has the roots of a primitive element and its conjugates .then , specifies a feedback connection of the upper lfsr that generates a ternary sequence of .the lower lfsr , on the other hand , has a feedback connection specified by that has the roots of and its conjugates , generating a ternary sequence of . in the structure ,note that each register can take a value of , or , and the addition and multiplication are computed modulo .finally , the sequences of are generated by the lfsr structure for every possible pairs of initial states corresponding to .as there exist total initial state pairs , the corresponding sequences make columns for .figure [ fig : succ_add ] shows numerical results of successful recovery rates of -sparse signals measured by a compressed sensing matrix in construction [ cst : mat_add2 ] , where total sample vectors were tested for each sparsity level . for comparison , the figure also shows the rates for randomly chosen partial fourier matrices of the same dimension , where we chose a new matrix at each instance of an -sparse signal , in order to obtain the average rate .each nonzero entry of an -sparse signal is independently sampled from the normal distribution with zero mean and variance , where its position is chosen uniformly at random . for both sensing matrices ,the matching pursuit recovery with maximum iteration of was applied for the reconstruction of sparse signals .a success is declared in the reconstruction if the squared error is reasonably small for the estimate , i.e. , . and.,scaledwidth=75.0% ] in the experiment , we observed that if , more than of -sparse signals are successfully recovered for the matrix , which verifies the sufficient condition in corollary [ co : add2 ] .furthermore , the figure reveals that our sensing matrix has fairly good recovery performance as the sparsity level increases .for instance , more than successful recovery rates are observed for , which implies that the sufficient condition is a bit pessimistic . 
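the recovery experiment just described can be reproduced in outline with a short simulation . the sketch below is only an illustration : it replaces the additive character matrix of construction [ cst : mat_add2 ] ( whose field parameters are not repeated here ) by a generic unit - norm random matrix , and it uses an orthogonal matching pursuit loop rather than the plain matching pursuit of the experiment ; the dimensions , the sparsity level , the number of trials and the squared - error threshold for declaring success are arbitrary choices .

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, s):
    """orthogonal matching pursuit: greedily pick columns of A that explain y."""
    m, n = A.shape
    support, residual, coef = [], y.copy(), np.zeros(0)
    for _ in range(s):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n, dtype=complex)
    x_hat[support] = coef
    return x_hat

# stand-in sensing matrix with unit-norm columns (placeholder for the
# deterministic additive-character construction discussed above)
m, n, s, trials = 64, 512, 6, 200
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

successes = 0
for _ in range(trials):
    x = np.zeros(n, dtype=complex)
    idx = rng.choice(n, size=s, replace=False)
    x[idx] = rng.standard_normal(s)                       # nonzero entries ~ N(0, 1)
    x_hat = omp(A, A @ x, s)
    successes += np.linalg.norm(x - x_hat) ** 2 < 1e-6    # assumed success criterion
print(f"empirical recovery rate at sparsity {s}: {successes / trials:.2f}")
```

replacing the stand - in matrix by the lfsr - generated columns described above would reproduce the deterministic setting without changing the recovery loop .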
the sensing matrix also shows better recovery performance than randomly chosen partial fourier matrices with matching pursuit recovery .we made a similar observation from additive character and partial fourier compressed sensing matrices with and . in figure[ fig : succ_add ] , each element of takes , , or , while the partial fourier matrix has the element of where .therefore , the compressed sensing matrix from additive character sequences with small alphabets has low implementation complexity as well as good recovery performance . and .,scaledwidth=75.0% ] in practice , a measured signal contains measurement noise , i.e. , , where denotes a -dimensional complex vector of noise .thus , a compressed sensing matrix must be robust to measurement noise by providing stable and noise resilient recovery .figure [ fig : succ_n ] displays the matching pursuit recovery performance of our sensing matrix in the presence of noise. the experiment parameters and the sparse signal generation are identical to those of noiseless case . in the figure , is -spare for , and signal - to - noise ratio ( snr ) is defined by , where each element of is an independent and identically distributed ( i.i.d . )complex gaussian random process with zero mean and variance .in noisy recovery , a success is declared if after iterations . from figure[ fig : succ_n ] , we observe that the recovery performance is stable and robust against noise corruption at sufficiently high snr , which is similar to that of randomly chosen partial fourier matrices .this correspondence has presented how to deterministically construct a measurement matrix for compressed sensing via additive character sequences .we presented a sufficient condition on the sparsity level of the matrix for unique sparse recovery .we also showed that the deterministic matrix with is ideal , achieving the optimal coherence and redundancy .furthermore , the rip of the matrix has been statistically analyzed , where we observed that it has better eigenvalue statistics than gaussian random matrices . the compressed sensing matrix from additive character sequencescan be efficiently implemented using lfsr structure . through numerical experiments ,the matching pursuit recovery of sparse signals showed reliable and noise resilient performance for the compressed sensing matrix .e. j. candes , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans . inf .theory _ , vol .52 , no . 2 , pp . 489 - 509 , feb .2006 .r. calderbank , s. howard , and s. jafarpour , `` construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property , '' _ ieee journal of selected topics in signal processing _ , vol . 4 , no . 2 , pp .358 - 374 .l. applebaum , s. d. howard , s. searle , and r. calderbank , `` chirp sensing codes : deterministic compressed sensing measurements for fast recovery , '' _ appl . and comput ._ , vol . 26 , pp283 - 290 , 2009 .r. calderbank , s. howard , and s. jafarpour , `` a sublinear algorithm for sparse reconstruction with recovery guarantees , '' _ ieee international workshop on computational advances in multi - sensor adaptive processing _ , pp . 209 - 212 , 2009 .s. howard , r. calderbank , and s. searle , `` a fast reconstruction algorithm for deterministic compressive sensing using second order reed - muller codes , '' _ conference on information systems and sciences ( ciss ) _ , princeton , nj , mar .
|
compressed sensing is a novel technique where one can recover sparse signals from the undersampled measurements . in this correspondence , a measurement matrix for compressed sensing is deterministically constructed via additive character sequences . the weil bound is then used to show that the matrix has asymptotically optimal coherence for , and to present a sufficient condition on the sparsity level for unique sparse recovery . also , the restricted isometry property ( rip ) is statistically studied for the deterministic matrix . using additive character sequences with small alphabets , the compressed sensing matrix can be efficiently implemented by linear feedback shift registers . numerical results show that the deterministic compressed sensing matrix guarantees reliable matching pursuit recovery performance for both noiseless and noisy measurements . additive characters , compressed sensing , restricted isometry property , sequences , weil bound .
|
single particle tracking dates back to the classic study of perrin on brownian motion ( bm ) .it generates the position time series of an individual particle trajectory in a medium ( see , e.g. , refs . ) and when properly interpreted , the information drawn from a single , or a finite number of trajectories , can provide insight into the mechanisms and forces that drive or constrain the motion of the particle .the method is thus potentially a powerful tool to probe physical and biological processes at the level of a single molecule . at the present time , single particle tracking is widely used to characterize the microscopic rheological properties of complex media , and to probe the active motion of biomolecular motors . in biological cells and complex fluids , single particle trajectory ( spt ) methods have , in particular , become instrumental in demonstrating deviations from normal bm of passively moving particles ( see , e.g. , refs . ) .the reliability of the information drawn from spt analysis , obtained at high temporal and spatial resolution but at expense of statistical sample size is not always clear .time averaged quantities associated with a given trajectory may be subject to large fluctuations among trajectories . for a wide class of anomalous diffusions described by continuous - timerandom walks , time - averages of certain particle s observables are , by their very nature , themselves random variables distinct from their ensemble averages .an example is the square displacement time - averaged along a given trajectory , which differs from the ensemble averaged mean squared displacement . by analyzing time - averaged displacements of a particular trajectory realization, subdiffusive motion can actually look normal , although with strongly differing diffusion coefficients from one trajectory to another .standard bm is a much simpler and exceedingly well - studied random process than anomalous diffusion , but still it is far of being as straightforward as one might be tempted to think . even in bounded systems , despite the fact that the first passage time distribution has all moments , first passages to a given target of two independent identical bms , starting at the same point in space , will most likely occur at two distinctly different time moments , revealing a substantial manifestation of sample - to - sample fluctuations .ergodicity , that is , equivalence of time- and ensemble - averages of square displacement holds only in the infinite sample size limit . 
in practice , it means that standard fitting procedures applied to finite ( albeit very long ) trajectories of a given particle will unavoidably lead to fluctuating estimates of the diffusion coefficient .indeed , variations by orders of magnitude have been observed in spt measurements of the diffusion coefficient of the laci repressor protein along elongated dna ( see also section [ z ] ) .significant sample - to - sample fluctuations resulting in broad histograms for the value of the diffusion coefficient have been observed experimentally for two - dimensional ( 2d ) diffusion in the plasma membrane , as well as for diffusion of a single protein in the cytoplasm and nucleoplasm of mammalian cells .such a broad dispersion of the value of the diffusion coefficient extracted from spt measurements , raises important questions about the correct or optimal methodology that should be used to estimate .indeed , these measurements are performed in rather complex environments and each spt has its own history of encounters with other species , defects , impurities , etc . , which inevitably results in rather broad histograms for observed .on the other hand , it is highly desirable to have a reliable estimator of the diffusion coefficient even for the hypothetical `` pure '' cases , such as , e.g. , unconstrained standard bm .a reliable estimator should produce a distribution of as narrow as possible and with the most probable value as close as possible to the ensemble average one .a knowledge of the distribution of such an estimator could provide a useful gauge to identify effects of the medium complexity as opposed to variations in the underlying thermal noise driving microscopic diffusion .commonly used methods of extraction of from the spt data are based on a least square ( ls ) estimate of the time - averaged square displacement and some of its derivatives ( see , e.g. , and the next section ) . a recent study , ref . , focussed on estimators for for 1d bm , the statistics of which is amenable to analytical analysis .several methods for estimating from the spt data were studied and it was shown that a completely different approach - consisting of maximizing the unconditional probability of observing the whole trajectory - is superior to those based on the ls minimization . as a matter of fact , at least in 1d systems the distribution of the maximum likelihood ( ml ) estimator of the diffusion coefficient not only appears narrower than the ls ones , resulting in a smaller dispersion , but also the most probable value of the diffusion coefficient appears closer to the ensemble average . in this paperwe focus first on the case of pure standard bm and calculate exactly , for arbitrary spatial dimension , the distribution of the maximum likelihood estimator of the diffusion coefficient of a single bm trajectory .the parameter here is the lag time ( at which the measurement is started ) which can be set equal to zero for standard bm without any lack of generality .however for anomalous diffusion , or bm in presence of disorder will play a significant role .the symbol denotes ensemble average , so that being the ensemble - average diffusion coefficient. consequently , the random variable is defined as the ratio of the realization - dependent diffusion coefficient , calculated as the weighted time - average along a single trajectory , and the ensemble average diffusion coefficient . 
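a quick numerical illustration may be useful at this point . the sketch below simulates standard brownian trajectories and evaluates , for each of them , the time average of x^2(t)/(2 d t ) over the observation window , normalized by the ensemble - average diffusion coefficient , so that its output plays the role of the ratio discussed above ; since the exact weighting of eq . ( [ u ] ) is not reproduced in the text , this is only a hedged reading of it , and the time step , trajectory length and sample size are arbitrary choices .

```python
import numpy as np

rng = np.random.default_rng(1)

def brownian_trajectory(D, dt, n_steps, dim):
    """standard brownian motion with <x^2(t)> = 2 * dim * D * t."""
    increments = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, dim))
    return np.cumsum(increments, axis=0)

def u_estimate(traj, dt, D_true):
    """time average of x^2(t) / (2 d t), normalized by the true D."""
    n_steps, dim = traj.shape
    t = dt * np.arange(1, n_steps + 1)
    return np.mean(np.sum(traj ** 2, axis=1) / (2.0 * dim * t)) / D_true

D, dt, n_steps, dim, n_samples = 1.0, 1e-3, 5_000, 2, 1500
u = np.array([u_estimate(brownian_trajectory(D, dt, n_steps, dim), dt, D)
              for _ in range(n_samples)])
print("mean of the normalized estimate :", round(u.mean(), 3))   # close to 1
print("variance across trajectories    :", round(u.var(), 3))    # the sample-to-sample spread
```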
clearly , .further on , we analyze here a useful measure of sample - to - sample fluctuations - the distribution function of the random variable where and are two identical independent random variables with the distribution . hence , the distribution probes the likelihood of the event that the diffusion coefficients drawn from two different trajectories are equal to each other . finally , we discuss the effect of disorder on the distributions and for 1d bm in random media .we consider two different models of diffusion in 1d random environments - diffusion in presence of a random quenched potential with a finite correlation length , as exemplified here by the slutsky - kardar - mirny model , and diffusion in a random forcing landscape - the so - called sinai model .the former is appropriate for diffusion of proteins on dna , which is affected by the base - pair reading interaction and thus is sequence dependent , while the latter describes , for example , the dynamics of the helix - coil boundary in a melting heteropolymer .note that in the former case , at sufficiently large times , one observes a diffusive - type motion with , while in the latter case dynamics is strongly _ anomalous _ so that is logarithmically confined , .the paper is outlined as follows : in section [ a ] we recall some common fitting procedures used to calculate the diffusion coefficient from single particle tracking data . in section [ b ] we focus on the maximum likelihood estimator and , generalizing the approach developed in ref . for 1d systems , obtain new results for the moment generating function and the probability density function of the ml estimator for arbitrary spatial dimension . in that sectionwe also obtain the asymptotical behavior of the probability distribution function , as well as its kurtosis and skewness .next , in section [ c ] we focus on the probability distribution function of the random variable - a novel statistical diagnostics of the broadness of the parental distribution which probes the likelihood of the event that two estimates of the diffusion coefficient drawn from two different trajectories are the same .further on , section [ d ] presents a comparison of the commonly used least squares estimator and the maximum likelihood estimator .we show that the latter outperforms the former in any spatial dimension producing a lower variance and the most probable value being closer to the ensemble average value .next , in section [ e ] we focus on brownian motion in presence of disorder .as exemplified by two models of dynamics in systems with quenched disorder - sinai diffusion ( random force ) and slutsky - kardar - mirny model ( random potential ) , disorder substantially enhances the importance of sample - to - sample fluctuations .we show that the observation of values of the diffusion coefficient significantly lower than the ensemble average becomes more probable .we show , as well , that as the strength of disorder is increased , the distribution undergoes a surprising shape - reversal transition from a bell - shaped unimodal to a bimodal form with a local minimum at .finally , we conclude in section [ f ] with a brief recapitulation of our results and some outline of our further research .to set up the scene , we first briefly recall several fitting procedures commonly used to calculate the diffusion coefficient from the spt data .more detailed discussion can be found in refs .we focus here on estimators which yield a first power of .non - linear estimators , e.g. 
a mean maximal excursion method which has been used to study anomalous diffusion and produces , will be analyzed elsewhere .one of the simplest methods consists in calculating a least squares estimate based on the minimization of the integral where the diffusion law is taken either as a linear , , or an affine function , . in particular , for the linear case the least squares minimization yields the following linear - least - squares estimator : where is the normalization factor , , conveniently chosen so that . a second , more sophisticated, approach is based on which is the temporal moving average over a sufficiently long trajectory produced by the underlying process of duration .the diffusion coefficient is then extracted from fits of , or from a related least squares estimator , which is given by the following functional of the trajectory where is the same normalization constant as in eq .( [ alpha ] ) .note that the random variable is again conveniently normalized so that , which enables a direct comparison of the respective distributions of different estimators . as shown in ref . , only provides a slightly better estimate of than .a conceptually different fitting procedure has been discussed in ref . which amounts to maximizing the unconditional probability of observing the whole trajectory , assuming that it is drawn from a brownian process with mean - square displacement ( see eq .( [ msd ] ) ) .this is the maximum likelihood estimate which takes the value of that maximizes the likelihood of , defined as : where the trajectory is appropriately discretized . differentiating the logarithm of with respect to and setting , one finds the maximum likelihood estimate of , which upon a proper normalization is defined by eq .( [ u ] ) . below we will derive the distribution function of the ml estimator and compare it against numerical results for the distribution function of the ls estimator for .let denote the moment generating function of the random variable defined in eq .( [ u ] ) , the squared distance from the origin , of -dimensional bm at time for a given realization , decomposes into the sum being realizations of trajectories of independent 1d bms ( for each spatial direction ) .thus , factorizes where here , in order to calculate , we follow the strategy of ref . and introduce an auxiliary functional : where the expectation is for a bm starting at at time . we derive a feynman - kac type formula for considering how the functional in eq .( [ fc ] ) evolves in the time interval . during this interval the bm moves from to , where is an infinitesimal brownian increment such that and , where denotes now averaging with respect to the increment .for such an evolution we have to order : expanding the right - hand - side of the latter equation to second order in , linear order in and performing averaging , we find eventually the following schrdinger equation : the solution of this equation has been obtained in ref . and gives where is the modified bessel function .consequently , we find the following general result note that is independent of and , as it should in virtue of the scaling properties of the bm .we turn next to the analysis of the distribution of the ml estimator defined in eq .( [ u ] ) .first of all , we calculate several first moments of by merely differentiating the result in eq .( [ t ] ) : consequently , one may expect that all moments tend to as , so that . for fixed , the variance , the coefficient of asymmetry and the kurtosis .all these characteristics vanish when . 
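the statement that the spread of the estimator shrinks with the spatial dimension is easy to check empirically . the short monte carlo sketch below reuses the same weighted time - average estimate as in the previous snippet and prints its empirical variance , skewness and excess kurtosis for d = 1 , 2 , 3 ; it is an illustration under the same assumptions as before , not an evaluation of the exact expressions derived above .

```python
import numpy as np

rng = np.random.default_rng(5)

def u_sample(dim, D=1.0, dt=1e-3, n_steps=5_000):
    """one realization of the normalized weighted time-average estimate."""
    inc = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, dim))
    traj = np.cumsum(inc, axis=0)
    t = dt * np.arange(1, n_steps + 1)
    return np.mean(np.sum(traj ** 2, axis=1) / (2.0 * dim * t)) / D

for dim in (1, 2, 3):
    u = np.array([u_sample(dim) for _ in range(2000)])
    c = u - u.mean()
    var = c.var()
    skew = np.mean(c ** 3) / var ** 1.5
    kurt = np.mean(c ** 4) / var ** 2 - 3.0
    print(f"d={dim}:  variance={var:.3f}  skewness={skew:.2f}  excess kurtosis={kurt:.2f}")
```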
in eq .( [ integral ] ) for ml estimates of bm : the blue solid line ( left ) corresponds to , the red line ( middle ) to and the green line ( right ) to .the open circles present the results of numerical simulations . in the insetthe dashed lines correspond to the small- and large- asymptotics in eqs .( [ small ] ) and ( [ large]).,scaledwidth=45.0% ] note next that since , the poles of are located at , where is the zero of the bessel function .consequently , for even , we can straightforwardly find in form of an infinite series in the zeros of the bessel function .for , has only simple poles so that the expansion theorem gives for and the standard residue calculus yields and where similar results can be readily obtained for greater even . for arbitrary , including odd values , the distribution is defined by inverting the laplace transform and is given by the following integral : where the phase is given by and being the order kelvin functions .finally , we consider the small- and large- asymptotic behavior of the probability density function . to extract the small- asymptotic behavior of we consider the large- form of . from eq .( [ t ] ) we get as .consequently , we find the following strongly non - analytical behavior : the large- behavior of the distribution is defined by the behavior of the moment generating function in the vicinity of , consequently , we find that for , decays as this behavior is , of course , consistent with the series expansions in eqs .( [ 2d ] ) , ( [ 4d ] ) and ( [ 6d ] ) . our results on the distribution are summarized in fig .[ fig1 ] . for standard bm obtained from eqs .( [ integral ] ) and ( [ pomega ] ) . increases with the dimension .the blue ( lower ) solid line corresponds to , the red ( middle ) line to and the green ( upper ) line to .open circles with the same color code ( and relative positions ) present the results of numerical simulations for the ls estimator defined in eq .( [ deltat ] ) . note that an apparent coincidence of the results for the distributions for the ml estimator in 2d and that for the ls estimator in 3d is accidental .it just signifies that the former outperforms the latter ., scaledwidth=45.0% ]suppose next that we have two different independent realizations of bm trajectories , and which we use to generate to independent random variables and . a natural question arising about their suitability as estimators ishow likely is it that they will have the same value ?of course the distributions and thus moments of these two random variables are the same , however a measure of their relative dispersion can be deduced by studying the distribution function of the random variable , defined in eq .( [ omega ] ) .this distribution is given explicitly by and hence , it suffices to know in order to determine . for , ( and ,in fact , for any other even ) , can be evaluated exactly .plugging eq .( [ 2d ] ) into ( [ pomega ] ) , we get : performing the sum over , we arrive at the following result for the distribution in 2d systems numerically obtained distributions for and dimensional systems are presented in fig . 
[ 2dp(w ) ] .notice that in all dimensions is the most probable value of the distribution so that most probably .nevertheless , the distributions are rather broad which signifies that sample - to - sample fluctuations are rather important .we will now show that the ml estimator defined in eq .( [ u ] ) substantially outperforms the ms estimator as defined in eq .( [ deltat ] ) in any spatial dimension .this is a very surprising result as one would intuitively expect , and it is often stated in the literature , that using the process has the effect of a reducing the fluctuations of the estimate of because the process is partially averaged in time . to demonstrate this , we present in fig . [ comp ] a comparison of the analytical results for of the ml estimator with the corresponding distributions of the ls estimator for obtained numerically . in eq .( [ integral ] ) ( solid lines ) and the results of the numerical simulations for the distribution ( histograms ) . from left to right : , 2 and 3.,scaledwidth=45.0% ] indeed , we find that the variance of the distribution equals , and for and , respectively .the distribution of the ml estimator appears to be substantially narrower so that the variance is significantly lower , , and .moreover , the most probable values of are closer to the ensemble average value than the most probable values of to : we observe that the distribution attains its maximal values at , and for and , respectively , while the corresponding maxima of the distribution are located at and .last but not least , the distribution appears to be significantly broader than , as revealed by fig . [ 2dp(w ) ] .the worst performance of the ls estimator is in 1d systems in which the distribution has a bimodal -shape with a local _ minimum _ at , and maxima ( most likely values ) around and .this means that the values and drawn from two different trajectories will most probably be different by an order of magnitude !in this final section we address the question of how the distribution of the ml estimator of a single trajectory diffusion coefficient will change in presence of quenched disorder .we will consider two different models of bm in random 1d environments - diffusion in presence of a random correlated potential and diffusion in presence of a random force . for a particle diffusing in a random energy landscape with variance , correlation length and various disorder strengths . from right to left : blue histogram corresponds to , red - to , green - to , yellow - to and brown - to .the walk duration is , and averages are taken over walks occurring in independent landscapes . for comparison we present the distribution ( solid black curve ) for standard 1d bm ( ) .the corresponding distributions are shown in the inset ( decreases with increasing ).,scaledwidth=45.0% ] first we consider a bm in a 1d inhomogeneous energy landscape , where disorder is correlated over a finite length .this model gives a simple description of diffusion of a protein along a dna sequence , for instance , where the particle interacts with several neighboring base pairs at a time .the total binding energy of the protein is assumed to be a random variable .when the particle hops one neighboring base further to the right or to the left , its new energy is highly correlated to the value it had before the jump . 
modeled this process as a point - like particle diffusing on a 1d lattice of unit spacing with random site energies , whose distribution is gaussian with zero mean , variance and is correlated in space as ] and $ ] , respectively , where .diffusion is asymptotically normal for any disorder strength .nevertheless , the particle can be trapped in local energy minima for long periods of time . during an extended intermediate time regime , it is observed that first passage properties fluctuate widely from one sample to another .our numerical simulations reveal that disorder has a dramatic effect on the distributions and . as shown by fig .[ figpumirny ] , the distribution broadens significantly in the small regime : very small values of the time - average diffusion constant ( compared to the thermal and disorder average ) become increasingly more probable as the disorder strength increases .however , the right tail of is much less affected .similarly , two independent measurements are likely to differ significantly , even in moderately disordered media ( see inset of fig .[ figpumirny ] ) . when , the distribution undergoes a continuous shape reversal transition - from a unimodal bell - shaped form to a bimodal -shape one with the minimum at and two maxima approaching and at larger disorder strengths .unfortunately , it does not seem possible to obtain this critical value analytically .even for the case of a pure brownian motion considered in ref . , such an analysis appears to be extremely difficult .therefore , for sample - to - sample fluctuations becomes essential and it is most likely that the diffusion coefficients drawn from two different trajectories will be different . for sinai diffusion and different strengths of the disorder . from right to left : blue histogram corresponds to , red - to , green - to and yellow - to .the solid black curve depicts the distribution for 1d bm ( ) , and the corresponding distributions are shown in the inset with the same color code ( for close to 0 or 1 , increases with ) . in panel _ ( b ) _ , distribution for sinai diffusion with , the integration time is set to and is varied . decreases with increasing : the different curves corresponds to ( from darker gray to lighter gray ) : ( blue ) , ( red ) , ( green ) and ( yellow ) . in panel _ ( c ) _ , the lag time is set to and the integration time is varied . decreases with increasing : ( blue ) , ( red ) , ( green ) and ( yellow).,scaledwidth=45.0% ] we discuss now the effect of disorder on the distributions and for 1d bm in presence of a quenched uncorrelated random force - the so - called sinai diffusion . in this model, one considers a random walk on a 1d infinite lattice and site dependent hopping probabilities : for hopping from to the site and for hopping to the site .here , are independent , uncorrelated , identically distributed random variables with distribution and the strength of disorder is bounded away from and .it is well - known that in the large- limit the model produces an anomalously slow sub - diffusion , where the angle brackets denote averaging with respect to different realizations of disorder . at shorter times , however , one observes an extended stage with a transient behavior which is substantially different from the asymptotic one . as a consequence, the statistics of , defined in equation , and will depend not only on the integration time but also on the lag time from which a single trajectory is analyzed .we have numerically computed the distributions and . 
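the sinai case described above translates into a very short simulation . in the sketch below the site - dependent hopping probabilities are drawn uniformly in [ 0.5 - eps , 0.5 + eps ] , which is only one possible parametrization of the disorder strength , not necessarily the one meant in the text ; two independent walks are run in two independent landscapes , each gives the same weighted time - average estimate as before ( normalized by the disorder - free value d = 1/2 of the simple symmetric walk ) , and the two - trajectory statistic is taken here as u1 / ( u1 + u2 ) , which is likewise an assumption about the definition whose symbols were lost above . the lattice size , walk length and sample size are kept modest so that the example runs in seconds .

```python
import numpy as np

rng = np.random.default_rng(6)

def sinai_walk(p_right, n_steps):
    """nearest-neighbour walk on Z; at site n the particle hops right w.p. p_right[n]."""
    L = len(p_right)
    x, path = 0, np.empty(n_steps)
    for t in range(n_steps):
        x += 1 if rng.random() < p_right[x % L] else -1   # landscape wrapped periodically
        path[t] = x
    return path

def u_estimate(path, D_free=0.5):
    """time average of x^2(t)/(2t), normalized by the disorder-free D = 1/2."""
    t = np.arange(1, len(path) + 1)
    return np.mean(path ** 2 / (2.0 * t)) / D_free

eps, L, n_steps, n_samples = 0.3, 2048, 5_000, 300    # eps sets the disorder strength
u1 = np.array([u_estimate(sinai_walk(rng.uniform(0.5 - eps, 0.5 + eps, L), n_steps))
               for _ in range(n_samples)])
u2 = np.array([u_estimate(sinai_walk(rng.uniform(0.5 - eps, 0.5 + eps, L), n_steps))
               for _ in range(n_samples)])
omega = u1 / (u1 + u2)
hist, _ = np.histogram(omega, bins=10, range=(0.0, 1.0), density=True)
print("mean / median of the single-trajectory estimate :",
      round(u1.mean(), 3), round(float(np.median(u1)), 3))
print("empirical histogram of the two-trajectory statistic :", hist.round(2))
```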
in fig .[ fig : sinai] we present the dependence of the and statistics on the strength of the disorder . as in the previous disordered potential case , we find that the maximum of shifts toward zero as the disorder gets stronger . for comparison , the solid black line in fig .[ fig : sinai] represents observed for standard bm , .moreover , the stronger the disorder is , the broader the distribution becomes , yielding more peaked maxima in ( see the inset of fig . [fig : sinai] ) .we also note that has a bimodal -shaped form even for the weakest disorder that we have considered , suggesting that the zero - disorder limit is non - analytic compared to the continuous transition observed for diffusion in the random energy landscape of the previous section . in fig .[ fig : sinai] and fig .[ fig : sinai] , we show the statistics of for different values of and . increasing ( or )we observe that changes from an almost uniform form , for which any relation between and is equally probable , to a bimodal -shaped distribution , which signifies that in this regime two ml estimates and will most likely have different values .bimodality is a property of the sinai regime , as also noticed earlier in ref . , and thus it shows up only at sufficiently long times when the trajectories follow the asymptotic ultra - slow diffusion .the distribution is remarkably sensitive to the characteristic aging of sinai diffusion . as a final observation, one may also study the statistics of for trajectories evolving in the same random force field . in this case ( not shown here ) one gets a narrow distribution for of and a unimodal that converges to a delta singularity at when the disorder becomes infinite .this is due to the fact , known as golosov phenomenon , that two trajectories in the same disorder will move together .we have analyzed the reliability of the ml estimator for the diffusion constant of standard brownian motion and shown its superiority over the more commonly used ls estimator in a number of important aspects , notably the variance of the estimator , the proximity of the most probable value to the true mean value and the distribution of the random variable which is a measure of the extent to which two estimations of vary .going beyond the important test case of pure brownian motion we have also analyzed the effect of quenched disorder , modeling fluctuations of the local energy landscape and forces .as one may have intuitively expected , the presence of short range disorder tends to broaden the distribution of the so measured value of , as it presents an additional source of fluctuation .however in the sinai model , in the same realization of the force field , trajectories are disorder dominated and are almost independent of the thermal noise , leading to highly peaked distributions of . 
analytically understanding the distribution of in the presence of disorder presents an interesting mathematical challenge which will involve analysis of the corresponding schrdinger equation with a random drift term .further interesting questions remain to be addressed , in particular can one use the two time correlation function of measured trajectories to obtain better estimators ?single particle tracking technology will undoubtedly further improve in the coming years and many interesting mathematical , statistical and physical challenges will arise in the ultimate goal of getting the most out of the trajectories so obtained .dsd , cmm and go gratefully acknowledge support from the esf and hospitality of nordita where this work has been initiated during their stay within the framework of the `` non - equilibrium statistical mechanics '' program .
|
modern developments in microscopy and image processing are revolutionizing areas of physics , chemistry and biology as nanoscale objects can be tracked with unprecedented accuracy . the goal of single particle tracking is to determine the interaction between the particle and its environment . the price paid for having a direct visualization of a single particle is a consequent lack of statistics . here we address the optimal way of extracting diffusion constants from single trajectories for pure brownian motion . it is shown that the maximum likelihood estimator is much more efficient than the commonly used least squares estimate . furthermore we investigate the effect of disorder on the distribution of estimated diffusion constants and show that it increases the probability of observing estimates much smaller than the true ( average ) value .
|
in many concrete situations the statistician observes a finite path of a real temporal phenomena which can be modeled as realizations of a stationary process ( we refer , for example , to , and references therein ) .here we consider a second order weakly stationary process , which implies that its mean is constant and that only depends on the distance between and . in the sequel, we will assume that the process is gaussian , which implies that it is also strongly stationary , in the sense that , for any , our aim is to predict this series when only a finite number of past values are observed .moreover , we want a sharp control of the prediction error . for this , recall that , for gaussian processes , the best predictor of , when observing , is obtained by a suitable linear combination of the . this predictor , which converges to the predictor onto the infinite past , depends on the unknown covariance of the time series .thus , this covariance has to be estimated . here, we are facing a blind filtering problem , which is a major difficulty with regards to the usual prediction framework .kriging methods often impose a parametric model for the covariance ( see , , ) .this kind of spatial prediction is close to our work .nonparametric estimation may be done in a functional way ( see , , ) .this approach is not efficient in the blind framework . here, the blind problem is bypassed using an idea of bickel for the estimation of the inverse of the covariance .he shows that the inverse of the empirical estimate of the covariance is a good choice when many samples are at hand .we propose in this paper a new methodology , when only a path of the process is observed . for this, following comte , we build an accurate estimate of the projection operator .finally this estimated projector is used to build a predictor for the future values of the process .asymptotic properties of these estimators are studied .the paper falls into the following parts . in section [ s : notations ] , definitions and technical properties of time series are given .section [ s : frame ] is devoted to the construction of the empirical projection operator whose asymptotic behavior is stated in section [ s : rate ] .finally , we build a prediction of the future values of the process in section [ section_schur ] .all the proofs are gathered in section [ s : append ] .in this section , we present our general frame , and recall some basic properties about time series , focusing on their predictions .let be a zero - mean gaussian stationary process . observing a finite past ( ) of the process , we aim at predicting the present value without any knowledge on the covariance operator .since is stationary , let be the covariance between and . 
herewe will consider short range dependent processes , and thus we assume that so that there exists a measurable function defined by this function is the so - called spectral density of the time series .it is real , even and non negative .as is gaussian , the spectral density conveys all the information on the process distribution .+ define the covariance operator of the process , by setting note that is the toeplitz operator associated to .it is usually denoted by ( for a thorough overview on the subject , we refer to ) .this hilbertian operator acts on as follows for sake of simplicity , we shall from now denote hilbertian operators as infinite matrices .recall that for any bounded hilbertian operator , the spectrum is defined as the set of complex numbers such that is not invertible ( here stands for the identity on ) .the spectrum of any toeplitz operator , associated with a bounded function , satisfies the following property ( see , for instance ) : .\ ] ] now consider the main assumption of this paper : [ a : fbornee ] this assumption ensures the invertibility of the covariance operator , since is bounded away from zero . as a positive definite operator, we can define its square - root .let be any linear operator acting on , consider the operator norm and define the warped operator norm as note that , under assumption , hence the warped norm is well defined and equivalent to the classical one finally , both the covariance operator and its inverse are continuous with respect to the previous norms .+ the warped norm is actually the natural inducted norm over the hilbert space where from now on , all the operators are defined on . set <+\infty \right\}\ ] ] the following proposition ( see for instance ) shows the particular interest of : the map defines a canonical isometry between and .the isometry will enable us to consider , in the proofs , alternatively sequences or the corresponding random variables ..1 in we will use the following notations : recall that is the covariance operator and denote , for any , the corresponding minor by note that , when and are finite , is the covariance matrix between and .diagonal minors will be simply written , for any . + in our prediction framework , let and assume that we observe the process at times .it is well known that the best linear prediction of a random variable by observed variables is also the best prediction , defined by ] . here is a growing suitable sequence .hence , the predictor will be here where denotes some estimator of the projection operator onto , built with the full sample . as usual, we estimate the accuracy of the prediction by the quadratic error .\ ] ] the bias - variance decomposition gives = \mathbb{e}\big [ \left(\hat{p}_{o_{k(n ) } } y - p_{o_{k(n ) } } y \right)^2 \big ] + \mathbb{e}\big [ \left(p_{o_{k(n)}}y- p_{\mathbb{z}^- } y \right)^2 \big ] + \mathbb{e}\big [ \left(p_{\mathbb{z}^- } y - y \right)^2\big ] , \ ] ] where ,\ ] ] and .\ ] ] this error can be divided into three terms * the last term ] is a bias induced by the temporal threshold on the projector . 
*the first term ] denotes the indices of the subset used for the prediction step .we define the empirical spectral density as we now build an estimator for ( see section [ s : notations ] for the definition of ) .first , we divide the index space into where : * denotes the index of the past data that will not be used for the prediction ( missing data ) * the index of the data used for the prediction ( observed data ) * the index of the data we currently want to forecast ( blind data ) * the remaining index ( future data ) in the following , we omit the dependency on to alleviate the notations . as discussed in section [ s : notations ] , the projection operator may be written by blocks as : .\ ] ] since , we will apply this operator only to sequences with support in , we may consider .\ ] ] the last expression is given using the following block decomposition , if denotes the complement of in : .\ ] ] hence , the two quantities and have to be estimated . on the one hand , a natural estimator of the first matrix is given by defined as on the other hand , a natural way to estimate could be to use ( defined as ) and invert it .however , it is not sure that this matrix is invertible .so , we will consider an empirical regularized version by setting for a well chosen .set so that .remark that is the toeplitz matrix associated to the function , that has been tailored to ensure that is always greater than , yielding the desired control to compute .other regularization schemes could have been investigated .nevertheless , note that adding a translation factor makes computation easier than using , for instance , a threshold on .indeed , with our perturbation , we only modify the diagonal coefficients of the covariance matrix . .1 infinally , we will consider the following estimator , for any : where the estimator of , with window , is defined as follows this section , we give the rate of convergence of the estimator built previously ( see section [ s : frame ] ) .we will bound uniformly the bias of prediction error for random variables in the close future .first , let us give some conditions on the sequence : [ a : k ] the sequence satisfies * * recall that the pointwise risk in is defined by \right)^2 \right].\ ] ] the global risk for the window is defined by taking the supremum of the pointwise risk over all random variables notice that we could have chosen to evaluate the prediction quality only on .nevertheless the rate of convergence is not modified if we evaluate the prediction quality for all random variables from the close future .indeed , the major part of the observations will be used for the estimation , and the conditional expectation is taken only on the most recent observations .our result will be then quite stronger than if we had dealt only with prediction of . 
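before turning to the rates , the construction above can be condensed into a small numerical sketch . the code below deliberately ignores the missing - data blocks and the schur complement machinery and only illustrates the two ingredients just introduced : empirical autocovariances computed from a single observed path , and a diagonal regularization ( the analogue of the translation factor ) that keeps the estimated covariance invertible before solving for the projection coefficients . the ar(1 ) test signal , the window k and the level rho are arbitrary choices made for the example .

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_autocovariance(x, max_lag):
    """biased empirical autocovariances r_hat(0), ..., r_hat(max_lag) from one path."""
    n = len(x)
    x = x - x.mean()
    return np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_lag + 1)])

def toeplitz_from(r):
    """k x k toeplitz matrix with entries r(|i - j|)."""
    k = len(r)
    return r[np.abs(np.subtract.outer(np.arange(k), np.arange(k)))]

# synthetic stationary gaussian path: an ar(1) process used only as a test signal
phi, n = 0.6, 5_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

k, rho = 20, 0.05            # prediction window and diagonal regularization level
r_hat = empirical_autocovariance(x[:-1], k)           # covariance estimated on the observed past
gamma_hat = toeplitz_from(r_hat[:k]) + rho * np.eye(k)
cross = r_hat[1 : k + 1]                              # covariances between the target and the k past values
coeffs = np.linalg.solve(gamma_hat, cross)

recent = x[-1 - k : -1][::-1]                         # k most recent observations, newest first
print("one-step prediction :", round(coeffs @ recent, 3), "  observed value :", round(x[-1], 3))
```

when no data are missing , the estimated projector essentially reduces to this inner product of the solved coefficients with the most recent observations .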
to get a control on the bias of the prediction , we need some regularity assumption .we consider sobolev s type regularity by setting and define [ a : sobol ] there exists such that .we can now state our results .the following lemmas may be used in other frameworks than the blind problem .more precisely , if the blind prediction problem is very specific , the control of the loss between prediction with finite and infinite past is more classical , and the following lemmas may be applied for that kind of questions .the case where independent samples are available may also be tackled with the last estimators , using rates of convergences given in operator norms .the bias is given by the following lemma [ l : bias ] for large enough , the following upper bound holds, where . in the last lemma , we assume regularity in terms of sobolev s classes .nevertheless , the proof may be written with some other kind of regularity . the proof is given in appendix , and is essentially based on proposition [ p : schur ] .this last proposition provides the schur block inversion of the projection operator .the control for the variance is given in the following lemma : [ l : var ] where again , we choose this concentration formulation to deal with the dependency of the blind prediction problem , but this result gives immediately a control of the variance of the estimator whenever independent samples are observed ( one for the estimation , and another one for the prediction ) .the proof of this lemma is given in section [ s : proof_lemma ] .it is based on a concentration inequality of the estimators ( see comte ) . integrating this rate of convergence over the blind data , we get our main theorem . [t : main ] under assumptions [ a : fbornee ] , [ a : k ] and [ a : sobol ] , for large enough , the empirical estimator satisfies where and are given in appendix .again , the proof of this result is given in section [ s : proof_main_thm ] .it is quite technical .the main difficulty is induced by the blindness . indeed , in this step , we have to deal with the dependency between the data and the empirical projector . obviously , the best rate of convergence is obtained by balancing the variance and the bias and finding the best window .indeed , the variance increases with while the bias decreases .define as the projector associated to the sequence that minimizes the bound in the last theorem .we get : [ tmain ] under assumptions and , for large enough and choosing , we get notice that , in real life issues , it would be more natural to balance the risk given in theorem [ t : main ] , with the macroscopic term of variance given by \big].\ ] ] this leads to a much greater .nevertheless , corollary [ tmain ] has a theoretical interest .indeed , it recovers the classical semi - parametric rate of convergence , and provides a way to get away from dependency .notice that , the estimation rate increases with the regularity of the spectral density .more precisely , if , we obtain .this is , up to the -term , the optimal speed . as a matter of fact , in this case , estimating the first coefficients of the covariance matrix is enough .hence , the bias is very small . proving alower bound on the mean error ( that could lead to a minimax result ) , is a difficult task , since the tools used to design the estimator are far from the usual estimation methods .we aim at providing an exact expression for the projection operator . 
for this, we generalize the expression given by bondon ( , ) for a projector onto infinite past .recall that , for any , and if denotes the complement of in , the projector may be written blockwise ( see for instance ) as : .\ ] ] denote also the inverse of the covariance operator , the following proposition provides an alternative expression of any projection operators .[ p : schur ] one has \end{aligned}\ ] ] furthermore , the prediction error verifies =u^t\lambda_{m}^{-1}u,\ ] ] where the proof of this proposition is given in appendix .we point out that this proposition is helpful for the computation of the bias .indeed , it gives a way to calculate the norm of the difference between two inverses operators .first of all , is a toeplitz operator over with eigenvalues in ] : by applying this result respectively with and and we obtain or , equivalently , with probability lower than . by taking an equivalent , we obtain that there exists such that , for all , for all
|
we tackle the issue of the blind prediction of a gaussian time series . for this , we construct a projection operator built by plugging an empirical covariance estimate into a schur complement decomposition of the projector . this operator is then used to compute the predictor . rates of convergence of the estimates are given .
|
the radio frequency ( rf ) spectrum is becoming overly crowded due to an exponential growth in the number of applications and users that require high - speed wireless communication , anywhere , anytime .the situation just gets worse in indoor environments , such as conference halls or shopping malls , where both the user density and the bandwidth demands are tremendous .moreover , the indoor rf spectrum is crowdedly occupied by a plethora of coexisting wireless services , including cellular networks , wifi networks , bluetooth systems , wireless sensor networks or the internet of things , to name a few . to accommodate the tremendous wireless data demands from high density of users of different services ,high - efficiency spectrum - sharing approaches are highly desired .currently , the cognitive radio and adaptive beamforming are two main dynamic spectrum access solutions that have drawn most attentions . on the one hand , cognitive radio techniques enable unlicensed wireless users to share channels with licensed users that are already using an assigned spectrum .although cognitive radio achieves better spectrum efficiency when the licensed users do not access the band frequently ( such as tv white space ) , it does not help when all users are very active with high density , especially in the aforementioned indoor environments .more importantly , cognitive radio techniques require each wireless device to be able to scan a wide range of frequencies to identify spectrum holes and then lock to that frequency , which necessitates expensive transceivers , antennas , and processors that are not available in most existing devices , if not all . on the other hand ,usually beamforming techniques require smart antenna systems to adaptively focus the transmission and reception of wireless signals . to achieve high spatial resolution to differentiate multiple simultaneous transmissions in the crowded indoor environments , each wireless device needs to be equipped with very large array of antenna elements , which is impossible for the current and future portable , wearable , or even smaller devices .[ fig.0 ] to address the aforementioned problems , in this paper , we propose a new spectrum sharing technique based on smart reflect - arrays to improve indoor network capacity for literally any existing wireless services without any modification of the hardware and software in the devices . as shown in fig .[ fig.0 ] , the smart reflect - array panels are hung on the walls in the indoor environment . although the reflect - array does not buffer or process any incoming signals , it can change the phase of the reflected wireless signal . by optimally controlling the phase shift of each element on the reflect - array , the useful signals for each transmission pair can be enhanced while the interferences can be canceled . as a result , multiple wireless users in the same room can access the same spectrum band at the same time without interfering each other . to prove the feasibility of the proposed solution , an experimental testbed is first developed and evaluated . then, the effects of the reflect - array on transport capacity of the indoor wireless networks is investigated . 
through experiments , theoretical deduction , and simulations, this paper demonstrates that significantly higher spectrum - spatial efficiency can be achieved by using the smart reflect - array without any modification of the hardware and software in the users devices .the remainder of this paper is organized as follows .the system design and proof experiment are introduced in section ii .then , the transport capacity , including theoretical upper bounds and achievable bounds for arbitrary networks , is derived and analyzed in section iii .numerical analysis is presented in section iv .finally , the paper is concluded in section v.in this section , we first present the system architecture of the new reflect - array - based spectrum sharing solution . then, an experimental testbed is designed and implemented to prove the feasibility of the proposed solution .the architecture of the proposed system is illustrated in fig .[ fig.0 ] .there are two pairs of wireless users in a conference room , whose devices can adopt any existing or future wireless services .the smart reflect - arrays hung on the walls can effectively change the signal propagations of any wireless transmission by tuning the electromagnetic response ( phase shift ) of each reflector unit on the panel .hence , the wireless signal from either transmitter can be spatially modulated and projected to arbitrary regions while not interfering other regions . as a result, each receiver clearly hears from the expected transmitter as if only such transmitter accesses the spectrum , while actually there can be many other wireless users and services simultaneously use the same frequency band .the reflect - array panel actually reconfigure the signal propagation .the spatial distribution of the signal strength from different transmitters forms a chessboard of high resolution regions , each of which is private for only one wireless transmission .since it is the reflect - array that manipulates the spatial modulation , the users can use any type of wireless devices and wireless services without any change in hardware or software .different from existing mimo , beamforming , or active relay techniques , the proposed smart reflect - array moderates the spatial distribution of multiple wireless transmissions in a passive way . as long an em wave - carried wireless transmission exists in the indoor environment ,no matter where it comes from ( e.g. , cell phone , laptop , bluetooth speaker , smart home sensors , or cleaning robots ) , the smart reflect - array(s ) can reconfigure the spatial distribution of the wireless energy due to such transmission .moreover , different from existing beamforming mechanisms , the proposed system achieves the spatial diversity in the middle of wireless transmissions , neither in the transmitter nor in the receiver .this property further guarantees the compatibility to all possible wireless systems . to validate the feasibility of the proposed system, we develop a experiment testbed .since we expect to have a flexible control of the electromagnetic response on reflect - array , it is necessary to optimally design the reflect - array panel and its peripheral circuits . 
for the reflect - array design ,the basic idea is that by loading the microstrip patches with electronically - controlled capacitors , the resonant frequency of each reflector unit can be changed to increase the usable frequency range .more importantly , since the signals are required to be efficiently reflected on the reflect - array , the patches should be designed to have a satisfying reflection coefficient . in this design, the reflect - array is used to work at an operating frequency of 2.4 ghz , which is suitable for wifi service in indoor environments .we design rectangle - structure patches as introduced in for the reflector units as shown in fig .[ fig.design ] , where the dimensions of the patch are developed as , , .the distance between the patch and ground plane is .the relative dielectric constant , which can be realized by most of the pcb fabrications .we design totally reflector units and each reflector is controlled by a bias voltage to tune the varactors ( ) for changing the capacitance . a view of developed smart reflect - arrayis shown in fig .[ fig.view ] .simulations in fig .[ fig.simulation ] show the electromagnetic response the reflector unit by comsol . as shown in fig .[ fig.rcs ] , by using an operating frequency of 2.4 ghz , the designed patch get the maximum radar cross section ( rcs ) , which means the reflection is optimized at such operating frequency . the energy distribution on the patch shown in fig . [ fig.energy ]further demonstrates that the resonance can be obtained at 2.4 ghz . in this reflect - array , we use mega2560 micro - controllers as fig .[ fig.02a ] to generate pwm signals to control the reflectors .since totally reflectors need to be independently controlled , micro - controllers are used that each one outputs 12 independent pwm signals .rc low - pass filter is designed as fig .[ fig.02b ] to convert the pwm signal to a certain bias voltage from 0 to 5 volts .therefore , the reflectors become flexible to change the electromagnetic response to control the signals reflected on them .[ fig.11 ] [ fig.12 ] the experiment setup is illustrated in fig .[ fig.01a ] , where a receiver ( rx ) is deployed in front of the reflect - array .a transmitting antenna ( tx1 ) working as the signal source is deployed away on left of the receiver .another transmitting antenna ( tx2 ) is deployed as well to interfere the communication between tx1 and rx .the reflect - array consists of reflectors and each reflector is controlled by a bias voltage to tune the varactors for changing the phase shift .micro - controllers are used to give a control voltage to each reflector patch .an overview of experimental facilities is shown in fig . [ fig.01b ] . during the experiment , tx1 andtx2 are transmitting signals along the whole spectrum and the frequency response at rx is observed by spectrum analyzer . in fig .[ fig.11 ] , we respectively measure the received signal strength from tx1 and tx2 without reflect - array nearby .since the two transmission distances are the same ( ) , the received signal strengths are almost the same around -45 dbm . in fig .[ fig.12 ] , by deploying the reflect - array and optimally tuning each reflector , the interference has been canceled to -73 dbm and the interference - plus - noise ratio ( sinr ) is increased to about 30 db . 
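as a side note on the control electronics just described , a first - order rc low - pass is enough to turn an 8 - bit pwm output into a quasi - dc varactor bias . the numbers below ( 5 v rail , 490 hz pwm , r = 10 kohm , c = 10 uf ) are illustrative assumptions rather than the components of the testbed ; the snippet only maps a target bias voltage to a duty - cycle value and gives a rough first - order estimate of the residual ripple .

```python
import numpy as np

v_rail = 5.0        # assumed supply rail (volts)
f_pwm = 490.0       # assumed pwm frequency of the micro-controller output (hz)
R, C = 10e3, 10e-6  # assumed filter components: 10 kohm and 10 uF

f_cut = 1.0 / (2.0 * np.pi * R * C)    # first-order cutoff of the rc filter

def duty_for_bias(v_bias):
    """8-bit duty value whose mean output equals the requested bias voltage."""
    return int(round(255 * v_bias / v_rail))

def ripple_pp(v_bias):
    """rough peak-to-peak ripple of a first-order rc driven by the pwm square wave."""
    d = v_bias / v_rail
    return v_rail * d * (1.0 - d) / (f_pwm * R * C)

for v in (1.0, 2.5, 4.0):
    print(f"bias {v:.1f} v -> duty {duty_for_bias(v):3d}/255 , "
          f"ripple ~ {1e3 * ripple_pp(v):.1f} mv (cutoff {f_cut:.2f} hz)")
```

with every reflector biased through such a channel , tuning the 48 phase states is what cancels the interference to about -73 dbm and lifts the sinr to roughly 30 db in the measurement reported above .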
by canceling the interference in this way, the communication between tx1 and rx can be established without being disturbed by tx2. the experimental results prove the concept of the proposed spectrum sharing solution in the case of two simultaneous transmissions. furthermore, we expect the smart reflect-array to be able to simultaneously accommodate a large number of indoor wireless users with different services in very limited spectrum bands. the rf spectrum utilization efficiency can thus be raised to a new level, which benefits all possible wireless systems and services. it is therefore necessary to explore the capacity of spectrum sharing with multiple users in indoor environments.

in this section, we first analyze the effect of the reflect-array on the communication by developing accurate channel models. based on the channel model, an upper bound on transport capacity is theoretically derived. then, an achievable transport capacity is derived based on a practical reflect-array control algorithm. it should be noted that the objective in this paper is to find the maximum possible transport capacity of the network. therefore, we assume the network topology can be arbitrarily designed to achieve such capacity. such a network topology is commonly referred to as an arbitrary network. the influence of the reflect-array on the channel is shown in fig. [fig.1]. the signal received at the receiving side is the superposition of the direct signal and the signals reflected by the patch antennas. thus, the received signal strongly depends on the phases of the multi-path propagation. assuming that the baseband transmitted signal is a train of raised cosines bearing bpsk symbols, the received signal over the channel shown in fig. [fig.1] can be expressed in the time domain as: where is the operating frequency, and are the attenuation and phase shift of the -th path (the -th path corresponds to the los), respectively, is the phase induced by the -th reflector for all , and is the noise component of the received signal. the reflect-array naturally extends to serve wireless networks accessed by multiple users. as shown in fig. [fig.2], a number of users are served by the reflect-array to establish pairs of communication. not only is the signal from the source reflected by the reflect-array, but the interference from the other transmitting nodes is affected as well. therefore, the received signal at a given receiving node can be written as: where the subscript indicates a transmitting node different from node . for example, is the attenuation from transmitting node to receiving node through path . the first term in ([multireceived]) is the signal from the intended transmitting node, while the second term collects the signals from the other, interfering, sources. the phases of both signal and interference are controlled by the phase induced on each reflector. since the received signal consists of the signal from the source, interference and noise, whether the communication can be established or not depends on the signal to interference plus noise ratio (sinr). according to the received signal expressed in ([multireceived]), the sinr for the -th receiver can be derived as: where ^t , \quad \rho_l^2(t)\triangleq e\{\left|m_l(t)\right|^2\ } , \\\textbf{v}_\phi \triangleq [ 1 \quad e^{j \phi_1 } ...\quad e^{j \phi_n } ] , \quad \quad \quad \quad \sigma_l^2(t ) \triangleq e\{\left| n_l(t)\right|^2\}.
\end{split}\end{aligned}\ ] ] the effect of the reflect-array on the wireless network is determined by the phase-control vector . the focus of this subsection is to derive information-theoretic upper bounds on transport capacity. according to , the transport capacity of the network shown in fig. [fig.2] is defined as: where is the distance of the direct path between the -th transmitter and the -th receiver, is the rate of signal decay, and is the feasible data rate if simultaneous reliable communication at rate is possible for all communication pairs. since is usually kept fixed in theoretical analysis, the upper bound on transport capacity is determined by , which can be derived from the sinr constraint: where is the sinr threshold. according to ([sinr]) and ([restrictionsinr]), we have: where is the set of all receiving nodes. thus, the denominator of the left-hand side of ([sinrresult]) represents the noise plus the received power from the source and the interferers. by considering how the attenuation and phase shift vary with propagation distance, we have where is the wave number, is the propagation length from transmitter to receiver reflected on the -th element of the reflect-array, denotes the direct path without reflection, and is the difference of the propagation lengths, i.e. . from ([sinrresult]) and ([set]), we can derive: where and is given by: ^ 2 \\ & + \sum_{k \in \gamma } \left [ \sum_{i=1}^n \frac{1}{d_{l , k , i}^\alpha } sin \left(k_0 \delta d_{l , k , i}-\phi_i \right ) \right]^2 . \end{split}\end{aligned}\ ] ] it is clear that the upper bound is determined by the lower bound of the denominator of the fraction in ([step1]). therefore, the quantity expressed in ([i]) becomes the dominating factor of the transport capacity. by expanding the first term in ([i]), we have: ^ 2 + \left[\sum_{i=1}^n d_{d , k , i}^{-\alpha } sin(k_0 \delta d_{l , k , i } - \phi_i)\right]^2 \right\ } \\ \quad & \geq \sum_{k \in \gamma } \left\ { d_{l , k , o}^{-2\alpha } + \sum_{i=1}^n 2 d_{l , k,0}^{-\alpha } d_{l , k , i}^{-\alpha } cos(k_0 \delta d_{l , k , i } - \phi_i ) \right . \\ & \left . + \left[\sum_{i=1}^n d_{l , k , i}^{-\alpha } sin(k_0 \delta d_{l , k , i } - \phi_i + \frac{\pi}{4})\right]^2 \right\}. \end{split}\end{aligned}\ ] ] the inequality in ([step2]) follows from the inequality of arithmetic and geometric means, . the result can then be developed further as: ^ 2 \right\}.\\ & \geq \sum_{k \in \gamma } \left\ { \sum_{i=1}^n 2d_{l , k,0}^{-\alpha } d_{l , k , i}^{-\alpha } cos(k_0 \delta d_{l , k , i } -\phi_i - \frac{\pi}{4 } ) \right.\\ & \left . + \sum_{i=1}^n 2d_{l , k,0}^{-\alpha } d_{l , k , i}^{-\alpha } cos(k_0 \delta d_{l , k , i } -\phi_i ) \right\}\\ & = 2 \sqrt{\sqrt{2}+2 } \sum_{k \in \gamma } \sum_{i=1}^n d_{l , k,0}^{-\alpha } d_{l , k , i}^{-\alpha } cos(k_0 \delta d_{l , k , i } -\phi_i -\frac{\pi}{8 } ) . \end{split}\end{aligned}\ ] ] in ([step3]), we use the fact that and the trigonometric sum formula. in this derivation, we consider that the phase control ], where and are respectively the minimum and maximum propagation lengths in the network. obviously, and are determined by the geometry and dimensions of the network. by substituting ([step3]) into ([step1]) and considering pairs of communications with a rate of each, we have: the upper bound on transport capacity has been theoretically calculated in the above subsection.
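the sinr expression that drives this derivation lends itself to a direct numerical evaluation. the sketch below (an illustration only; it assumes amplitude attenuation proportional to d^(-alpha) per path, a single reflection per array element, hypothetical node and element positions, and ignores the raised-cosine pulse shaping) computes the sinr at each receiver of a multi-pair network for a given phase-control vector, which is exactly the quantity constrained by the threshold in the capacity bound.

import numpy as np

ALPHA = 2.0                      # rate of signal decay (path-loss exponent), assumed
K0 = 2 * np.pi * 2.4e9 / 3e8     # wave number at the 2.4 ghz operating frequency
NOISE = 1e-9                     # noise power, hypothetical

def complex_gain(tx, rx, elements, phases):
    # direct path plus one reflected path per element, amplitude ~ d**(-ALPHA)
    d0 = np.linalg.norm(rx - tx)
    h = d0 ** (-ALPHA) * np.exp(-1j * K0 * d0)
    for el, phi in zip(elements, phases):
        d = np.linalg.norm(el - tx) + np.linalg.norm(rx - el)
        h += d ** (-ALPHA) * np.exp(-1j * (K0 * d - phi))
    return h

def sinr(l, txs, rxs, elements, phases):
    # sinr of pair l: |h_{l,l}|^2 over noise plus the interference powers |h_{l,k}|^2
    sig = abs(complex_gain(txs[l], rxs[l], elements, phases)) ** 2
    interf = sum(abs(complex_gain(txs[k], rxs[l], elements, phases)) ** 2
                 for k in range(len(txs)) if k != l)
    return sig / (NOISE + interf)

# hypothetical two-pair layout in a 6 m x 6 m room, array on the wall y = 0
elements = np.array([[0.06 * i, 0.0] for i in range(16)])
txs = np.array([[1.0, 4.0], [5.0, 4.0]])
rxs = np.array([[1.5, 1.5], [4.5, 1.5]])

phases = np.zeros(len(elements))          # all-zero phase control as a baseline
for l in range(2):
    print("pair %d: sinr = %.1f db" % (l, 10 * np.log10(sinr(l, txs, rxs, elements, phases))))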
however, due to the restrictions imposed by the network geometry, the propagation lengths cannot vary independently from to . thus, the upper bound derived above becomes unachievable when a practical network deployment is considered. in this subsection, we develop an algorithm to find the achievable bound on transport capacity by taking the geometry of arbitrary networks into account. as shown in fig. [fig.3], a square is divided by a grid into small pixels. therefore, the distance between two adjacent intersection points is determined by . a reflect-array is located at to serve the communication of the network. then, we define a status vector to denote the positions of the nodes shown in fig. [fig.3a]: where the superscript indicates that the vector is for status , and and are the and positions of node in status , respectively. after fixing the positions of the nodes, the phase control is varied from to to search for the maximum transport capacity using ([step1]). for status , the maximum transport capacity with optimal phase control is denoted as . then, as shown in fig. [fig.3b], we move one node in this network to get the next status by only changing . similarly, a maximum capacity for status can be achieved as . therefore, by traversing all combinations of node deployments, the achievable bound on transport capacity can be found by: where is the total number of statuses. the detailed traversal over all deployment statuses is shown in algorithm 1.

* input * , , , ;
* for * * to * * step *
  * for * * to * * step *
    * for * * to * * step *
      * for * * to * * step *
        * for * * to * * step *
          ;
          * if * * then * ; * else * ; * end if *
        * end for *
      * end for *
    * end for *
  * end for *
* end for *
* output * ;

in this section, we first compare the upper bound on transport capacity mathematically derived in the above section to the case without reflect-arrays. then, we evaluate the achievable bound by considering the geometry of the networks. fig. [fig.4a] shows an evaluation of the upper bound for a varying number of communication pairs. in this evaluation, communication pairs are deployed in a square room with an edge length of . we use a transmitting power of mw for all transmitting nodes. the noise level is set to dbm. the sinr threshold is db. the rate of signal decay is . the transmission rate is . the red, blue and green curves respectively show the upper bound of transport capacity when using a reflect-array of 24, 36 and 48 patched reflectors. the black curve shows the upper bound on capacity without a reflect-array. obviously, the transport capacity can be improved by using a reflect-array. an increase in capacity of about can be obtained by increasing the number of patches from 24 to 48. as the number of communication pairs increases, the upper bounds become higher, since the transport capacity of a network is defined as the summation of the capacities of all communications. in fig. [fig.4b], the edge length is varied from to . the transport capacity increases with the edge length, since the interfering nodes can be optimally deployed further away from the receiving nodes in a larger indoor space. achievable bounds on transport capacity are shown in fig. [fig.5]. as shown in fig. [fig.5a], the reflect-array is deployed at . we divide the square into pixels for the deployment of nodes.
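the variable names of algorithm 1 are lost in the extraction above, but its structure survives: an outer traversal over candidate node placements on the grid and an inner search over phase-control vectors, keeping the best transport capacity found so far. the achievable-bound results discussed here are produced by exactly this kind of traversal. the sketch below is a simplified stand-in, not the authors' code: it uses a coarse random phase search instead of a full sweep, a hypothetical two-pair layout, and assumes that transport capacity is the sum, over pairs whose sinr clears the threshold, of the rate times the source-destination distance (the exact distance weighting in the text's definition is stripped).

import itertools
import numpy as np

rng = np.random.default_rng(0)
ALPHA, K0, NOISE, BETA, RATE = 2.0, 2 * np.pi * 2.4e9 / 3e8, 1e-9, 10.0, 1.0

def gain(tx, rx, elements, phases):
    d0 = np.linalg.norm(rx - tx)
    h = d0 ** (-ALPHA) * np.exp(-1j * K0 * d0)
    for el, phi in zip(elements, phases):
        d = np.linalg.norm(el - tx) + np.linalg.norm(rx - el)
        h += d ** (-ALPHA) * np.exp(-1j * (K0 * d - phi))
    return h

def capacity(txs, rxs, elements, phases):
    # sum of rate * distance over pairs whose sinr clears the threshold BETA
    total = 0.0
    for l in range(len(txs)):
        sig = abs(gain(txs[l], rxs[l], elements, phases)) ** 2
        interf = sum(abs(gain(txs[k], rxs[l], elements, phases)) ** 2
                     for k in range(len(txs)) if k != l)
        if sig / (NOISE + interf) >= BETA:
            total += RATE * np.linalg.norm(rxs[l] - txs[l])
    return total

elements = np.array([[0.06 * i, 0.0] for i in range(16)])
grid = [np.array([x, y]) for x in (1.0, 3.0, 5.0) for y in (1.5, 4.5)]

best = 0.0
# outer traversal over placements of (tx1, rx1, tx2, rx2) on the coarse grid
for tx1, rx1, tx2, rx2 in itertools.permutations(grid, 4):
    txs, rxs = np.array([tx1, tx2]), np.array([rx1, rx2])
    # inner (randomised) search over phase-control vectors
    for _ in range(20):
        phases = rng.uniform(0, 2 * np.pi, len(elements))
        best = max(best, capacity(txs, rxs, elements, phases))
print("achievable transport capacity found by the search: %.2f bit-metres" % best)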
in the actual evaluation, the phase control on the reflect-array is varied from to with a step of to search for the optimal phases. as shown by the results in fig. [fig.5b], the transport capacity can be improved by using a reflect-array with more patches. in fig. [fig.7], an extra reflect-array is deployed at to serve the communication. compared to the result shown in fig. [fig.5], the capacity is further improved by when two reflect-arrays are used. we observe the results of deploying a third and a fourth reflect-array successively, as shown in fig. [fig.8] and fig. [fig.9]. from [fig.5b] to [fig.9b], the transport capacity increases by about . a comparison between the upper bound and the achievable bound is shown in fig. [fig.10]. as shown in fig. [fig.10a], 5 pairs of nodes are deployed in a square room with one reflect-array located at . the edge length varies from to . the results with a 24-patch reflect-array and without a reflect-array are shown in fig. [fig.10b]. there is a gap between the two red/black curves, since the theoretical upper bounds are derived by considering an ideal deployment of nodes which does not exist in practical situations.

in this paper, we present a novel approach to improve the spectrum sharing capacity in indoor environments by using smart reflect-arrays. the feasibility of the approach has been verified by the experimental results. theoretical derivations and an algorithm have been developed to evaluate the effects of using a reflect-array. compared to the case without reflect-arrays, the numerical analysis shows a significant improvement in transport capacity when reflect-arrays are utilized. although the two-user experiment in this paper validates the feasibility of the new spectrum sharing solution, our theoretical analysis shows that the smart reflect-array can be used for multiple simultaneous communications. thus, a new testbed for spectrum sharing among multiple users will be designed and implemented in the next step. moreover, to accommodate multiple users and maintain robust communications in real time, optimal control algorithms will be designed to configure the reflect-arrays according to the real-time spectrum usage in the indoor environment.

k. huang, v. k. n. lau and y. chen, ``spectrum sharing between cellular and mobile ad hoc networks: transmission-capacity trade-off,'' _ ieee journal on selected areas in communications _, vol. 27, no. 7, pp. 1256-1267, september 2009.
e. z. tragos, s. zeadally, a. g. fragkiadakis and v. a. siris, ``spectrum assignment in cognitive radio networks: a comprehensive survey,'' _ ieee communications surveys and tutorials _, pp. 1108-1135, january 2013.
k. t. phan, s. a. vorobyov, n. d. sidiropoulos and c. tellambura, ``spectrum sharing in wireless networks via qos-aware secondary multicast beamforming,'' _ ieee transactions on signal processing _, pp. 2323-2335, february 2009.
j. liu, w. chen, z. cao and y. j. zhang, ``cooperative beamforming for cognitive radio networks: a cross-layer design,'' _ ieee transactions on communications _, no. 5, pp. 1420-1431, april 2012.
t. s. g. basha, g. aloysius, b. r. rajakumar, m. n. g. prasad and p. v. sridevi, ``a constructive smart antenna beam-forming technique with spatial diversity,'' _ iet microwaves, antennas and propagation _, vol. 6, no. 7, pp. 773-780, may 2012.
|
the radio frequency (rf) spectrum becomes overly crowded in some indoor environments due to the high density of users and bandwidth demands. to accommodate the tremendous wireless data demands, efficient spectrum-sharing approaches are highly desired. to this end, this paper introduces a new spectrum sharing solution for indoor environments based on the use of a reconfigurable reflect-array in the middle of the wireless channel. by optimally controlling the phase shift of each element of the reflect-array, the useful signals for each transmission pair can be enhanced while the interference can be canceled. as a result, multiple wireless users in the same room can access the same spectrum band at the same time without interfering with each other. hence, the network capacity can be dramatically increased. to prove the feasibility of the proposed solution, an experimental testbed is first developed and evaluated. then, the effects of the reflect-array on the transport capacity of indoor wireless networks are investigated. through experiments, theoretical deduction, and simulations, this paper demonstrates that significantly higher spectrum-spatial efficiency can be achieved by using the smart reflect-array without any modification of the hardware or software in the users' devices.
|
in recent years , we have witnessed the growth of a number of theories of uncertainty , where imprecise ( lower and upper ) probabilities and previsions , rather than precise ( or point - valued ) probabilities and previsions , have a central part .here we consider two of them , glenn shafer and vladimir vovk s game - theoretic account of probability , which is introduced in section [ sec : shafer - and - vovk ] , and peter walley s behavioural theory , outlined in section [ sec : walley ] . these seem to have a rather different interpretation , and they certainly have been influenced by different schools of thought : walley follows the tradition of frank ramsey , bruno de finetti and peter williams in trying to establish a rational model for a subject s beliefs in terms of her behaviour .shafer and vovk follow an approach that has many other influences as well , and is strongly coloured by ideas about gambling systems and martingales .they use cournot s principle to interpret lower and upper probabilities ( see ; and ( * ? ? ?* chapter 2 ) for a nice historical overview ) , whereas on walley s approach , lower and upper probabilities are defined in terms of a subject s betting rates . what we set out to do here , and in particular in sections [ sec : connections ] and [ sec : interpretation ] , is to show that in many practical situations , the two approaches are strongly connected .. ] this implies that quite a few results , valid in one theory , can automatically be converted and reinterpreted in terms of the other .moreover , we shall see that we can develop an account of coherent immediate prediction in the context of walley s behavioural theory , and prove , in section [ sec : weak - law ] , a weak law of large numbers with an intuitively appealing interpretation .we use this weak law in section [ sec : scoring ] to suggest a way of scoring a predictive model that satisfies a. philip dawid s _ prequential principle _ .why do we believe these results to be important , or even relevant , to ai ?probabilistic models are intended to represent an agent s beliefs about the world he is operating in , and which describe and even determine the actions he will take in a diversity of situations .probability theory provides a normative system for reasoning and making decisions in the face of uncertainty .bayesian , or precise , probability models have the property that they are completely decisive : a bayesian agent always has an optimal choice when faced with a number of alternatives , whatever his state of information . 
while many may view this as an advantage , it is not always very realistic .imprecise probability models try to deal with this problem by explicitly allowing for indecision , while retaining the normative , or coherentist stance of the bayesian approach .we refer to for discussions about how this can be done .imprecise probability models appear in a number of ai - related fields .for instance in _probabilistic logic _ :it was already known to george boole that the result of probabilistic inferences may be a set of probabilities ( an imprecise probability model ) , rather than a single probability .this is also important for dealing with missing or incomplete data , leading to so - called partial identification of probabilities , see for instance .there is also a growing literature on so - called _ credal nets _ : these are essentially bayesian nets with imprecise conditional probabilities .we are convinced that it is mainly the mathematical and computational complexity often associated with imprecise probability models that is keeping them from becoming a more widely used tool for modelling uncertainty .but we believe that the results reported here can help make inroads in reducing this complexity .indeed , the upshot of our being able to connect walley s approach with shafer and vovk s , is twofold .first of all , we can develop a theory of _ imprecise probability trees _ : probability trees where the transition from a node to its children is described by an imprecise probability model in walley s sense .our results provide the necessary apparatus for making inferences in such trees . andbecause probability trees are so closely related to random processes , this effectively brings us into a position to start developing a theory of ( event - driven ) random processes where the uncertainty can be described using imprecise probability models .we illustrate this in examples [ ex : coins ] and [ ex : many - coins ] , and in section [ sec : backwards - recursion ] . secondly , we are able to prove so - called marginal extension results ( theorems [ theo : natex ] and [ theo : concatenation ] , proposition [ prop : local - models ] ) , which lead to backwards recursion , and dynamic programming - like methods that allow for an exponential reduction in the computational complexity of making inferences in such imprecise probability trees .this is also illustrated in examples [ ex : many - coins ] and section [ sec : backwards - recursion ] . for ( precise ) probability trees ,similar techniques were described in shafer s book on causal reasoning .they seem to go back to christiaan huygens , who drew the first probability tree , and showed how to reason with it , in his solution to pascal and fermat s problem of points . for more details and precise references . ]in their game - theoretic approach to probability , shafer and vovk consider a game with two players , reality and sceptic , who play according to a certain _ protocol_. they obtain the most interesting results for what they call _ coherent probability protocols_. this section is devoted to explaining what this means .we begin with a first and basic assumption , dealing with how the first player , reality , plays . 
1. reality makes a number of moves, where the possible next moves may depend on the previous moves he has made, but do not in any way depend on the previous moves made by sceptic.

this means that we can represent his game-play by an event tree (see also for more information about event trees). we restrict ourselves here to the discussion of _ bounded protocols _, where reality makes only a finite and bounded number of moves from the beginning to the end of the game, whatever happens. but we do not exclude the possibility that at some point in the tree, reality has the choice between an infinite number of next moves. we shall come back to these assumptions further on, once we have the appropriate notational tools to make them more explicit.

(figure [fig:tree-one]: an event tree with terminal and non-terminal situations; dashed lines indicate cuts of a situation.)

let us establish some terminology related to reality's event tree. a _ path _ in the tree represents a possible sequence of moves for reality from the beginning to the end of the game. we denote the set of all possible paths by , the _ sample space _ of the game. a _ situation _ is some connected segment of a path that is _ initial _, i.e., starts at the root of the tree. it identifies the moves reality has made up to a certain point, and it can be identified with a node in the tree. we denote the set of all situations by . it includes the set of _ terminal _ situations, which can be identified with paths. all other situations are called _ non-terminal _; among them is the _ initial _ situation , which represents the empty initial segment. see fig. [fig:tree-one] for a simple graphical example explaining these notions. if for two situations and , is a(n initial) segment of , then we say that _ precedes _ or that _ follows _, and write , or alternatively . if is a path and then we say that the path _ goes through _ situation . we write , and say that _ strictly precedes _ , if and . an _ event _ is a set of paths, or in other words, a subset of the sample space: . with an event , we can associate its _ indicator _ , which is the real-valued map on that assumes the value on , and elsewhere. we denote by the set of all paths that go through : is the event that corresponds to reality getting to a situation . it is clear that not all events will be of the type . shafer calls events of this type _ exact _. further on, in section [sec:connections], exact events will be the only events that can legitimately be conditioned on, because they are the only events that can be foreseen to occur as part of reality's game-play. call a _ cut _ of a situation any set of situations that follow , and such that for all paths through , there is a unique that goes through .
in other words: a. ; and b. ; see also fig. [fig:tree-one]. alternatively, a set of situations is a cut of if and only if the corresponding set of exact events is a partition of the exact event . a cut can be interpreted as a (complete) stopping time. if a situation precedes (follows) some element of a cut of , then we say that _ precedes _ (_ follows _) , and we write ( ). similarly for `strictly precedes (follows)'. for two cuts and of , we say that _ precedes _ if each element of is followed by some element of . a _ child _ of a non-terminal situation is a situation that immediately follows it. the set of children of constitutes a cut of , called its _ children cut _. also, the set of terminal situations is a cut of , called its _ terminal cut _. the event is the corresponding terminal cut of a situation . we call a _ move _ for reality in a non-terminal situation an arc that connects with one of its children , meaning that is the concatenation of the segment and the arc . see fig. [fig:moves].

(figure [fig:moves]: a non-terminal situation , its children, and the moves (arcs) connecting it to them.)

reality's _ move space _ in is the set of those moves that reality can make in : . we have already mentioned that may be (countably or uncountably) infinite: there may be situations where reality has the choice between an infinity of next moves. but every should contain at least two elements: otherwise there is no choice for reality to make in situation . we now have all the necessary tools to represent reality's game-play. this game-play can be seen as a basis for an _ event-driven _, rather than a time-driven, account of a theory of uncertain, or random, processes. the driving events are, of course, the moves that reality makes. see shafer (* ? ? ? * chapter 1) for terminology and more explanation.] in a theory of processes, we generally consider things that depend on (the succession of) these moves. this leads to the following definitions. any (partial) function on the set of situations is called a _ process _, and any process whose domain includes all situations that follow a situation is called a _ -process _. of course, a -process is also an -process for all ; when we call it an -process, this means that we are restricting our attention to its values in all situations that follow . a special example of a -process is the _ distance _ , which for any situation returns the number of steps along the tree from to . when we said before that we are only considering _ bounded protocols _, we meant that there is a natural number such that for all situations and all . similarly, any (partial) function on the set of paths is called a _ variable _, and any variable on whose domain includes all paths that go through a situation is called a _ -variable _.
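the definitions above translate almost verbatim into a small data structure. the sketch below (illustrative only, for a hypothetical bounded protocol with two binary moves; all names are ours, not the paper's) represents situations as tuples of reality's moves, builds the event tree from a children() map, and shows a process (a function on situations), a cut acting as a stopping time, and a process stopped at that cut.

# situations are tuples of reality's moves; the initial situation is ().
def children(s):
    """move space of a hypothetical bounded protocol: two binary moves."""
    return [s + (m,) for m in ("a", "b")] if len(s) < 2 else []

def paths(s=()):
    """all terminal situations (paths) that go through situation s."""
    kids = children(s)
    return [s] if not kids else [w for c in kids for w in paths(c)]

def precedes(s, t):
    return t[:len(s)] == s            # s is an initial segment of t

def is_cut(u, s=()):
    """u is a cut of s: every path through s goes through exactly one element of u."""
    return all(sum(precedes(t, w) for t in u) == 1 for w in paths(s))

# a process: the number of 'a' moves made so far (a function on situations)
count_a = lambda s: s.count("a")

# the corresponding variable: the process restricted to the terminal situations
omega = paths()
print({w: count_a(w) for w in omega})

# a cut of the initial situation: "stop after the first move"
u = [("a",), ("b",)]
print(is_cut(u))                      # -> True

# stopping the process at the cut u: its value is frozen at the cut element of each path
def stopped(process, u, w):
    t = next(t for t in u if precedes(t, w))
    return process(t)
print({w: stopped(count_a, u, w) for w in omega})

restricting a process to the terminal situations, as in the second print statement, is exactly the passage from processes to variables that the text develops next.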
if we restrict a -process to the set of all terminal situations that follow , we obtain a -variable , which we denote by .if is a cut of , then we call a -variable _ -measurable _ if for all in , assumes the same value for all paths that go through . in that casewe can also consider as a variable on , which we denote as .if is a -process , then with any cut of we can associate a -variable , which assumes the same value in all that follow .this -variable is clearly -measurable , and can be considered as a variable on .this notation is consistent with the notation introduced earlier .similarly , we can associate with a new , _-stopped _ , -process , as follows : the -variable is -measurable , and is actually equal to : the following intuitive example will clarify these notions .[ ex : coins ] consider flipping two coins , one after the other .this leads to the event tree depicted in fig .[ fig : coins ] .the identifying labels for the situations should be intuitively clear : e.g. , in the initial situation ` ' none of the coins have been flipped , in the non - terminal situation ` ' the first coin has landed ` heads ' and the second coin hasnt been flipped yet , and in the terminal situation ` ' both coins have been flipped and have landed ` tails ' .= [ rectangle , rounded corners , draw = black!50,fill = black!20 ] = [ rectangle , rounded corners , draw = black!50,fill = black!20 ] = [ rectangle , rounded corners , draw = black!50,fill = black!20 ] = [ sibling distance=20 mm ] = [ sibling distance=10 mm ] = [ semithick , dashed , draw = black ] = [ semithick , dotted , draw = black ] ( root ) [ grow = right , inner sep=.7 mm ] child node[first ] ( t ) child node[second ] ( tt ) child node[second ] ( th ) child node[first ] ( h ) child node[second ] ( ht ) child node[second ] ( hh ) ; ( in ) [ above left of = h ] ; ( uit ) [ below right of = tt ] ; ( h ) + ( 0,1 ) ; ( h ) + ( 0,-1.5 ) node[left ] ; ( in.east ) to [ bend left ] ( h.north ) ; ( h.south ) .. controls + ( down:.5 ) and + ( up:.5 ) .. ( th.north ) ; ( th ) ( tt ) + ( 0,-1 ) node[left ] ; first , consider the real process , which in each situation , returns the number of heads obtained so far , e.g. , and .if we restrict the process to the set of all terminal elements , we get a real variable , whose values are : , and .consider the cut of the initial situation , which corresponds to the following stopping time : `` stop after two flips , or as soon as an outcome is heads '' ; see fig .[ fig : coins ] .the values of the corresponding variable are given by : , and .so is -measurable , and can therefore be considered as a map on the elements and and of , with in particular .next , consider the processes , defined as follows : [ cols="^,^,^,^,^,^,^,^ " , ] returns the outcome of the latest , the outcome of the first , and that of the second coin flip .the associated variables and give , in each element of the sample space , the respective outcomes of the first and second coin flips . the variable is -measurable : as soon as we reach ( any situation on ) the cut , its value is completely determined , i.e. 
, we know the outcome of the first coin flip ; see fig .[ fig : coins ] for the definition of .we can associate with the process the variable that is also -measurable : it returns , in any element of the sample space , the outcome of the first coin flip .alternatively , we can stop the process after one coin flip , which leads to the -stopped process .this new process is of course equal to , and for the corresponding variable , we have that ; also see eq . . we now turn to the other player , sceptic .his possible moves may well depend on the previous moves that reality has made , in the following sense . in each non - terminalsituation , he has some set of moves available to him , called sceptic s _ move space _ in .we make the following assumption : 1 . in each non - terminalsituation , there is a ( positive or negative ) gain for sceptic associated with each of the possible moves in that sceptic can make .this gain depends only on the situation and the next move that reality will make .this means that for each non - terminal situation there is a _gain function _ , such that represents the change in sceptic s capital in situation when he makes move and reality makes move .let us introduce some further notions and terminology related to sceptic s game - play .a _ strategy _ for sceptic is a partial process defined on the set of non - terminal situations , such that is the corresponding move that sceptic will make in each non - terminal situation . with each such strategy there corresponds a _ capital process _ , whose value in each situation gives us sceptic s capital accumulated so far , when he starts out with zero capital in and plays according to the strategy .it is given by the recursion relation with initial condition .of course , when sceptic starts out ( in ) with capital and uses strategy , his corresponding accumulated capital is given by the process .in the terminal situations , his accumulated capital is then given by the real variable .if we start in a non - terminal situation , rather than in , then we can consider -strategies that tell sceptic how to move starting from onwards , and the corresponding capital process is then also a -process , that tells us how much capital sceptic has accumulated since starting with zero capital in situation and using -strategy .the assumptions g1 and g2 outlined above determine so - called _ gambling protocols_. they are sufficient for us to be able to define lower and upper prices for real variables .consider a non - terminal situation and a real -variable .the _ upper price for in _ is defined as the infimum capital that sceptic has to start out with in in order that there would be some -strategy such that his accumulated capital allows him , at the end of the game , to hedge , whatever moves reality makes after : where is taken to mean that for all terminal situations that go through .similarly , for the _ lower price for in _ : . if we start from the initial situation , we simply get the _ upper and lower prices _ for a real variable , which we also denote by and . requirements g1 and g2 for gambling protocols allow the moves , move spaces and gain functions for sceptic to be just about anything .we now impose further conditions on sceptic s move spaces .a gambling protocol is called a _ probability protocol _ when besides g1 and g2 , two more requirements are satisfied . 1 . for each non - terminal situation ,sceptic s move space is a convex cone in some linear space : for all non - negative real numbers and and all and in .2 . 
for each non - terminal situation ,sceptic s gain function has the following linearity property : for all non - negative real numbers and , all and in and all in .finally , a probability protocol is called _ _ coherent _ _ when moreover : a. for each non - terminal situation , and for each in there is some in such that .it is clear what this last requirement means : in each non - terminal situation , reality has a strategy for playing from onwards such that sceptic ca nt ( strictly ) increase his capital from onwards , whatever -strategy he might use . for such coherent probability protocols , shafer and vovk prove a number of interesting properties for the corresponding lower ( and upper ) prices .we list a number of them here . for any real -variable , we can associate with a cut of another special -measurable -variable by , for all paths through , where is the unique situation in that goes through . for any two real -variables and , is taken to mean that for all paths that go through .[ prop : shafer - and - vovk ] consider a coherent probability protocol , let be a non - terminal situation , , and real -variables , and a cut of . then 1 . [ convexity ] ; 2 . [ super - additivity ] ; 3 . for all real [ non - negative homogeneity ] ; 4 . for all real [ constant additivity ] ; 5 . for all real [ normalisation ] ; 6 . implies that [ monotonicity ] ; 7 . [ law of iterated expectation ] .what is more , shafer and vovk use specific instances of such coherent probability protocols to prove various limit theorems ( such as the law of large numbers , the central limit theorem , the law of the iterated logarithm ) , from which they can derive , as special cases , the well - known measure - theoretic versions .we shall come back to this in section [ sec : weak - law ] . the game - theoretic account of probability we have described so far , is very general . but it seems to pay little or no attention to _ beliefs _ that sceptic , or other , perhaps additional players in these games might entertain about how reality will move through its event tree .this might seem strange , because at least according to the personalist and epistemicist school , probability is all about beliefs . in order to find out how we can incorporate beliefs into the game - theoretic framework, we now turn to walley s imprecise probability models .in his book on the behavioural theory of imprecise probabilities , walley considers many different types of related uncertainty models . we shall restrict ourselves here to the most general and most powerful one , which also turns out to be the easiest to explain , namely coherent sets of really desirable gambles ; see also .consider a non - empty set of possible alternatives , only one of which actually obtains ( or will obtain ) ; we assume that it is possible , at least in principle , to determine which alternative does so . 
also consider a subject who is uncertain about which possible alternative actually obtains ( or will obtain ) .a _ gamble _ on is a real - valued map on , and it is interpreted as an uncertain reward , expressed in units of some predetermined linear utility scale : if actually obtains , then the reward is , which may be positive or negative .we use the notation for the set of all gambles on .walley assumes gambles to be bounded .we make no such boundedness assumption here .if a subject _ accepts _ a gamble , this is taken to mean that she is willing to engage in the transaction where , ( i ) first it is determined which obtains , and ( ii ) then she receives the reward .we can try and model the subject s beliefs about by considering which gambles she accepts .suppose our subject specifies some set of gambles she accepts , called a _ set of really desirable gambles_. such a set is called _ coherent _ if it satisfies the following _ rationality requirements _ :if then [ avoiding partial loss ] ; 2 . if then [ accepting partial gain ]if and belong to then their ( point - wise ) sum also belongs to [ combination ] ; 4 .if belongs to then its ( point - wise ) scalar product also belongs to for all non - negative real numbers [ scaling ] . here` ' means ` and not ' .walley has also argued that , besides d1d4 , sets of really desirable gambles should satisfy an additional axiom : 1 . is -conglomerable for any partition of : if for all , then also [ full conglomerability ] . when the set is finite , all its partitions are finite too , and therefore full conglomerability becomes a direct consequence of the finitary combination axiom d3 .but when is infinite , its partitions may be infinite too , and then full conglomerability is a very strong additional requirement , that is not without controversy .if a model is -conglomerable , this means that certain inconsistency problems when conditioning on elements of are avoided ; see for more details and examples .conglomerability of belief models was nt required by forerunners of walley , such as williams , and [ theo : matching ] . ] or de finetti . while we agree with walley that conglomerability is a desirable property for sets of really desirable gambles, we do not believe that _ full _ conglomerability is always necessary : it seems that we only need to require conglomerability with respect to those partitions that we actually intend to condition our model on .this is the path we shall follow in section [ sec : connections ] .given a coherent set of really desirable gambles , we can define conditional lower and upper previsions as follows : for any gamble and any non - empty subset of , with indicator , so , and _ the lower prevision of , conditional on _ is the supremum price for which the subject will buy the gamble , i.e. , accept the gamble , contingent on the occurrence of .similarly , _the upper prevision of , conditional on _ is the infimum price for which the subject will sell the gamble , i.e. , accept the gamble , contingent on the occurrence of . for any event , we define the conditional lower probability , i.e. , the subject s supremum rate for betting on the event , contingent on the occurrence of , and similarly for .we want to stress here that by its definition [ eq . ] , is a conditional lower prevision on what walley ( * ? ? 
?* section 6.1 ) has called the _ contingent interpretation _ : it is a supremum acceptable price for buying the gamble _ contingent _ on the occurrence of , meaning that the subject accepts the contingent gambles , , which are called off unless occurs .this should be contrasted with the _ updating interpretation _ for the conditional lower prevision , which is a subject s _ present _ ( before the occurrence of ) supremum acceptable price for buying after receiving the information that has occurred ( and nothing else ! ) .walley s _ updating principle _* section 6.1.6 ) , which we shall accept , and use further on in section [ sec : connections ] , ( essentially ) states that conditional lower previsions should be the same on both interpretations .there is also a third way of looking at a conditional lower prevision , which we shall call the _ dynamic interpretation _ , and where stands for the subject s supremum acceptable buying price for _ after she gets to know_ has occurred . for precise conditional previsions , this last interpretation seems to be the one considered in .it is far from obvious that there should be a relation between the first two and the third interpretations .justifies peter walley s updating principle '' . ]we shall briefly come back to this distinction in the following sections .for any partition of , we let be the gamble on that in any element of assumes the value , where is any element of .the following properties of conditional lower and upper previsions associated with a coherent set of really desirable bounded gambles were ( essentially ) proved by walley , and by williams .we give the extension to potentially unbounded gambles : [ prop : walley ] consider a coherent set of really desirable gambles , let be any non - empty subset of , and let , and be gambles on .then , we implicitly assume that whatever we write down is well - defined , meaning that for instance no sums of and appear , and that the function is real - valued , and nowhere infinite .shafer and vovk do nt seem to mention the need for this.[fn : well - defined ] ] 1 . [ convexity ] ; 2 . [ super - additivity ] ; 3 . for all real [ non - negative homogeneity ] ; 4 . for all real [ constant additivity ] ; 5 . for all real [ normalisation ] ; 6 . implies that [ monotonicity ] ; 7 . if is any partition of that refines the partition and is -conglomerable , then [ conglomerative property ] . the analogy between propositions [ prop : shafer - and - vovk ] and [ prop : walley ] is striking , even if there is an equality in proposition [ prop : shafer - and - vovk].7 , where we have only an inequality in proposition [ prop : walley].7 . in the next section , we set out to identify the exact correspondence between the two models .we shall find a specific situation where applying walley s theory leads to equalities rather than the more general inequalities of proposition [ prop : walley].7 .we now show that there can indeed be a strict inequality in proposition [ prop : walley].7 .consider an urn with red , green and blue balls , from which a ball will be drawn at random .our subject is uncertain about the colour of this ball , so .assume that she assesses that she is willing to bet on this colour being red at rates up to ( and including ) , i.e. 
, that she accepts the gamble .similarly for the other two colours , so she also accepts the gambles and .it is not difficult to prove using the coherence requirements d1d4 and eq .that the smallest coherent set of really desirable gambles that includes these assessments satisfies , where for the partition ( a daltonist has observed the colour of the ball and tells the subject about it ) , it follows from eq .after some manipulations that if we consider , then in particular and , so and therefore whereas , and therefore . the difference between infimum selling and supremum buying prices for gambles represents imprecision present in our subject s belief model .if we look at the inequalities in proposition [ prop : walley].1 , we are led to consider two extreme cases .one extreme maximises the ` degrees of imprecision ' by letting and .this leads to the so - called _ vacuous model _ , corresponding to , and intended to represent complete ignorance on the subject s part .the other extreme minimises the degrees of imprecision by letting everywhere .the common value is then called the _ prevision _ , or _ fair price _ , for on .we call the corresponding functional a ( conditional ) _ linear prevision_. linear previsions are the precise probability models considered by de finetti .they of course have all properties of lower and upper previsions listed in proposition [ prop : walley ] , with equality rather than inequality for statements 2 and 7 .the restriction of a linear prevision to ( indicators of ) events is a finitely additive probability measure .in order to lay bare the connections between the game - theoretic and the behavioural approach , we enter shafer and vovk s world , and consider another player , called forecaster , who , _ in situation _ , has certain _ piece - wise _ beliefs about what moves reality will make . more specifically , for each non - terminal situation , she has beliefs ( in situation ) about which move reality will choose from the set of moves available to him if he gets to .we suppose she represents those beliefs in the form of a _ _ coherent _ _ , we impose no extra conglomerability requirements here , only the coherence conditions d1d4 . ]set of really desirable gambles on .these beliefs are conditional on the updating interpretation , in the sense that they represent forecaster s beliefs in situation about what reality will do _ immediately after he gets to situation . we call any specification of such coherent , , an _ immediate prediction model _ for forecaster .we want to stress here that should _ not _ be interpreted dynamically , i.e. , as a set of gambles on that forecaster accepts in situation .we shall generally call an event tree , provided with local predictive belief models in each of the non - terminal situations , an _ imprecise probability tree_. these local belief models may be coherent sets of really desirable gambles .but they can also be lower previsions ( perhaps derived from such sets ) .when all such local belief models are precise previsions , or equivalently ( finitely additive ) probability measures , we simply get a _ probability tree _ in shafer s ( * ? ? ?* chapter 3 ) sense .we can now ask ourselves what the behavioural implications of these conditional assessments in the immediate prediction model are . 
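before turning to those global implications, it may help to compute one local model of this kind explicitly. the sketch below is ours, not the authors': the betting rate 0.25 is a hypothetical stand-in for the stripped numbers in the urn example above, and the credal set of compatible mass functions is discretised, so the conditional lower previsions it prints are (close approximations to) the values that the set of really desirable gambles assigns through eq. ( ), since every conditioning event here has positive lower probability.

from itertools import product

# hypothetical betting rates: the subject bets on each colour at rates up to 0.25,
# i.e. she accepts the gambles I_colour - 0.25 (the paper's numbers were lost).
RATE = 0.25
OUTCOMES = ("r", "g", "b")

def credal_set(step=0.01):
    """mass functions compatible with the accepted gambles: p(colour) >= RATE
    for every colour (a discretised approximation of the credal set)."""
    grid = [round(k * step, 10) for k in range(int(1 / step) + 1)]
    for pr, pg in product(grid, grid):
        pb = 1.0 - pr - pg
        if min(pr, pg, pb) >= RATE - 1e-9:
            yield {"r": pr, "g": pg, "b": pb}

def lower_prevision(f, event=OUTCOMES):
    """conditional lower prevision of gamble f given the event, computed as the
    minimum conditional expectation over the (discretised) credal set."""
    values = []
    for p in credal_set():
        pe = sum(p[x] for x in event)
        if pe > 0:
            values.append(sum(p[x] * f[x] for x in event) / pe)
    return min(values)

f = {"r": 1.0, "g": 0.0, "b": -1.0}
print("lower prevision of f:", round(lower_prevision(f), 3))
print("lower prevision of f given {r, b}:", round(lower_prevision(f, ("r", "b")), 3))
print("lower probability of red:", round(lower_prevision({"r": 1, "g": 0, "b": 0}), 3))

each non-terminal situation of an imprecise probability tree carries a local model of exactly this kind, and the question raised above is what such local, conditional assessments commit forecaster to globally.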
for instance , what do they tell us about whether or not forecaster should accept certain gambles on , the set of possible paths for reality ?in other words , how can these beliefs ( in ) about which next move reality will make in each non - terminal situation be combined coherently into beliefs ( in ) about reality s complete sequence of moves ? in order to investigate this , we use walley s very general and powerful method of _ natural extension _ , which is just _ conservative coherent reasoning_. we shall construct , using the local pieces of information , a set of really desirable gambles on for forecaster in situation that is ( i ) coherent , and ( ii ) as small as possible , meaning that no more gambles should be accepted than is actually required by coherence . consider any non - terminal situation and any gamble in .with we can associate a -gamble ,- gamble as a partial gamble whose domain includes . ]also denoted by , and defined by for all , where we denote by the unique element of such that .the -gamble is -measurable for any cut of that is non - trivial , i.e. , such that .this implies that we can interpret as a map on .in fact , we shall even go further , and associate with the gamble on a -process , also denoted by , by letting for any , where is any terminal situation that follows ; see also fig . [fig : local ] .= [ circle , draw = black!70,fill = black!70 ] = [ rectangle , draw = black!60 ] = [ circle , draw = black!60,fill = black!30,minimum size=.5 mm ] = [ sibling distance=15 mm ] = [ sibling distance=15 mm ] = [ sibling distance=15 mm ] = [ snake = snake , segment amplitude=.2mm , segment length=1 mm , line after snake=1 mm ] ( root ) [ grow = right , inner sep=.7 mm ] child node[nonterminal , label = below: ( t ) child node[nonterminal ] ( tw2 ) child node[terminal ] ( fin1 ) child node[terminal ] ( fin2 ) edge from parent node[below ] child node[terminal ] ( tw1 ) edge from parent node[above ] child node[terminal ] ; ( t ) + ( -2,-1 ) node[below ] ; ( fin1 ) + ( 1,0 ) node[right ] ; ( fin2 ) + ( 1,0 ) node[right ] ; ( tw1 ) + ( 1,0 ) node[right ] ; ( tw2 ) + ( 0,-1 ) node[below ] ; represents the gamble on that is called off unless reality ends up in situation , and which , when it is nt called off , depends only on reality s move immediately after , and gives the same value to all paths that go through . the fact that forecaster , in situation , accepts on conditional on reality s getting to , translates immediately to the fact that forecaster accepts the contingent gamble on , by walley s updating principle .we thus end up with a set of gambles on that forecaster accepts in situation .the only thing left to do now , is to find the smallest coherent set of really desirable gambles that includes ( if indeed there is any such coherent set ) . herewe take coherence to refer to conditions d1d4 , together with d5 , a variation on d5 which refers to conglomerability with respect to those partitions that we actually intend to condition on , as suggested in section [ sec : walley ] .these partitions are what we call _ cut partitions_. consider any cut of the initial situation .the set of events is a partition of , called the _-partition_. d5 requires that our set of really desirable gambles should be _ cut conglomerable _ ,i.e. 
, conglomerable with respect to every cut partition .are finite , cut conglomerability ( d5 ) is a consequence of d3 , and therefore needs no extra attention .but when some or all move spaces are infinite , then a cut may contain an infinite number of elements , and the corresponding cut partition will then be infinite too , making cut conglomerability a non - trivial additional requirement . ]why do we only require conglomerability for cut partitions ? simply because we are interested in _ predictive inference _ :we eventually will want to find out about the gambles on that forecaster accepts in situation , conditional ( contingent ) on reality getting to a situation .this is related to finding lower previsions for forecaster conditional on the corresponding events .a collection of such events constitutes a partition of the sample space if and only if is a cut of .because we require cut conglomerability , it follows in particular that will contain the sums of gambles for all _ non - terminal _cuts of and all choices of , .this is because for all .because moreover should be a convex cone [ by d3 and d4 ] , any sum of such sums over a finite number of non - terminal cuts should also belong to .but , since in the case of bounded protocols we are discussing here , reality can only make a bounded and finite number of moves , is a finite union of such non - terminal cuts , and therefore the sums should belong to for all choices , .consider any non - terminal situation , and call _ -selection _ any partial process defined on the non - terminal such that . with a -selection ,we associate a -process , called a _ gamble process _ , where in all situations ; see also fig .[ fig : gamble - processes ] .alternatively , is given by the recursion relation for all non - terminal , with initial value .in particular , this leads to the -gamble defined on all terminal situations that follow , by letting then we have just argued that the gambles should belong to for all non - terminal situations and all -selections . as before for strategy and capital processes , we call a -selection simply a _ selection _ , and a -gamble process simply a _ gamble process_. = [ circle , draw = black!70,fill = black!70 ] = [ rectangle , draw = black!60 ] = [ circle , draw = black!60,fill = black!30,minimum size=.5 mm ] = [ sibling distance=12 mm ] = [ sibling distance=12 mm ] = [ sibling distance=12 mm ] = [ snake = snake , segment amplitude=.2mm , segment length=1 mm , line after snake=1 mm ] ( root ) [ grow = right , inner sep=.7 mm ] child node[nonterminal , label = below: ( t ) child node[nonterminal , label = above: ] ( s ) child node[terminal ] ( fin1 ) edge from parent node[above ] child node[terminal ] ( fin2 ) edge from parent node[above ] edge from parent node[below ] child node[terminal ] ( tw1 ) edge from parent node[below ] child node[terminal ] ; ( t ) + ( -2,-1 ) node[below ] ; ( s ) + ( -2,-1 ) node[below ] ; ( fin1 ) + ( .5,0 ) node[right ] ; ( fin2 ) +( .5,0 ) node[right ] ; ( tw1 ) +( .5,0 ) node[right ] ; ( s ) + ( 1,-1 )node[below ] ; ( t ) +( .5,.5 ) node[above right ] ; it is now but a technical step to prove theorem [ theo : natex ] below .it is a significant generalisation , in terms of sets of really desirable gambles rather than coherent lower previsions , for expressions in terms of predictive lower previsions that should make the connection much clearer .] of the _ marginal extension theorem _ first proved by walley ( * ? 
?* theorem 6.7.2 ) , and subsequently extended by de cooman and miranda .[ theo : natex ] there is a smallest set of gambles that satisfies d1d4 and d5 and includes .this natural extension of is given by moreover , for any non - terminal situation and any -gamble , it holds that if and only if there is some -selection such that , where as before , is taken to mean that for all terminal situations that follow .we now use the coherent set of really desirable gambles to define special lower previsions for forecaster in situation , conditional on an event , i.e. , on reality getting to situation , as explained in section [ sec : walley ]. we shall call such conditional lower previsions _ predictive _ lower previsions .we then get , using eq . and theorem [ theo : natex ] , that for any non - terminal situation , we also use the notation .it should be stressed that eq .is also valid in terminal situations , whereas eq .clearly is nt . besides the properties in proposition [ prop : walley ] , which hold in general for conditional lower and upper previsions ,the predictive lower ( and upper ) previsions we consider here also satisfy a number of additional properties , listed in propositions [ prop : prevision - properties - general ] and [ prop : separate - coherence ] .[ prop : prevision - properties - general ] let be any situation , and let , and be gambles on . 1 .if is a terminal situation , then ; 2 . and ; 3 . ( on ) implies that [ monotonicity ] .before we go on , there is an important point that must be stressed and clarified .it is an immediate consequence of proposition [ prop : prevision - properties - general].2 that when and are any two gambles that coincide on , then .this means that is completely determined by the values that assumes on , and it allows us to define on gambles that are only necessarily defined on , i.e. , on -gambles .we shall do so freely in what follows . for any cut of a situation , we may define the -gamble as the gamble that assumes the value in any , where .this -gamble is -measurable by construction , and it can be considered as a gamble on .[ prop : separate - coherence ] let be any situation , let be any cut of , and let and be -gambles , where is -measurable . 1 . ; 2 . ; 3 . ; 4 .if is moreover non - negative , then .there appears to be a close correspondence between the expressions [ such as ] for lower prices associated with coherent probability protocols and those [ such as ] for the predictive lower previsions based on an immediate prediction model .say that a given coherent probability protocol and given immediate prediction model _ match_ whenever they lead to identical corresponding lower prices and predictive lower previsions for all _ non - terminal _ .the following theorem marks the culmination of our search for the correspondence between walley s , and shafer and vovk s approaches to probability theory .[ theo : matching ] for every coherent probability protocol there is an immediate prediction model such that the two match , and conversely , for every immediate prediction model there is a coherent probability protocol such that the two match .the ideas underlying the proof of this theorem should be clear . if we have a coherent probability protocol with move spaces and gain functions for sceptic , define the immediate prediction model for forecaster to be ( essentially ) . 
if , conversely , we have an immediate prediction model for forecaster consisting of the sets , define the move spaces for sceptic by , and his gain functions by for all in .we discuss the interpretation of this correspondence in more detail in section [ sec : interpretation ] .the marginal extension theorem allows us to calculate the most conservative global belief model that corresponds to the local immediate prediction models . herebeliefs are expressed in terms of sets of really desirable gambles .can we derive a result that allows us to do something similar for the corresponding lower previsions ? to see what this question entails , first consider a local model : a set of really desirable gambles on , where .using eq . , we can associate with a lower prevision on .each gamble on can be seen as an uncertain reward , whose outcome depends on the ( unknown ) move that reality will make if it gets to situation .and forecaster s _ local _ ( predictive ) lower prevision for is her supremum acceptable price ( in ) for buying when reality gets to .but as we have seen in section [ sec : predictive - lower - upper ] , we can also , in each situation , derive _ global _ predictive lower previsions for forecaster from the global model , using eq . .for each -gamble , is forecaster inferred supremum acceptable price ( in ) for buying , contingent on reality getting to .is there a way to construct the global predictive lower previsions directly from the local predictive lower previsions ?we can infer that there is from the following theorem , together with propositions [ prop : cut - reduction ] and [ prop : local - models ] below .[ theo : concatenation ] consider any two cuts and of a situation such that precedes .for all -gambles on , is a real number for all , making sure that is indeed a gamble . ] 1 . ; 2 . . to make clear what the following proposition [ prop : cut - reduction ] implies , consider any -selection , and define the _-called off -selection _ as the selection that mimics until we get to , where we begin to select the zero gambles : for any non - terminal situation , let if strictly precedes ( some element of ) , and let otherwise . if we stop the gamble process at the cut , we readily infer from eq .that for the -stopped process we see that stopped gamble processes are gamble processes themselves , that correspond to selections being ` called off ' as soon as reality reaches a cut .this also means that we can actually restrict ourselves to selections that are -called off in proposition [ prop : cut - reduction ] .[ prop : cut - reduction ] let be a non - terminal situation , and let be a cut of . then for any -measurable -gamble , if and only is there is some -selection such that , or equivalently , .consequently , if a -gamble is measurable with respect to the children cut of a non - terminal situation , then we can interpret it as gamble on .for such gambles , the following immediate corollary of proposition [ prop : cut - reduction ] tells us that the predictive lower previsions are completely determined by the local modal .[ prop : local - models ] let be a non - terminal situation , and consider a -measurable gamble . 
then . these results tell us that all predictive lower ( and upper ) previsions can be calculated using backwards recursion , by starting with the trivial predictive previsions for the terminal cut , and using only the local models . this is illustrated in the following simple example . we shall come back to this idea in section [ sec : backwards - recursion ] . [ ex : many - coins ] suppose we have coins . we begin by flipping the first coin : if we get tails , we stop , and otherwise we flip the second coin . again , we stop if we get tails , and otherwise we flip the third coin . in other words , we continue flipping new coins until we get one tails , or until all coins have been flipped . this leads to the event tree depicted in fig . [ fig : many - coins ] .

[ figure ( fig : many - coins ) : the event tree for the successive coin flips ; each non-terminal situation has a terminal ' tails ' child and a ' heads ' child leading to the next flip , until all coins have been flipped . ]

its sample space is . we will also consider the cuts of , of , of , , and of . it will be convenient to also introduce the notation for the initial situation . for each of the non-terminal situations , , forecaster has beliefs ( in ) about what move reality will make in that situation , i.e. , about the outcome of the -th coin flip . these beliefs are expressed in terms of a set of really desirable gambles on reality's move space in . each such move space can clearly be identified with the children cut of . for the purpose of this example , it will be enough to consider the local predictive lower previsions on , associated with through eq . . forecaster assumes all coins to be approximately fair , in the sense that she assesses that the probability of heads for each flip lies between and , for some . this assessment leads to the following local predictive lower previsions [ see ( * ? ? ? * chapters 3 - 4 ) for more details ] : each local lower prevision takes the form ( \tfrac{1}{2}-\delta ) [ g({h}_{k+1}) + g({t}_{k+1}) ] + 2\delta\min\{ g({h}_{k+1}) , g({t}_{k+1}) \} , where is any gamble on . let us see how we can for instance calculate , from the local predictive models , the predictive lower probabilities for a gamble on and any situation in the tree . first of all , for the terminal situations it is clear from proposition [ prop : prevision - properties - general].1 that we now turn to the calculation of . it follows at once from proposition [ prop : local - models ] that , and therefore , substituting in eq . for , to calculate , consider that , since , where the first equality follows from theorem [ theo : concatenation ] , and the second from proposition [ prop : local - models ] , taking into account that is a gamble on the children cut of . it follows from eq . that and from eq . that .
substituting in eq . for , we then find that repeating this course of reasoning , we find that more generally this illustrates how we can use a backwards recursion procedure to calculate global from local predictive lower previsions .in shafer and vovk s approach , there sometimes also appears , besides reality and sceptic , a third player , called _forecaster_. her rle consists in determining what sceptic s move space and gain function are , in each non - terminal situation .shafer and vovk leave largely unspecified just how forecaster should do that , which makes their approach quite general and abstract .but the matching theorem now tells us that we can connect their approach with walley s , and therefore inject a notion of belief modelling into their game - theoretic framework .we can do that by being more specific about how forecaster should determine sceptic s move spaces and gain functions : they should be determined by forecaster s beliefs ( in ) about what reality will do immediately after getting to non - terminal situations ., is already present in shafer s work , see for instance ( * ? ? ?* chapter 8) and ( * ? ? ? * appendix 1 ) .we extend this idea here to walley s imprecise probability models .] let us explain this more carefully .suppose that forecaster has certain beliefs , _ in situation _ , about what move reality will make next in each non - terminal situation , and suppose she models those beliefs by specifying a coherent set of really desirable gambles on .this brings us to the situation described in the previous section .when forecaster specifies such a set , she is making certain behavioural commitments : she is committing herself to accepting , in situation , any gamble in , contingent on reality getting to situation , and to accepting any combination of such gambles according to the combination axioms d3 , d4 and d5. this implies that we can derive predictive lower previsions , with the following interpretation : in situation , is the supremum price forecaster can be made to buy the -gamble for , conditional on reality s getting to , and on the basis of the commitments she has made in the initial situation .what sceptic can now do , is take forecaster up on her commitments .this means that in situation , he can use a selection , which for each non - terminal situation , selects a gamble ( or equivalently , any non - negative linear combination of gambles ) in and offer the corresponding gamble on to forecaster , who is bound to accept it .if reality s next move in situation is , this changes sceptic s capital by ( the positive or negative amount ) . in other words , his move space can then be identified with the convex set of gambles and his gain function is then given by .but then the _ selection _ can be identified with a _ for sceptic , and ( this is the essence of the proof of theorem [ theo : matching ] ) , which tells us that we are led to a coherent probability protocol , and that the corresponding lower prices for sceptic coincide with forecaster s predictive lower previsions . in a very nice paper ,shafer , gillett and scherl discuss ways of introducing and interpreting lower previsions in a game - theoretic framework , not in terms of prices that a subject is willing to pay for a gamble , but in terms of whether a subject believes she can make a lot of money ( utility ) at those prices .they consider such conditional lower previsions both on a contingent and on a dynamic interpretation , and argue that there is equality between them in certain cases . 
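to make the backwards recursion of example [ ex : many - coins ] concrete , here is a small python sketch . it is an illustration of ours , not code from the paper : the local model is taken to be the linear-vacuous form written out in the example ( lower probability \tfrac{1}{2}-\delta for heads and for tails ) , and all function and variable names are invented for the purpose of the sketch .

```python
# A rough sketch (not code from the paper): backwards recursion for the
# n-coin example, assuming the local lower prevision
#   (1/2 - delta) * (g(H) + g(T)) + 2 * delta * min(g(H), g(T)),
# which is the linear-vacuous local model assumed in the example above.

def local_lower_prevision(g_heads, g_tails, delta):
    """Lower prevision of a gamble on one 'approximately fair' coin flip."""
    return (0.5 - delta) * (g_heads + g_tails) + 2 * delta * min(g_heads, g_tails)

def predictive_lower_prevision(f, n, delta):
    """
    Lower prevision, in the initial situation, of a gamble f on the sample
    space {'t', 'ht', 'hht', ..., 'h'*n}; f maps each terminal situation
    (a string of 'h's, possibly ending in 't') to a real number.
    """
    value = f('h' * n)                      # value if every flip lands heads
    for k in range(n, 0, -1):               # recurse backwards over the flips
        tails_outcome = 'h' * (k - 1) + 't' # the k-th flip lands tails
        value = local_lower_prevision(value, f(tails_outcome), delta)
    return value

if __name__ == '__main__':
    n, delta = 5, 0.05
    all_heads = lambda omega: 1.0 if omega == 'h' * n else 0.0
    print(predictive_lower_prevision(all_heads, n, delta))  # lower probability of 'all heads'
    print((0.5 - delta) ** n)                               # same number, in closed form
```

for the indicator of the event ' all coins land heads ' the recursion returns ( \tfrac{1}{2}-\delta )^n , which is what one obtains by applying the assumed local model once per flip . this closes our aside on computation ; we now return to questions of interpretation .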
here , we have decided to stick to the more usual interpretation of lower and upper previsions , and concentrated on the contingent / updating interpretation .we see that on our approach , the game - theoretic framework is useful too .this is of particular relevance to the laws of large numbers that shafer and vovk derive in their game - theoretic framework , because such laws can now be given a behavioural interpretation in terms of forecaster s predictive lower and upper previsions . to give an example, we now turn to deriving a very general weak law of large numbers .consider a non - terminal situation and a cut of .define the -variable such that is the distance , measured in moves along the tree , from to the unique situation in that goes through . is clearly -measurable , and is simply the distance from to .we assume that for all , or in other words that .of course , in the bounded protocols we are considering here , is bounded , and we denote its minimum by . now consider for each between and a _ bounded _gamble and a real number such that , meaning that forecaster in situation accepts to buy for , contingent on reality getting to situation .let be any common upper bound for , for all .it follows from the coherence of [ d1 ] that . to make things interesting, we shall also assume that , because otherwise and accepting this gamble represents no real commitment on forecaster s part . as a result , we see that . we are interested in the following -gamble , given by ,\ ] ] which provides a measure for how much , on average , the gambles yield an outcome above forecaster s accepted buying prices , along segments of the tree starting in and ending right before .in other words , measures the average gain for forecaster along segments from to , associated with commitments she has made and is taken up on , because reality has to move along these segments .this gamble is -measurable too .we may therefore interpret as a gamble on . also , for any and any , we know that because , has the same value in all that go through .this allows us to write .\ ] ] we would like to study forecaster s beliefs ( in the initial situation and contingent on reality getting to ) in the occurrence of the event where . in other words , we want to know , which is forecaster s supremum rate for betting on the event that his average gain from to will be at least , contingent on reality s getting to . [theo : largenum ] for all , we see that as increases this lower bound increases to one , so the theorem can be very loosely formulated as follows : _ as the horizon recedes , forecaster , if she is coherent , should believe increasingly more strongly that her average gain along any path from the present to the horizon wo nt be negative_. this is a very general version of the weak law of large numbers .it can be seen as a generalisation of hoeffding s inequality for martingale differences ( see also ( * ? ? ?* chapter 4 ) and ( * ? ? ?* appendix a.7 ) ) to coherent lower previsions on event trees .we now look at an interesting consequence of theorem [ theo : largenum ] : we shall see that it can be used to score a predictive model in a manner that satisfies dawid s _ prequential principle _ .we consider the special case of theorem [ theo : largenum ] where .suppose reality follows a path up to some situation in , which leads to an average gain for forecaster .suppose this average gain is negative : .we see that for all , and therefore all these events have actually occurred ( because has ) . 
on the other hand , forecaster's upper probability ( in ) for their occurrence satisfies , by theorem [ theo : largenum ] . coherence then tells us that forecaster's upper probability ( in ) for the event , which has actually occurred , is then at most , where observe that is a number in , by assumption . coherence requires that forecaster , because of her local predictive commitments , can be forced ( by sceptic , if he chooses his strategy well ) to bet against the occurrence of the event at a rate that is at least . so we see that forecaster is losing utility because of her local predictive commitments . just how much depends on how close lies to , and on how large is ; see fig . [ fig : scoring ] .

[ figure ( fig : scoring ) : the curves 1 - exp( - n x^2 / 4 ) plotted on the unit square for n = 5 , 10 , 100 and 500 . ]

the upper bound we have constructed for the upper probability of has a very interesting property , which we now try to make more explicit . indeed , if we were to calculate forecaster's upper probability for directly using eq . , this value would generally depend on forecaster's predictive assessments for situations that do not precede , and that reality therefore never got to . we shall see that such is not the case for the upper bound constructed using theorem [ theo : largenum ] . consider any situation before but not on the path through , meaning that reality never got to this situation . therefore the corresponding gamble in the expression for is not used in calculating the value of , so we can change it to anything else , and still obtain the same value of . indeed , consider any other predictive model , where the only thing we ask is that the coincide with the for all that precede . for other , the can be chosen arbitrarily , but still coherently . now construct a new average gain gamble for this alternative predictive model , where the only restriction is that we let and if precedes . we know from the reasoning above that , so the new upper probability that the event will be observed is at most . in other words , the upper bound we found for forecaster's upper probability of reality getting to a situation _ depends only on forecaster's local predictive assessments for situations that reality has actually got to , and not on her assessments for other situations _ . this means that this method for scoring a predictive model satisfies dawid's _ prequential principle _ ; see for instance . as we have discovered in section [ sec : concatenation ] , theorem [ theo : concatenation ] and proposition [ prop : local - models ] enable us to calculate the global predictive lower previsions in imprecise probability trees from local predictive lower previsions , , using a backwards recursion method . that this is possible in probability trees , where the probability models are precise ( previsions ) , is well - known , and [ sec : concatenation ] . for instance , theorem [ theo : concatenation ] generalises proposition 3.11 in to imprecise probability trees .
] and was arguably discovered by christiaan huygens in the middle of the 17th century . it allows for an exponential , dynamic-programming-like reduction in the complexity of calculating previsions ( or expectations ) ; it seems to be essentially this phenomenon that leads to the computational efficiency of such machine learning tools as , for instance , needleman and wunsch's sequence alignment algorithm . in this section , we want to give an illustration of such exponential reduction in complexity , by looking at a problem involving markov chains . assume that the state of a system at consecutive times can assume any value in a finite set . forecaster has some beliefs about the state at time , leading to a coherent lower prevision on . she also assesses that when the system jumps from state to a new state , where the system goes to will only depend on the state the system was in at time , and not on the states of the system at previous times . her beliefs about where the system in will go to at time are represented by a lower prevision on . the time evolution of this system can be modelled as reality traversing an event tree . an example of such a tree for and is given in fig . [ fig : markov ] . the situations of the tree have the form , ; for this gives some abuse of notation as we let . in each cut of , the value of the state at time is revealed .

[ figure ( fig : markov ) : an event tree for a system with two possible states observed at three consecutive times ; each cut reveals the value of the state at the corresponding time . ]

this leads to an imprecise probability tree with local predictive models and expressing the usual _ markov conditional independence condition _ , but here in terms of lower previsions . for notational convenience , we now introduce a ( generally non-linear ) _ transition operator _ on the linear space as follows : or in other words , is a gamble on whose value in the state is given by . the transition operator completely describes forecaster's beliefs about how the system changes its state from one instant to the next . we now want to find the corresponding model for forecaster's beliefs ( in ) about the state the system will be in at time . so let us consider a gamble on that actually only depends on the value of at this time . we then want to calculate its lower prevision . consider a time instant , and a situation . for the children cut of , we see that is a gamble that only depends on the value of in , and whose value in is given by . we then find that where the first equality follows from theorem [ theo : concatenation ] , and the second from proposition [ prop : local - models ] and eq
.we first apply eq . for . by proposition [ prop : separate - coherence].2 , , so we are led to , and therefore substituting this in eq . for , yields , and therefore proceeding in this fashion until we get to , we get , and going one step further to , eq . yields and therefore we see that the complexity of calculating in this way is essentially _ linear _ in the number of time steps . in the literature on imprecise probability models for markov chains ,another so - called _ credal set _ , or _set of probabilities _ , approach is generally used to calculate .the point we want to make here is that such an approach typically has a worse ( exponential ) complexity in the number of time steps . to see this , recall that a lower prevision on that is derived from a coherent set of really desirable gambles , corresponds to a convex closed set of probability mass functions on , called a _credal set _ , and given by where we let be the expectation of the gamble associated with the mass function ; is a linear prevision in the language of section [ sec : lower - upper ] .it then also holds that for all gambles on , where is the set of extreme points of the convex closed set . typically on this approach, is assumed to be finite , and then is called a _ finitely generated credal set_. see for instance for a discussion of credal sets with applications to bayesian networks .then can also be calculated as follows : choose for each non - terminal situation , a mass function in the set given by eq . , or equivalently , in its set of extreme points .this leads to a ( precise ) probability tree for which we can calculate the corresponding expectation of .then is the minimum of all such expectations , calculated for all possible assignments of mass functions to the nodes .we see that , roughly speaking , when all have a typical number of extreme points , then the complexity of calculating will be essentially , i.e. , exponential in the number of time steps .this shows that the ` lower prevision ' approach can for some problems lead to more efficient algorithms than the ` credal set ' approach .this may be especially relevant for probabilistic inferences involving graphical models , such as credal networks .another nice example of this phenomenon , concerned with checking coherence for precise and imprecise probability models , is due to walley _we have proved the correspondence between the two approaches only for event trees with a bounded horizon . 
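as an aside on the complexity point made above , and before turning to the infinite-horizon case discussed next , the linear-time recursion for the markov example can be sketched in a few lines of python . this is an illustration only : the two-state chain , the epsilon-contamination local models and all names below are our own choices , not something taken from the paper or the cited literature .

```python
# Sketch only: linear-time computation of the lower prevision of a gamble on
# the state at time n, for a two-state "imprecise Markov chain".  The local
# models are epsilon-contamination lower previsions, chosen purely for
# illustration; the paper leaves the local models abstract.

STATES = ('a', 'b')

def lin_vac(prevision, epsilon):
    """Return a lower prevision: (1 - eps) * linear prevision + eps * vacuous."""
    def lower(g):                                    # g is a dict state -> real
        expectation = sum(prevision[x] * g[x] for x in STATES)
        return (1 - epsilon) * expectation + epsilon * min(g.values())
    return lower

def transition_operator(local_models):
    """(T g)(x) = lower prevision of g given that the current state is x."""
    def T(g):
        return {x: local_models[x](g) for x in STATES}
    return T

def lower_prevision_at_time_n(g, n, initial_model, local_models):
    """Apply T a total of n - 1 times to g, then the initial model: O(n) work."""
    T = transition_operator(local_models)
    for _ in range(n - 1):
        g = T(g)
    return initial_model(g)

if __name__ == '__main__':
    eps = 0.1
    initial = lin_vac({'a': 0.5, 'b': 0.5}, eps)
    local = {'a': lin_vac({'a': 0.7, 'b': 0.3}, eps),
             'b': lin_vac({'a': 0.2, 'b': 0.8}, eps)}
    indicator_a = {'a': 1.0, 'b': 0.0}   # lower probability of being in state 'a'
    print(lower_prevision_at_time_n(indicator_a, 10, initial, local))
```

a credal-set implementation would instead minimise over every assignment of extreme points to the nodes of the tree , which is what makes it exponential in the number of time steps . with that said , we return to the limitations of the correspondence .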
for games with infinite horizon ,the correspondence becomes less immediate , because shafer and vovk implicitly make use of coherence axioms that are stronger than d1d4 and d5 , leading to lower prices that dominate the corresponding predictive lower previsions .exact matching would be restored of course , provided we could argue that these additional requirements are rational for any subject to comply with .this could be an interesting topic for further research .we havent paid much attention to the special case that the coherent lower previsions and their conjugate upper previsions coincide , and are therefore ( precise ) _ previsions _ or _ fair prices _ in de finetti s sense .when all the local predictive models ( see proposition [ prop : local - models ] ) happen to be precise , meaning that for all gambles on , then the immediate prediction model we have described in section [ sec : connections ] becomes very closely related , and arguably identical to , the probability trees introduced and studied by shafer in .indeed , we then get predictive previsions that can be obtained through concatenation of the local modals , as guaranteed by theorem [ theo : concatenation ] .moreover , as indicated in section [ sec : backwards - recursion ] , it is possible to prove lower envelope theorems to the effect that ( i ) the local lower previsions correspond to lower envelopes of sets of local previsions ; ( ii ) each possible choice of previsions in over all non - terminal situations , leads to a _compatible _ probability tree in shafer s sense , with corresponding predictive previsions ; and ( iii ) the predictive lower previsions are the lower envelopes of the predictive previsions for the compatible probability trees .of course , the law of large numbers of section [ sec : weak - law ] remains valid for probability trees .finally , we want to recall that theorem [ theo : concatenation ] and proposition [ prop : local - models ] allow for a calculation of the predictive models using only the local models and _ backwards recursion _ , in a manner that is strongly reminiscent of dynamic programming techniques .this should allow for a much more efficient computation of such predictive models than , say , an approach that exploits lower envelope theorems and sets of probabilities / previsions .we think that there may be lessons to be learnt from this for dealing with other types of graphical models , such as credal networks , as well .what makes this more efficient approach possible is , ultimately , the marginal extension theorem ( theorem [ theo : natex ] ) , which leads to the concatenation formula ( theorem [ theo : concatenation ] ) , i.e. , to the specific equality , rather than the general inequalities , in proposition [ prop : walley].7 . generally speaking ( see for instance ( * ? ? 
?* section 6.7 ) and ) , such marginal extension results can be proved because the models that forecaster specifies are _ local _ , or _ immediate _ prediction models : they relate to her beliefs , in each non - terminal situation , about what move reality is going to make _ immediately after _ getting to .this paper presents research results of bof - project 01107505 .we would like to thank enrique miranda , marco zaffalon , glenn shafer , vladimir vovk and didier dubois for discussing and questioning some of the views expressed here , even though many of these discussions took place more than a few years ago .sbastien destercke and erik quaeghebeur have read and commented on earlier drafts .we are also grateful for the insightful and generous comments of three reviewers , which led us to better discuss the significance and potential applications of our results , and helped us improve the readability of this paper .in this appendix , we have gathered proofs for the most important results in the paper . we begin with a proof of proposition [ prop : walley ] .although similar results were proved for bounded gambles by walley , and by williams before him , our proof also works for the extension to possibly unbounded gambles we are considering in this paper . for the first statement, we only give a proof for the first two inequalities .the proof for the remaining inequality is similar .for the first inequality , we may assume without loss of generality that and is therefore a real number , which we denote by .so we know that and therefore , by d2 .it then follows from eq .that . to prove the second inequality ,assume _ ex absurdo _ that , then it follows from eqs . and that there are real and such that , and .by d3 , , but this contradicts d1 , since .we now turn to the second statement .as announced in footnote [ fn : well - defined ] , we may assume that the sum of the terms and is well - defined .if either of these terms is equal to , the resulting inequality then holds trivially , so we may assume without loss of generality that both terms are strictly greater than .consider any real and , then by eq .we see that both and .hence \in{\mathcal{r}}$ ] , by d3 , and therefore , using eq . again .taking the supremum over all real and leads to the desired inequality . to prove the third statement ,first consider .since by d4 , if and only if , we get , using eq . for , consider that , where the last equality follows from d1 and d2 . to prove the sixth statement ,observe that implies that and therefore , by d2 .now consider any real such that , then by d3 , .hence and by taking suprema and considering eq ., we deduce that indeed . for the final statement ,assume that is a real number for all . also observe that for all non - empty .define the gamble as follows : for all , where .we have to prove that .we may assume without loss of generality that [ because otherwise the inequality holds trivially ] .fix , and consider the gamble .also consider any . if then , using eq . .if then again , by d2 . since is -conglomerable , it follows that , whence , again using eq . .hence , where .consequently , where we use the second statement , and the fact that and implies that the sum on the right - hand side of the inequality is well - defined as an extended real number .we have already argued that any coherent set of really desirable gambles that includes , must contain all gambles [ by d3 and d5 ] . by d2 and d3, it must therefore include the set .if we can show that is coherent , i.e. 
, satisfies d1d4 and d5 , then we have proved that is the natural extension of .this is what we now set out to do . to prove that d3 and d4 hold , consider any and in , and any non - negative real numbers and .we know there are selections and such that and .but is a selection as well [ because the satisfy d3 and d4 ] , and , whence indeed . to conclude ,we show that d5 is satisfied .consider any cut of .consider a gamble and assume that for all .we must prove that .let and , so is the disjoint union of and . for , implies that , by d1 . for ,we invoke lemma [ lem : contingency ] to find that there is some -selection such that .now construct a selection as follows .consider any in . if for some [ unique , because is a cut ] , let .otherwise let .then so indeed ; the first equality can be seen as immediate , or as a consequence of lemma [ lem : decomposition ] , and the second inequality holds because we have just shown that for all .the rest of the proof now follows from lemma [ lem : contingency ] .[ lem : decomposition ] let be any non - terminal situation , and let be any cut of . consider a -selection , and let , for any , be the -selection given by if the non - terminal situation follows , and otherwise .moreover , let be the -called off -selection for ( as defined after theorem [ theo : concatenation ] ) .then \\ & = { { \mathcal{g}}}^{{{\mathcal{s}}}}_u + \sum_{u\in u\setminus{\omega}}i_{{{{\uparrow}u}}}{{\mathcal{g}}}^{{{\mathcal{s}}}_u}_{\omega}={{\mathcal{g}}}^{{{\mathcal{s}}}^u}_{\omega}+\sum_{u\in u\setminus{\omega}}i_{{{{\uparrow}u}}}{{\mathcal{g}}}^{{{\mathcal{s}}}_u}_{\omega}. \end{aligned}\ ] ] it is immediate that the second equality holds ; see eq . for the third .for the first equality , it obviously suffices to consider the values of the left- and right - hand sides in any for .the value of the right - hand side is then , using eqs . and, [ lem : avoiding - partial - loss ] consider any non - terminal situation and any -selection . then it does nt hold that ( on ) . as a corollary ,consider any cut of , and the gamble on defined by . then it does nt hold that ( on ) .define the set , and its ( relative ) complement . if then , by eq ., so we can assume without loss of generality that is non - empty .consider any minimal element of , meaning that there is no in such that [ there is such a minimal element in because of the bounded horizon assumption ] .so for all we have that .choose in such that [ this is possible because satisfies d1 ] .this brings us to the situation . if , then choose in such that [ again possible by d1 ] . if then we know that for any choice of in .we can continue in this way until we reach a terminal situation after a finite number of steps [ because of the bounded horizon assumption ]. moreover it therefore ca nt hold that ( on ) . to prove the second statement ,consider the -called off -selection derived from by letting if ( follows and ) strictly precedes some in , and zero otherwise .then for all that go through , where [ see also eq . 
] .now apply the above result for the -selection .it clearly suffices to prove the necessity part .assume therefore that , meaning [ definition of the set ] that there is some selection such that .let be the -selection defined by letting if , and zero otherwise .it follows from lemma [ lem : decomposition ] [ use the cut of made up of and the terminal situations that do not follow that + \sum_{{\omega}'\not\in{{{\uparrow}t}}}i_{{{{\uparrow}{\omega}'}}}{{\mathcal{g}}}^{{{\mathcal{s}}}}_{\omega}({\omega}'),\ ] ] whence , for all , then , by , the proof is complete if we can prove that .assume _ ex absurdo _ that .consider the cut of made up of and the terminal situations that do nt follow . applying lemma [ lem : avoiding - partial - loss ] for this cut and for the initial situation , we see that there must be some such that . butthis contradicts . for the first statement , consider a terminal situation and a gamble on . then and therefore if and only if , by d1 and d2 .using eq . , we find that indeed . by conjugacy , as well . for the second statement, consider any , then we must show that .but the -measurability of tells us that , and this gamble belongs to if and only if , by d1 and d2 .now use eq . .first , consider an immediate prediction model , .define sceptic s move spaces to be and his gain functions by for all and . clearly p1 and p2 are satisfied , because each is a convex cone by d3 and d4 . but so is the coherence requirement c. indeed , if it were nt satisfied there would be some non - terminal situation and some gamble in such that for all in , contradicting the coherence requirement d1 for are thus led to a coherent probability protocol .we show there is matching .consider any non - terminal situation , and any -selection .for all terminal situations , or in other words , selections and strategies are in a one - to - one correspondence ( are actually the same things ) , and the corresponding gamble and capital processes are each other s inverses .it is therefore immediate from eqs . and that .conversely , consider a coherent probability protocol with move spaces and gain functions for all non - terminal .define . by a similar argument to the one above , we see that , where the are the predictive lower previsions associated with the sets . but each is a convex cone of gambles by p1 and p2 , and by c we know that for all non - terminal situations and all gambles in there is some in such that .this means that the conditions for lemma [ lem : equivalence-1 ] are satisfied , and therefore also , where the are the predictive lower previsions associated with the immediate prediction model that is the smallest convex cone containing all non - negative gambles and including .[ lem : equivalence-1 ] consider , for each non - terminal situation , a set of gambles on such that ( i ) is a convex cone , and ( ii ) for all there is some in such that .then each set is a coherent set of really desirable gambles on .moreover , all predictive lower previsions obtained using the sets coincide with the ones obtained using the .fix a non - terminal situation .we first show that is a coherent set of really desirable gambles , i.e. , that d1d4 are satisfied .observe that is the smallest convex cone of gambles including the set and containing all non - negative gambles .so d2d4 are satisfied . 
to prove that d1 holds , consider any and assume _ ex absurdo _ that .then there are in , , and such that , whence and therefore and .but by ( ii ) , there is some in such that , whence .this contradicts .we now move to the second part .consider any gamble on .fix in and .first consider any -selection associated with the , i.e. , such that for all .since reality can only make a finite and _ bounded _ number of moves , whatever happens , it is possible to choose for each non - terminal such that for all in that follow .define the -selection associated with the by for all non - terminal that follow .clearly , and therefore since this inequality holds for all , we find that .conversely , consider any -selection associated with the .for all , we have that there are in , , and such that . define the -selection associated with the by .clearly then also , and therefore this proves that indeed .consider any -gamble on . recall that it is implicitly assumed that is again a -gamble .then we have to prove that .let , for ease of notation , , so the -gamble is -measurable , and we have to prove that .now , there are two possibilities. first , if is a terminal situation , then , on the one hand , by proposition [ prop : prevision - properties - general].1 . on the other hand ,again by proposition [ prop : prevision - properties - general].1 , now , since is a cut of , the unique element of that goes through , is , and therefore , again by proposition [ prop : prevision - properties - general].1 .this tells us that in this case indeed . secondly ,suppose that is not a terminal situation .then it follows from proposition [ prop : walley].7 and the cut conglomerability of that [ recall that and that .it therefore remains to prove the converse inequality .choose , then using eq .we see that there is some -selection such that on all paths that go through .invoke lemma [ lem : decomposition ] , using the notations introduced there , to find that now consider any . if is a terminal situation , then by proposition [ prop : prevision - properties - general].1 , , and therefore eq . yields also taking into account that [ see eq . ] . if is not a terminal situation then for all , eq .yields and since is a -selection , this inequality together with eq .tells us that , and therefore , for all , if we combine the inequalities and , and recall eq . , we get that .since this holds for all , we may indeed conclude that .the condition is clearly sufficient , so let us show that it is also necessary .suppose that , then there is some -selection such that , by theorem [ theo : natex ] [ or lemma [ lem : contingency ] ] .define , for any , the selection as follows : and elsewhere . then , by lemma [ lem : decomposition ] , now fix any in .if is a terminal situation , then it follows from the equality above that if is not a terminal situation , we get for all : whence , by taking the supremum of all , where the last inequality follows since by lemma [ lem : avoiding - partial - loss ] [ with and .now recall that is equivalent to [ see eq . ] . because for all , it follows that , and it therefore suffices to prove the inequality for .we work with the upper probability of the complementary event .it is given by because is -measurable , we can ( and will ) consider as an event on . 
in the expression , we may assume that , indeed , if we had and for some -selection , then it would follow that , contradicting lemma [ lem : avoiding - partial - loss ] .fix therefore and and consider the selection such that for all and let be zero elsewhere . here = \alpha\delta\prod_{t{\sqsubseteq}v{\sqsubset}s}[1+\delta(m_v - h_v(u ) ) ] , \label{eq : weak - law-1}\ ] ] where is any element of that follows .recall again that , so if we choose , we are certainly guaranteed that and therefore indeed . after some elementary manipulations we get for any and any : \ ] ] where the second equality follows from eq . .[ the is -measurable . ]if we let for ease of notation , then we get = \alpha\sum_{t{\sqsubseteq}s{\sqsubset}u } \prod_{t{\sqsubseteq}v{\sqsubset}s}[1+\delta\xi_v ] -\alpha\sum_{t{\sqsubseteq}s{\sqsubset}u } \prod_{t{\sqsubseteq}v{\sqsubseteq}s}[1+\delta\xi_v]\\ & = \alpha-\alpha\prod_{t{\sqsubseteq}v{\sqsubset}u}[1+\delta\xi_v ] = \alpha-\alpha\prod_{t{\sqsubseteq}v{\sqsubset}u}[1+\delta(m_v - h_v(u))]\end{aligned}\ ] ] for all in .then it follows from that if we can find an such that \geq1\ ] ] whenever belongs to , then this is an upper bound for . by taking logarithms on both sides of the inequality above ,we get the equivalent condition \geq0.\ ] ] since for , and by our previous restrictions on , we find & \geq\sum_{t{\sqsubseteq}s{\sqsubset}u}\delta(m_s - h_s(u ) ) -\sum_{t{\sqsubseteq}s{\sqsubset}u}[\delta(m_s - h_s(u))]^2\\ & \geq\delta\sum_{t{\sqsubseteq}s{\sqsubset}u } [ m_s - h_s(u)]-\delta^2n_u(u)b^2\\ & = n_u(u)\delta\left[-g_u(u)-b^2\delta\right].\end{aligned}\ ] ] but for all , , so for all such > n_u(u)\delta(\epsilon - b^2\delta).\ ] ] if we therefore choose such that for all , , or equivalently , then the above condition will indeed be satisfied for all , and then is an upper bound for .the tightest ( smallest ) upper bound is always ( for all ) achieved for . replacing by its minimum allows us to get rid of the -dependence , so we see that .we previously required that , so if we use this value for , we find that we have indeed proved this inequality for .10 g. boole . .dover publications , new york , 1847 , reprint 1961 .m. a. campos , g. p. dimuro , a. c. da rocha costa , and v. kreinovich .computing 2-step predictions for interval - valued finite stationary markov chains .technical report utep - cs-03 - 20a , university of texas at el paso , 2003 .f. g. cozman .credal networks . , 120:199233 , 2000 . f. g. cozman .graphical models for imprecise probabilities . , 39(2 - 3):167184 , june 2005 .statistical theory : the prequential approach ., 147:278292 , 1984 .dawid and v. g. vovk .prequential probability : principles and properties ., 5:125162 , 1999 . g. de cooman and f. hermans . on coherent immediate prediction: connecting two theories of imprecise probability . in g.de cooman , j. vejnarova , and m. zaffalon , editors , _isipta 07 proceedings of the fifth international symposium on imprecise probability : theories and applications _ , pages 107116 .sipta , 2007 .g. de cooman and e. miranda .symmetry of models versus models of symmetry . in w.l. harper and g. r. wheeler , editors , _ probability and inference : essays in honor of henry e. kyburg , jr ._ , pages 67149 .king s college publications , 2007 .g. de cooman and m. zaffalon . updating beliefs with incomplete observations ., 159(1 - 2):75125 , november 2004 . b. de finetti . .einaudi , turin , 1970 .b. de finetti . 
john wiley & sons , chichester , 1974 - 1975 . english translation of the preceding work , two volumes .
gärdenfors and n .- e . cambridge university press , cambridge , 1988 .
m. goldstein . the prevision of a prevision . , 87:817 - 819 , 1983 .
w. hoeffding . probability inequalities for sums of bounded random variables . , 58:13 - 30 , 1963 .
c. huygens . . reprinted in volume xiv of the following collected works .
ch . huygens . . martinus nijhoff , den haag , 1888 - 1950 . twenty-two volumes . available in digitised form from the bibliothèque nationale de france ( ` http://gallica.bnf.fr ` ) .
igor o. kozine and lev v. utkin . interval-valued finite markov chains . , 8(2):97 - 113 , april 2002 .
h. e. kyburg jr . and h. e. smokler , editors . . wiley , new york , 1964 . second edition ( with new material ) 1980 .
c. manski . . springer-verlag , new york , 2003 .
e. miranda and g. de cooman . marginal extension in the theory of coherent lower previsions . , 46(1):188 - 225 , september 2007 .
s. b. needleman and c. d. wunsch . a general method applicable to the search for similarities in the amino acid sequence of two proteins . , 48:443 - 453 , 1970 .
f. p. ramsey . truth and probability ( 1926 ) . in r. b. braithwaite , editor , _ the foundations of mathematics and other logical essays _ , chapter vii , pages 156 - 198 . kegan paul , trench , trübner & co. , london , 1931 . reprinted in and .
g. shafer . bayes's two arguments for the rule of conditioning . , 10:1075 - 1089 , 1982 .
g. shafer . a subjective interpretation of conditional probability . , 12:453 - 466 , 1983 .
g. shafer . conditional probability . , 53:261 - 277 , 1985 .
g. shafer . . the mit press , cambridge , ma , 1996 .
g. shafer . the significance of jacob bernoulli's _ ars conjectandi _ for the philosophy of probability today . , 75:15 - 32 , 1996 .
g. shafer , p. r. gillett , and r. scherl . the logic of events . , 28:315 - 389 , 2000 .
g. shafer , p. r. gillett , and r. b. scherl . a new understanding of subjective probability and its generalization to lower and upper prevision . , 33:1 - 49 , 2003 .
g. shafer and v. vovk . . wiley , new york , 2001 .
v. vovk , a. gammerman , and g. shafer . . springer , new york , 2005 .
d. škulj . finite discrete time markov chains with interval probabilities . in j. lawry , e. miranda , a. bugarin , s. li , m. a. gil , p. grzegorzewski , and o. hryniewicz , editors , _ soft methods for integrated uncertainty modelling _ , pages 299 - 306 . springer , berlin , 2006 .
d. škulj . regular finite markov chains with interval probabilities . in g. de cooman , j. vejnarova , and m. zaffalon , editors , _ isipta 07 proceedings of the fifth international symposium on imprecise probability : theories and applications _ , pages 405 - 413 . sipta , 2007 .
p. walley . . chapman and hall , london , 1991 .
p. walley . measures of uncertainty in expert systems . , 83(1):1 - 58 , may 1996 .
p. walley . towards a unified theory of imprecise probability . , 24:125 - 148 , 2000 .
p. walley , r. pelessoni , and p. vicig . direct algorithms for checking consistency and making inferences from conditional probability assessments . , 126:119 - 151 , 2004 .
l. wasserman . . springer , new york , 2004 .
m. williams . notes on conditional previsions . technical report , school of mathematical and physical science , university of sussex , uk , 1975 .
m. williams . notes on conditional previsions . , 44:366 - 383 , 2007 . revised journal version of the preceding technical report .
|
we give an overview of two approaches to probability theory where lower and upper probabilities , rather than probabilities , are used : walley's behavioural theory of imprecise probabilities , and shafer and vovk's game-theoretic account of probability . we show that the two theories are more closely related than would be suspected at first sight , and we establish a correspondence between them that ( i ) has an interesting interpretation , and ( ii ) allows us to freely import results from one theory into the other . our approach leads to an account of probability trees and random processes in the framework of walley's theory . we indicate how our results can be used to reduce the computational complexity of dealing with imprecision in probability trees , and we prove an interesting and quite general version of the weak law of large numbers .
|
small - angle x - ray scattering ( saxs ) and small - angle neutron scattering ( sans ) are well - established and widely used techniques for studying inhomogeneities on length scales from near - atomic scale ( 1 nm ) up to microns ( 1000 nm ) .recently , there has been an increasing emphasis and importance of nanoscale materials , due to the distinct physical and chemical properties inherent in these materials .this , together with the significant advances in x - ray and neutron sources , has resulted in the dramatically increased use of saxs and sans for characterizing nanoscale materials and self - assembled systems .for example , these techniques are used to investigate polymer blends , microemulsions , geological materials , bones , cements , ceramics and nanoparticles .these measurements are often made over a range of length scales and in real time during materials processing or other reactions such as synthesis .however , there has been less progress in saxs and sans data analysis , although some analysis software is available .for example , programs based on igor pro primarily for the reduction and analysis of sans and ultra - small - angle neutron scattering ( usans ) are available from nist .prinsas has been developed for the analysis of sans , usans and saxs data for geological samples and other porous media .primus and atsas 2.1 are used primarily for the analysis of biological macromolecules in solutions , but can be used for other systems such as nanoparticles and polymers .fish is another sans and usans fitting program developed at isis .the indra and irena usaxs data reduction and analysis package developed at aps is also based on igor .both of these latter programs offer several advanced features , including multiple form factor choices and background reduction routines . while these programs provide a powerful analysis capability , they can be complicated to use and some are based on commercial software .this has motivated the development of a simple , easy to use analysis package . 
in this paper, we describe saxsfit - a program developed to fit saxs and sans data for systems of particles or pores with a distribution of particle ( pore ) sizes .saxsfit is easy to use and applicable to a wide variety of materials systems .the program is most appropriate to low concentrations of particles or pores , due to the approximations used , but it does account for interparticle scattering within the local monodisperse approximation .the program is based on java and is readily portable with a user - friendly graphical interface .the emphasis of saxsfit is to provide an easy - to - use analysis package primarily for novices , but also of use to experts .saxsfit is written in java ( sdk 1.4.2 ) and provides a graphical user interface ( figure [ fig1 ] ) to select and adjust parameters to be used in the fit , change the plotting display and range of data to be used , calculating ` initial guess ' patterns and running the fit .it uses the algorithms of a matlab - based program .the advantage of using java is to provide a stand - alone program which is platform - independent , along with having a user - friendly graphical interface .saxsfit is also available as a windows executable .saxsfit can read ascii data files ( comma , space , or tab delimited ) , with or without a non - numerical header , which consist of two column ( , ) or three - column ( , , error bars ) data .any subsequent columns in the data file are ignored .once a data curve is successfully imported from an input file , it is plotted in a separate window and the fitting buttons are enabled .the error bars are also plotted if the input file contains them .the plot can be manipulated by changing the -range and selecting whether it is log - log or linear .initial guess and fitted curves are displayed when they are calculated .the -min and -max values of the data to be fitted are also shown as vertical lines and can be altered by changing the appropriate text boxes .several fitting parameters are available , with the option to fit or fix their values .three distributions of particle / pore sizes are available : log - normal , schulz and gaussian .these use the same two fitting parameters , ` particle / pore size ' ( ) and ` dispersion ' ( ) , and are detailed in section [ sec3 - 1 ] .the units for the pore size are the inverse of the units of the data ( i.e. for data in , or nm for data in nm ) . 
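as a concrete illustration of the input format described above , the following python lines write a hypothetical three-column , comma-delimited file with a single non-numerical header line ; the file name and all numbers are invented , and the header and delimiter handling are as described in the text .

```python
# Hypothetical example of preparing an input file in one of the accepted
# formats: three comma-delimited columns (q, intensity, error) preceded by a
# non-numerical header line.  All values here are invented.
rows = [(0.10, 1250.3, 35.2),
        (0.15, 980.7, 28.4),
        (0.20, 610.2, 21.9),
        (0.25, 340.8, 15.6)]

with open('example_saxs_data.txt', 'w') as fh:
    fh.write('q (1/nm), intensity (a.u.), error\n')   # non-numerical header
    for q, intensity, error in rows:
        fh.write(f'{q:.3f}, {intensity:.1f}, {error:.1f}\n')
```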
a second size distributioncan also be included in the fit .it has been shown that the choice of distribution function does not dramatically affect the final result .a constant background and/or power law can also be included .advanced options include the ability to change the maximum number of iterations and the weighting of the data .again there are several options : a constant weighting ( ) , statistical weighting ( ) , or uncertainty weighting ( - only applicable where data error bars ( ) have been imported ) .two output files are produced , consisting of the fit ( a two - column ascii file ) and a log file ( plain text ) showing the values of the parameters at each iteration of the fitting process , and the final result including parameter uncertainties , reduced and goodness of fit ( value ) .these are described in more detail in section [ stats ] .the final parameters are also displayed on the control panel .users must be aware of the assumptions made in the modeling of the data , which uses a hard sphere model with a local monodisperse approximation .strongly interacting systems , for example systems with a high degree of periodicity , are outside the scope of this approximation .the user is responsible for understanding the applicability of this approximation to their system , and ensuring that the fitted results are physically meaningful .the small angle scattering intensity is related to the scattering cross section by where is the incident flux ( number of photons , or neutrons , per area per second ) , is the illuminated area on the sample , is the sample thickness and is the solid angle subtended by a pixel in the detector . for saxs , the scattering cross - section is calculated from the structure factor and particle / pore size distribution from ^ 2 s(qr ) dr \label{eq2}\ ] ] where is the electron radius , is the electron density contrast , is the number density , is the number fraction particle / pore size distribution ( normalized so that the integral over is unity ) , is the spherical form factor , and is the structure factor .these terms are defined in the following sections .the final equation for the intensity used by the program is ^ 2 s(qr ) dr \label{eq3}\ ] ] where the scale factor is a fitted parameter , equivalent to for sans an expression similar to eq .[ eq4 ] holds .the data are modeled using a hard - sphere model with local monodisperse approximation .this model assumes that the particles are spherical and locally monodisperse in size . in other words ,the particle positions are correlated with their size .this is a good approximation for systems with large polydispersity and the approximation provides meaningful results providing the particles are for the most part not inter - connected ( e.g. , the particle concentration is not too high ) and are not spatially periodic . for porous systems ( with not too high pore concentration ) , the ` particle ' radius is equivalent to the pore size .the user has the choice of three pore / particle size distributions , which use the same fitting parameters ( ` pore size ' radius ) and ( ` dispersion ' ) . if two size distributions are selected , the distribution function is expanded to have the form : where is the number fraction of the second distribution and and are the and parameters for the distribution . for example , a 50:50 mixture by number fraction would have . 
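as a rough numerical sketch of the model just described ( with a single size distribution for simplicity ) , the following python code evaluates an intensity of the form of eq . [ eq3 ] . it is not saxsfit's own code : the log-normal and percus-yevick hard-sphere expressions below are standard textbook forms which may differ in detail from the definitions used by the program ( given in the following section ) , the relation between the hard-sphere radius and the physical radius is assumed to be r_hs = k r , and the integration bounds are our own choice .

```python
# Rough sketch of the scattering model -- not SAXSFit's own code.  It combines
# a log-normal size distribution, the standard sphere form-factor amplitude and
# a Percus-Yevick hard-sphere structure factor in the local monodisperse
# approximation; prefactors, normalisations and bounds are our own assumptions.
import numpy as np

def lognormal(r, r0, sigma):
    """Textbook number-fraction log-normal distribution of radii."""
    return np.exp(-0.5 * (np.log(r / r0) / sigma) ** 2) / (r * sigma * np.sqrt(2.0 * np.pi))

def sphere_amplitude(q, r):
    """Standard spherical form-factor amplitude F(qR)."""
    x = q * r
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3

def py_structure_factor(q, r_hs, eta):
    """Percus-Yevick hard-sphere S(q) = 1 / (1 + 24*eta*G(A)/A) with A = 2*q*r_hs."""
    a = 2.0 * q * r_hs
    alpha = (1.0 + 2.0 * eta) ** 2 / (1.0 - eta) ** 4
    beta = -6.0 * eta * (1.0 + eta / 2.0) ** 2 / (1.0 - eta) ** 4
    gamma = 0.5 * eta * (1.0 + 2.0 * eta) ** 2 / (1.0 - eta) ** 4
    g = (alpha * (np.sin(a) - a * np.cos(a)) / a ** 2
         + beta * (2.0 * a * np.sin(a) + (2.0 - a ** 2) * np.cos(a) - 2.0) / a ** 3
         + gamma * (-a ** 4 * np.cos(a)
                    + 4.0 * ((3.0 * a ** 2 - 6.0) * np.cos(a)
                             + (a ** 3 - 6.0 * a) * np.sin(a) + 6.0)) / a ** 5)
    return 1.0 / (1.0 + 24.0 * eta * g / a)

def intensity(q, scale, r0, sigma, eta, k=1.1, background=0.0):
    """scale * integral of f(r) F(qr)^2 S(q, k*r) dr  +  background."""
    r = np.linspace(r0 * np.exp(-4.0 * sigma), r0 * np.exp(4.0 * sigma), 400)
    f = lognormal(r, r0, sigma)
    integrand = (f[None, :] * sphere_amplitude(q[:, None], r[None, :]) ** 2
                 * py_structure_factor(q[:, None], k * r[None, :], eta))
    # trapezoidal rule over r, one row per value of q
    return scale * np.sum(0.5 * (integrand[:, 1:] + integrand[:, :-1]) * np.diff(r), axis=1) + background

q = np.linspace(0.05, 3.0, 200)   # q in 1/nm, so radii are in nm
print(intensity(q, scale=1.0, r0=2.0, sigma=0.3, eta=0.1)[:5])
```

a two-population fit of the kind described above would simply replace f(r) by the number-fraction-weighted sum of two such distributions .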
to model a situation involving a mixture with differing contrasts , would be weighted by the different contrast values .the user has a choice of three distributions , as follows . ^ 2}{\sigma^2}\right ) \cdot \frac{1}{r \sigma \sqrt{2 \pi}}\ ] ] this has a maximum at , a mean of , and variance $ ] . where , , and is the gamma function , defined by , and .the schulz distribution is frequently used in sans analysis .it is physically reasonable in that it is skewed towards large sizes and has a shape close to a log - normal distribution . as approaches a gaussian distribution .\ ] ] the gaussian distribution is symmetric about the mean , , and has variance . in practiseit is only useful for systems with low polydispersity ( small ) .the spherical form factor has the following form : \ ] ] the structure factor follows the local monodisperse approximation ( lma ) for hard spheres , given by ^{-1}\ ] ] where is the dimensionless parameter eta ( sometimes referred to as the hard sphere volume fraction , having a value between 0 and 1 ) , is the hard sphere pore / particle radius , defined as , where relates the hard - sphere radius to the physical particle radius , and has the form : / a^3 + \\\gamma ( - a^4 \cos a + 4 [ ( 3 a^2 - 6 ) \cos a + \\ ( a^3 - 6a ) \sin a + 6 ] ) / a^5 \notag\\\end{gathered}\ ] ] where when a second size distribution is included in the fit , it has the same -parameter and values as the first distribution . to set the structure factor to unity, one simply sets .this is appropriate for dilute systems .the program uses a least - squares fitting routine which follows the levenberg - marquardt non - linear regression method to minimize the reduced .the integrals are calculated using the romberg integration method with intervals . since the integral [ eq1 ]must have finite bounds on , these are chosen based on the range of the distribution function , such that .the bounds are calculated numerically as follows : for the log - normal and schulz distributions , the lower bound is and the upper bound . for the gaussian distribution ,the lower bound is the maximum of zero or , and the upper bound is .the reduced and ( goodness of fit ) from the non - linear regression are reported at the end of the fitting procedure .these are common statistical measures and defined as follows : where is the number of data points , is the number of free parameters , are the weightings , is the input data and is the calculated . where and are defined above , and is the average of the values ( a constant ) .the parameter uncertainties are obtained by calculating the covariance matrix , from where is the jacobian matrix of partial derivatives and is a diagonal matrix where is the weighting on the data point .finally the reported parameter uncertainties are twice the square root of the diagonals of i.e. .this is two standard deviations , which for a gaussian distribution of errors represents a 95% confidence interval .figure 2 shows examples of data and the fitted result for nanoporous methyl silsesquioxane films , formed by spin - coating a solution of the silsesquioxane along with a sacrificial polymer ( ` porogen ' ) , and then annealing to remove the polymer and leave behind a nanoporous network . 
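before turning to these examples , the statistics just defined can be written down compactly . the sketch below is ours , not the program's implementation ; in particular we assume the covariance matrix is ( j^t w j )^-1 , the usual weighted least-squares form , with j the n x p jacobian of the model with respect to the free parameters and w the diagonal matrix of weights .

```python
# Sketch of the reported fit statistics: reduced chi-squared, R^2 and the
# 2-sigma parameter uncertainties.  Not SAXSFit's code; the covariance matrix
# (J^T W J)^-1 is an assumption on our part.
import numpy as np

def fit_statistics(i_data, i_fit, weights, jacobian, n_free_params):
    n = len(i_data)
    residuals = i_data - i_fit
    reduced_chi2 = np.sum(weights * residuals ** 2) / (n - n_free_params)
    r_squared = 1.0 - np.sum(residuals ** 2) / np.sum((i_data - i_data.mean()) ** 2)
    covariance = np.linalg.inv(jacobian.T @ np.diag(weights) @ jacobian)
    two_sigma_uncertainties = 2.0 * np.sqrt(np.diag(covariance))
    return reduced_chi2, r_squared, two_sigma_uncertainties
```

here the weights would be one of the three choices described earlier ( constant , statistical or uncertainty weighting ) .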
as the proportion of porogen is increased , the pores are observed to increase in size . data are shown for films with porogen loadings of 5 to 25 % with the background ( from methyl silsesquioxane ) subtracted . a single log - normal size distribution was fitted to each , the $D$ - parameter was fixed at 1.1 , $\eta$ was fixed to the porosity ( as determined from the porogen loading ) , and no background function was used . the results obtained are given in table [ tab1 ] and show an increase in the pore size with increased porogen loadings , in good agreement with electron microscopy and previous results . ( figure 2 caption : examples of data from nanoporous silsesquioxane films , with different porogen loadings . open symbols : raw data . lines : fitted curves using saxsfit . ) ( table 1 ) figure [ fig3 ] shows data from a nanoporous glass using a three - arm star - shaped polymer as the porogen , which was found to exhibit two pore size distributions . ( figure 3 caption : example of data from nanoporous glass , showing two pore size distributions . open symbols : raw data . line : fitted curve using saxsfit . ) the parameters for the fit are as follows : first distribution : . second distribution : . the number fraction of the second distribution was 88 ± 2 % , which equates to a volume fraction of 16 ± 5 % . the $\eta$ parameter was the same for both distributions and the $D$ - parameter was fixed at 1.1 for both distributions . the $R^2$ value was 0.9951 and the reduced $\chi^2$ was 3.33 . saxsfit is provided as a windows executable ( tested on windows 98 , 2000 and xp ) , or as a java .jar executable ( tested on linux ubuntu and mac os x 10.4 ) . the saxsfit programs and user manual are available from http://www.irl.cri.nz/saxsfiles.aspx . saxsfit is a useful program for fitting small angle x - ray and neutron scattering data , using a hard sphere model with the local monodisperse approximation . saxsfit provides an easy - to - use analysis package for novices and experts . it is stand - alone software and can be used in a variety of software environments . funding was provided in part by the new zealand foundation for research , science and technology under contract co8x0409 . portions of this research were carried out at the stanford synchrotron radiation laboratory , a national user facility operated by stanford university on behalf of the u.s . department of energy , office of basic energy sciences . the authors also wish to thank benjamin gilbert , shirlaine koh , and eleanor schofield for testing and helpful suggestions for improvement , and peter ingham for assistance with the coding .
|
saxsfit is a computer analysis program that has been developed to assist in the fitting of small - angle x - ray and neutron scattering spectra primarily from nanoparticles ( nanopores ) . the fitting procedure yields the pore or particle size distribution and eta parameter for one or two size distributions ( which can be log - normal , schulz , or gaussian ) . a power - law and/or constant background can also be included . the program is written in java so as to be stand - alone and platform - independent , and is designed to be easy for novices to use , with a user - friendly graphical interface .
|
image registration is an essential operation in a variety of medical imaging applications including disease diagnosis , longitudinal studies , data fusion , image segmentation , image guided therapy , volume reconstruction , pathology detection , and shape measurement ( ) .it is the process of finding a geometric transformation between a pair of scenes , the _ source scene _ and the _ target scene _ , such that the similarity between the transformed source scene ( _ registered source _ ) and target scene becomes optimum .there are many challenges in the registration of medical images . among these ,those that stem from the artifacts associated with images include the presence of noise , interpolation artifacts , intensity non - uniformities , and intensity non - standardness .although considerable research has gone into addressing the effects of noise ( ) , interpolation ( ) , and non - uniformity in image registration ( ) , little attention has been paid to study the effects of image intensity standardization / non - standardness in image registration .this aspect constitutes the primary focus of this paper .mr image intensities do not possess a tissue specific numeric meaning even in images acquired for the same subject , on the same scanner , for the same body region , by using the same pulse sequence ( ) .not only a registration algorithm needs to capture both large and small scale image deformations , but it also has to deal with global and local image intensity variations . the lack of a standard and quantifiable interpretation of image intensities may cause the geometric relationship between homologous points in mr images to be affected considerably .current techniques to overcome these differences / variations fall into two categories .the first class of methods uses intensity modelling and/or attempts to capture intensity differences during the registration process .the second group constitutes post processing methods that are independent of registration algorithms .notable studies that have attempted to solve this problem within the first class are ( ) .while global intensity differences are modelled with a linear multiplicative term in ( ) , local intensity differences are modelled with basis functions . in ( ) ,a locally affine but globally smooth transformation model has been developed in the presence of intensity variations which captures intensity variations with explicitly defined parameters . in ( ) , intensities of one image are mapped into those of another via an adaptive transformation function . although incorporating intensity modelling into the registration algorithms improves the accuracy , simultaneous estimation of intensity and geometric changes can be quite difficult and computationally expensive . 
the papers that belong to the second group of methods are ( ) in which a two - step method is devised for standardizing the intensity scale in such a way that for the same mri protocol and body region , similar intensities achieve similar tissue meaning .the methods transform image intensities non - linearly so that the variation of the overall mean intensity of the mr images within the same tissue region across different studies obtained on the same or different scanners is minimized significantly .furthermore , the computational cost of these methods is considerably small in comparison to methods belonging to the first class .once tissue specific meanings are obtained , quantification and image analysis techniques , including registration , segmentation , and filtering , become more accurate .the non - standardness issue was first demonstrated in ( ) where a method was proposed to overcome this problem .the new variants of this method are studied in ( ) .numerical tissue characterizability of different tissues is achieved by standardization and it is shown that this can significantly facilitate image segmentation and analysis in ( ) . combined effects of non - uniformity correction and standardizationare studied in ( ) and the sequence of operations to produce the best overall _ image quality _ is studied via an interplaying sequence of non - uniformity correction and standardization methods . in( ) , an improved standardization method based on the concept of generalized scale is presented . in ( ) , the performance of standardization methods is compared with the known tissue characterizing property of magnetization transfer ratio ( mtr ) imaging and it is demonstrated that tissue specific intensities may help characterizing diseases .the motivation for the research reported in this paper is the preliminary indication in ( ) of the potential impact that intensity standardization may have on registration accuracy .currently no published study exists that has examined how intensity non - standardness alone may affect registration .the goal of this paper is , therefore , to study the effect of non - standardness on registration in isolation . toward this goal , first intensity non- uniformities are corrected in a set of images , and subsequently , they are standardized to yield a `` clean set '' of images .different levels of non - standardness are then introduced artificially into these images which are then subjected to known levels of affine deformations .the clean set is also subjected to the same deformations .the deformed images with and without non - standardness are separately registered to clean images and the differences in their registration accuracy are quantified to express the influence of non - standardness .the underlying methods are described in section ii and the analysis results are presented in section iii .section iv presents some concluding remarks .we represent a 3d image , called _ scene _ for short , by a pair where is a finite 3d array of voxels , called _ scene domain _ of , covering a body region of the particular patient for whom image data are acquired , and is an intensity function defined on , which assigns an integer intensity value to each voxel .we assume that for all and if and only if there are no measured data for voxel . 
in dealing with standardization issues ,the body region and imaging protocol need to be specified .all images that are analyzed for their dependence on non - standardness for registration accuracy are assumed to come from the same body region and acquired as per the same acquisition protocol .the non - standardness phenomenon is predominant mainly in mr imaging .hence , all image data sets considered in this paper pertain to mri .however , the methods described here are applicable to any modality where this phenomenon occurs ( such as radiography and electron microscopy ) .there are six main components to the methods presented in this paper : ( 1 ) intensity non - uniformity correction , referred to simply as _ correction _ and denoted by an operator ; ( 2 ) intensity standardization denoted by an operator ; ( 3 ) an affine transformation of the scene , denoted by used for the purpose of creating mis - registered scenes ; ( 4 ) introduction of artificial intensity non - standardness denoted by the operator ; ( 5 ) an affine scene transformation that is intended to register a scene with its mis - registered version ; ( 6 ) evaluation methods used to quantify the goodness of scene registration . super scripts and are used to denote , respectively , the scenes resulting from applying correction , standardization , introduction of non - standardness , mis - registration , and registration operations to a given scene .examples : .when a registration operation is applied to a scene , the target scene to which is registered will be evident from the context .the same notations are extended to sets of scenes .for example , if is a given set of scenes for body region and protocol , then , where our approach to study the effect of non - standardness on registration is as follows : ( s1 ) take a set of scenes , pertaining to a fixed and , but acquired from different subjects in routine clinical settings .( s2 ) apply correction followed by standardization to the scenes in to produce the set of _ clean scenes_. is as free from non - uniformities , and more importantly , from non - standardness , as we can practically make . as justified in ( ) , the best order and sequence of these operations to employ in terms of reducing non - uniformities and non - standardness is followed by .this is mainly because any correction operation introduces its own non - standardness .( s3 ) apply different known levels of non - standardness to the scenes in to produce the set .( s4 ) apply different known levels of affine deformations to the scenes in to form the scene set .apply the same deformations to the clean scenes in the set to create . in this manner for any scene , we have the same scene after applying some non - standardness and the same deformation , namely .( s5 ) register each scene to and determine the required affine deformation ( the subscript s indicates standardized " ) . similarly register each to and determine affine deformation ( ns for not standardized " ) needed .( s6 ) analyze the deviations of and from the true applied transformation over all scenes and as a function of the applied level of non - standardness and affine deformations .in the rest of this section , steps s1-s6 are described in detail . _s1 : data sets _ + two separate sets of image data ( i.e. 
, two sets ) are used in this study , both brain mr images of patients with multiple sclerosis , one of them being a t2 weighted acquisition , and the other , a proton density ( pd ) weighted set , with the following acquisition parameters : fast spin echo sequence , 1.5 t ge signa scanner , tr=2500 _ msec _ , voxel size 0.86x0.86x3 . each of the two sets is composed of 10 scenes .since the two data sets for each patient are acquired in the same session with the same repetition time but by capturing different echos , the t2 and pd scenes for each patient can be assumed to be in registration . _non - uniformity correction , standardization _+ for non - uniformity correction , we use the method based on the concept of local morphometric scale called g-_scale _( ) . built on fuzzy connectedness principles , the g-_scale _ at a voxel in a scene is the largest set of voxels fuzzily connected to in the scene such that all voxels in this set satisfy a predefined homogeneity criterion .since the g-_scale _ set represents a partitioning of the scene domain into fuzzy connectedness regions by using a predefined homogeneity criterion , resultant g-_scale _ regions are locally homogeneous , and spatial contiguity of this local homogeneity is satisfied within the g-_scale _ region .g-_scale _ based non - uniformity correction is performed in a few steps as follows .first , g-_scale _ for all foreground voxels is computed .second , by choosing the largest g-_scale _ region , background variation is estimated .third , a correction is applied to the entire scene by fitting a second order polynomial to the estimated background variation .these three steps are repeated iteratively until the largest g-_scale _ region found is not significantly larger than the previous iteration s largest g-_scale _ region .standardization is a pre - processing technique which maps non - linearly image intensity gray scale into a standard intensity gray scale through a training and a transformation step . in the training step ,a set of images acquired for the same body region as per the same protocol are given as input to _learn _ histogram - specific parameters . in the transformation step ,any given image for and is standardized with the estimated histogram - specific landmarks obtained from the training step . in the data sets considered for this study , and represents two different protocols , namely t2 and pd .the training and transformation steps are done separately for the two protocols .the basic premise of standardization methods is that , in scenes acquired for a given , certain tissue - specific landmarks can be identified on the histogram of the scenes .therefore , by matching the landmarks , one can standardize the gray scales .median , mode , quartiles , and deciles , and intensity values representing the mean intensity in each of the largest few g-_scale _ regions have been used as landmarks . additionally , to handle outliers , a `` low '' and `` high '' intensity value ( selected typically at 0 and 99.8 percentiles )are also selected as landmarks . in the training step ,the landmarks are identified for each training scene specified for and intensities corresponding to the landmarks are mapped into an assumed standard scale .the mean values for these mapped landmark locations are computed . 
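a schematic version of this training step — extracting intensity landmarks from each training scene , mapping them onto an assumed standard scale , and averaging the mapped locations — is sketched below in python ; the particular percentile landmarks , the simple foreground threshold , and the standard - scale end points are illustrative assumptions rather than the values used in the cited methods .

```python
import numpy as np

S_MIN, S_MAX = 1.0, 4095.0          # assumed standard intensity scale end points

def scene_landmarks(intensities, pcts=(0.0, 10.0, 20.0, 30.0, 40.0, 50.0,
                                        60.0, 70.0, 80.0, 90.0, 99.8)):
    """percentile landmarks of the foreground intensities of one scene."""
    fg = intensities[intensities > 0]            # crude foreground mask (illustrative)
    return np.percentile(fg, pcts)

def train_standard_landmarks(training_scenes):
    """map each scene's landmarks to the standard scale and average them."""
    mapped = []
    for scene in training_scenes:
        lm = scene_landmarks(scene)
        lo, hi = lm[0], lm[-1]
        mapped.append(S_MIN + (lm - lo) * (S_MAX - S_MIN) / (hi - lo))
    return np.mean(mapped, axis=0)               # mean standard-scale landmark locations
```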
in the transformation step ,the histogram of each given scene to be standardized is computed , and intensities corresponding to the landmarks are determined .sections of the intensity scale of are mapped to the corresponding sections of the standard scale linearly so that corresponding landmarks of scene match the mean landmarks determined in the training step .( the length of the standard scale is chosen in such a manner that the overall mapping is always one - to - one and no two intensities in map into a single intensity on the standard scale . )note that the overall mapping is generally not a simple linear scaling process but , indeed , a non - linear ( piece - wise linear ) operation ; see ( ) for details . in the present study ,standardization is done separately for t2 and pd scenes . _s3 . applying non - standardness _+ to artificially introduce non - standardness into a _ clean scene _ , we use the idea of the inverse of the standardization mapping described in ( ) .a typical standardization mapping is shown in figure [ img : mapping ] . in this figure , only three landmarks are considered - `` low '' and `` high '' intensities and and the median corresponding to the foreground object .there are two linear mappings : the first from ] and the second from ] . $ ] denotes the standard scale .the horizontal axis denotes the non - standard input scene intensity and vertical axis indicates the standardized output scene intensity . in inverse mapping ,since has already been standardized , the vertical axis can be considered as the input scene intensity , , and the horizontal axis can be considered as the output scene intensity , , where mapping the _ clean scene _ through varying the slopes and results in non - standard scenes . by using the values of and within the range of variation observed in initial standardization mappings of corrected scenes ,the non - standard scene intensities can be obtained by where converts any number y to the closest integer y , and denotes the median intensity on the standard scale . 
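continuing the sketch above , the transformation step and the inverse mapping used to inject non - standardness might look as follows ; anchoring the two inverse slopes at the standard - scale median is an illustrative simplification of the mapping described here , not necessarily the exact parameterization of the original method .

```python
import numpy as np

def standardize(intensities, mean_landmarks):
    """piecewise-linear mapping of one scene onto the standard scale
    (transformation step); reuses scene_landmarks() from the sketch above."""
    lm = scene_landmarks(intensities)
    out = np.interp(intensities.astype(float), lm, mean_landmarks)
    return np.rint(out).astype(np.int32)

def add_nonstandardness(std_intensities, s1, s2, mu_s):
    """two-segment inverse mapping about the standard-scale median mu_s:
    stretching by 1/s1 below the median and by 1/s2 above it re-introduces a
    controlled level of non-standardness into a clean scene."""
    x = std_intensities.astype(float)
    out = np.where(x <= mu_s,
                   mu_s + (x - mu_s) / s1,       # invert the lower linear segment
                   mu_s + (x - mu_s) / s2)       # invert the upper linear segment
    return np.rint(out).astype(np.int32)
```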
based on the fact that the similarity of a pair of registered _ clean scenes _ is higher than the similarity of a pair of registered non - standard scenes , it can be deduced that substantially improved uniformity of tissue meaning between two scenes of the same subject being registered improves registration accuracy . our experimental results demonstrate that scenes are registered better whenever the same tissues are represented by the same intensity levels . we note that , in both tables , most of the entries are less than 1 . this indicates that in both accuracy and consistency tests , the standardized scene registration task wins over the registration of non - standard scenes . table [ table : goodness ] on its own does not convey any information about what the actual accuracies in the winning cases are , or about whether the win happens for t2 scenes only , pd only , or for both . the fact that a majority of the corresponding cells in these tables both indicate wins suggests that accuracy - based wins happen for both t2 and pd scenes . conversely , a favorable value in table [ table : goodness2 ] does not convey any information about whether the high consistency indicated also signals accuracy . thus , accuracy and consistency are to some extent independent factors , and together they give us a more complete picture of the influence of non - standardness on registration . we described a controlled environment for determining the effects of intensity standardization on registration tasks , in which the best image quality ( _ clean scene _ ) was established by the sequence of correction followed by standardization . we introduced several different levels of non - standardness into the _ clean scenes _ and performed nearly 20,000 registration experiments for small , medium and large scale deformations . we compared the registration performance of _ clean scenes _ with the performance of scenes with non - standardness and summarized the resulting goodness values . from the overall accuracy and consistency test results in tables [ table : goodness ] and [ table : goodness2 ] , we conclude that intensity variation between scenes degrades registration performance . having tissue - specific numeric meaning in intensities maximizes the similarity of images , which is the essence of the optimization procedure in registration . standardization is therefore strongly recommended in the registration of images of patients in any longitudinal and follow - up study , especially when image data come from different sites and different scanners of the same or different brands . in this paper , we have taken an introductory look at the problem of the potential influence of intensity non - standardness on registration . this is indeed a small segment of the much larger full problem : unlike the specific intra - modality ( or intra - protocol ) registration task considered here , there are many situations in which the source and the target images may be from different modalities or protocols ( e.g. , ct to mri , pet to mri , and t1 to t2 registration ) , and each such situation may have its own theme of non - standardness . further , these themes may depend on the body region , the scanner , and its brand . we determined that a full consideration of these aspects was just beyond the scope of this paper .
since the sum of squared differences is one of the most appropriate similarity metrics for intra - modality registration , we focused on this metric in our study .but , clearly , more studies of this type in the more general settings mentioned above are needed .thus far , we controlled the computational environment via two factors : standardization and correction .a third important factor , noise , can be also embedded into the framework .it is known that correction itself introduces non - standardness into the scenes and it also enhances noise .investigating the interrelationship between correction and noise suppression algorithms and determining the proper order for these operations has been studied recently ( ) .a question immediately arises as to how standardization affects registration accuracy for different orders of correction and noise filtering . based on the study in ( ), we may conclude that non - uniformity correction should precede noise suppression and that standardization should be the last operation among the three to obtain best image quality .however , it remains unclear as to how a combination of deterministic methods ( standardization and correction ) affects a random phenomenon like noise .it is thus important to study these three phenomena in the future on their own or in relation to how they may influence the registration process , especially in multi - center studies wherein data come from different scanners and brands of scanners .this paper is presented in spie medical imaging - 2010 .the complete version of this paper is published in elsevier pattern recognition letters , vol(31 ) , pp.315323 , 2010 .baci , u. , bai , l. , 2007 .multiresolution elastic medical image registration in standard intensity scale . in : sibgrapi 07 : proceedings of the xx brazilian symposium on computer graphics and image processing ( sibgrapi 2007 ) , pp .305312 .ge , y. , udupa , j. , nyul , l. , wei , l. , grossman , r. , november 2000 . numerical tissue characterization in ms via standardization of the mr image intensity scale . journal of magnetic resonance imaging 12 ( 5 ) , pp. 715721 .guimond , a. , roche , a. , ayache , n. , meunier , j. , january 2001 .three - dimensional multimodal brain warping using the demons algorithm and adaptive intensity corrections .ieee transactions on medical imaging 20 ( 1 ) , pp. 5869 .holden , m. , hill , d. , denton , e. , jarosz , j. , cox , t. , rohlfing , t. , goodey , j. , hawkes , d. , february 2000 .voxel similarity measures for 3-d serial mr brain image registration .ieee transactions on medical imaging 19 ( 2 ) , pp . 94102 .knops , z. , maintz , j. b. a. , viergever , m. a. , pluim , j. p. w. , june 2006 .normalized mutual information based registration using k - means clustering and shading correction .medical image analysis 10 ( 3 ) , pp .432439 .madabhushi , a. , udupa , j. , moonis , g. , 2006 .comparing mr image intensity standardization against tissue characterizability of magnetization transfer ratio imaging .journal of magnetic resonance imaging 24 ( 3 ) , pp .667675 .madabhushi , a. , udupa , j. , souza , a. , february 2005 .generalized scale : theory , algorithms , and application to image inhomogeneity correction . computer vision and image understanding 101 ( 2 ) , pp . 100121 .udupa , j.k . ,odhner , d. , samarasekera , s. , goncalves , r.j . ,iyer , k. , venugopal , k.p . ,furuie , s.s . , 1994 .3dviewnix : an open , transportable , multidimensional , multi - modality , multi - parametric imaging software system . 
in : proceedings of spie : medical imaging , vol . 2164 .
|
acquisition - to - acquisition signal intensity variations ( non - standardness ) are inherent in mr images . standardization is a post processing method for correcting inter - subject intensity variations through transforming all images from the given image gray scale into a standard gray scale wherein similar intensities achieve similar tissue meanings . the lack of a standard image intensity scale in mri leads to many difficulties in tissue characterizability , image display , and analysis , including image segmentation . this phenomenon has been documented well ; however , effects of standardization on medical image registration have not been studied yet . in this paper , we investigate the influence of intensity standardization in registration tasks with systematic and analytic evaluations involving clinical mr images . we conducted nearly 20,000 clinical mr image registration experiments and evaluated the quality of registrations both quantitatively and qualitatively . the evaluations show that intensity variations between images degrades the accuracy of registration performance . the results imply that the accuracy of image registration not only depends on spatial and geometric similarity but also on the similarity of the intensity values for the same tissues in different images .
|
the centerpiece of all life on earth is carbon - based biochemistry . it has repeatedly been surmised that biochemistry based on carbon may also play a pivotal role in extraterrestrial life forms , if existent . this is due to the pronounced advantages of carbon , especially compared to its closest competitor ( i.e. , silicon ) , which include its relatively high abundance , its bonding properties , and its ability to form very large molecules , as it can combine with hydrogen and other elements such as nitrogen and oxygen in a very large number of ways ( goldsmith & owen 2002 ) . in the following , we explore the relative damage to carbon - based macromolecules in the environments of a variety of main - sequence stars , using dna as a proxy and focusing on the effects of photospheric radiation . the radiative effects on dna are considered by applying a dna action spectrum ( horneck 1995 ) that shows that the damage is strongly wavelength - dependent , increasing by more than seven orders of magnitude between 400 and 200 nm . the different regimes are commonly referred to as uv - a , uv - b , and uv - c . the test planets are assumed to be located in the stellar habitable zone ( hz ) . following the concepts of kasting et al . ( 1993 ) , we distinguish between the conservative and generalized hz . stellar photospheric radiation is represented by using realistic spectra taking into account millions or hundreds of millions of lines for atoms and molecules ( castelli & kurucz 2004 , and related publications ) . we also consider the effects of attenuation by an earth - type planetary atmosphere , which allows us to estimate attenuation coefficients appropriate to the cases of earth as today , earth 3.5 gyr ago , and no atmosphere at all ( cockell 2002 ) . our results are presented in figs . 1 , 2 , and 3 . the first two figures show the relative damage to dna due to stars between spectral type f0 and m0 , normalized to today s earth . we also considered planets at the inner and outer edge of either the conservative or generalized hz as well as planets of different atmospheric attenuation . based on our studies we arrive at the following conclusions : ( 1 ) all main - sequence stars of spectral type f to m have the potential to damage dna due to uv radiation . the amount of damage strongly depends on the stellar spectral type , the type of the planetary atmosphere and the position of the planet in the habitable zone ( hz ) ; see cockell ( 1999 ) for previous results . ( 2 ) the damage to dna for a planet in the hz around an f - star ( earth - equivalent distance ) due to photospheric radiation is significantly higher ( factor 5 ) compared to planet earth around the sun , which in turn is significantly higher than for an earth - equivalent planet around an m - star ( factor 180 ) . ( 3 ) we also found that the damage is most severe in the case of no atmosphere at all , somewhat less severe for an atmosphere corresponding to earth 3.5 gyr ago , and least severe for an atmosphere like earth today . ( 4 ) any damage due to photospheric stellar radiation is mostly due to uv - c . the relative importance of uv - b is between 5% ( f - stars ) and 20% ( m - stars ) , while damage due to uv - a is virtually nonexistent ( see fig . 3 ) . our results are of general interest for the future search for planets in stellar hzs ( e.g. ,
turnbull & tarter 2003 ) . they also reinforce the notion that habitability may at least in principle be possible around m - type stars , as previously discussed by tarter et al . ( 2007 ) . note however that a more detailed analysis also requires the consideration of chromospheric uv radiation , especially flares ( e.g. , robinson et al . 2005 ) , as well as the detailed treatment of planetary atmospheric photochemistry , including the build - up and destruction of ozone , as pointed out by segura et al . ( 2003 , 2005 ) and others .
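for readers who wish to reproduce the kind of relative - damage estimate presented above — a stellar surface flux folded with the dna action spectrum and an atmospheric attenuation function , normalized to the present - day sun / earth case — a schematic python version is sketched below ; the wavelength grid , the placeholder action spectrum , and the function signature are illustrative assumptions only .

```python
import numpy as np

def relative_dna_damage(wavelength_nm, stellar_flux, attenuation,
                        solar_flux, earth_attenuation, action_spectrum):
    """biologically weighted irradiance, normalized to today's sun/earth case."""
    damage = np.trapz(stellar_flux * action_spectrum * attenuation, wavelength_nm)
    reference = np.trapz(solar_flux * action_spectrum * earth_attenuation, wavelength_nm)
    return damage / reference

# crude stand-ins for the real inputs, for illustration only:
wl = np.linspace(200.0, 400.0, 401)                      # nm, spanning uv-c through uv-a
action = 10.0 ** (-7.0 * (wl - 200.0) / 200.0)           # ~7 dex drop from 200 to 400 nm
```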
|
we focus on the astrobiological effects of photospheric radiation produced by main - sequence stars of spectral types f , g , k , and m. the photospheric radiation is represented by using realistic spectra , taking into account millions or hundred of millions of lines for atoms and molecules . dna is taken as a proxy for carbon - based macromolecules , assumed to be the chemical centerpiece of extraterrestrial life forms . emphasis is placed on the investigation of the radiative environment in conservative as well as generalized habitable zones .
|
the nasa exoplanet science center ( nexsci ) hosts the sagan workshops , which are annual themed conferences aimed at introducing the latest techiques in exoplanet astronomy to young researchers .the workshops emphasize interaction with data , and include hands - on sessions where participants use their laptops to follow step - by - step tutorials given by experts .the 2012 workshop had the theme `` working with exoplanet light curves , '' and posed special challenges for the conference organizers because the three applications chosen for the tutorials run on different platforms , and because over 160 persons attended , the largest attendance to date .one of the applications , pyke , is a suite of python tools designed to reduce and analyze kepler light curves ; it is called from the command line or from a gui in pyraf . the transit analysis package (tap ) uses markov chain monte carlo ( mcmc ) techniques to fit light curves in the interactive data language ( idl ) environment , and systemic console analyzes transit timing variations ( ttv ) with idl and java - based guis to confirm and detect exoplanets from timing variations in light curve fitting . rather than attempt to run these diverse applications on the inevitable wide range of environments on attendees laptops , the conference organizers , in consulation with the virtual astronomical observatory , chose instead to run the applications on the amazon elastic cloud 2 ( ec2 ) .this paper summarizes the system architecture , the amazon resources consumed , and lessons learned and best practices .the sagan workshop took advantage of the ec2 s capabilities to support virtual machines ( vms ) that can be customized to meet local needs , then replicated , and then released on completion of the jobs .1 shows the system architecture developed to support the sagan workshop .participants logged into one of 20 tutorial servers via a virtual network connection ( vnc ) . the amazon elastic block storage ( ebs )system and the network file system ( nfs ) were used to share common datasets and user home directories across all virtual machines .an idl license server at ipac received license request through an ssh tunnel .the following list describes the architecture component by component and the rationale for the design choices .* one master virtual machine image , built on the cent os 64-bit operating system , was used for all servers .a boot script determined the vm s identity .usernames and passwords were the same on all machines . *1 tb of elastic block storage ( ebs ) , a block - based storage service where volumes appear as disk drives connected to vms , contained applications , tutorial data , and user home directories .applications and tutorial data are installed on vm images , and so data are not lost if a tutorial server fails . *the ec2 m1.2xlarge instance type was chosen to handle the load of 20 tutorial servers .it has enough memory to cache commonly accessed files , mounts all the partitions from the ebs volumes , and exports all partitions via nfs to the tutorial servers . 
* the tutorial servers were ec2 c1.xlarge instance type , with 8 cores and 7 gb ram , chosen because the applications were cpu - bound . server performance was benchmarked with 8 users , but the servers were in fact able to support up to 25 users . * a virtual network computing ( vnc ) server provided remote desktop logins to the tutorial servers . vnc is similar to the x window system , but sends compressed images instead of drawing commands and proved more responsive than x in our tests . each tutorial server ran one vnc server that supported up to 30 connections . screen resolution was set to 1024x768 to balance usability and performance . in practice , the workshop used tigervnc as the server and realvnc as the client . * the tutorial servers were connected via an ssh tunnel to an idl license server at ipac . idl vm sessions think the license server is on localhost , and the license server thinks idl is inside ipac s network . we used autossh to ensure the tunnel was re - established if disconnected . * the amazon aws security rules limited access only to the vnc , ssh and idl ports , and only from the caltech and ipac subnets used to support the workshop . had the sagan workshop s amazon ec2 costs not been met by an educational grant , the total cost of installation , testing and running the workshop sessions would have been as follows : * vm instances : 4,159 hours , $ 2,738 * ebs storage : 1.25 tb , $ 126 * i / o requests : 12 million , $ 1 * snapshot data storage : 22 gb , $ 3 * use of elastic ip addresses : 604 hours , $ 3 * data transfer : 55 gb , $ 5 * total : $ 2,876 . the lessons learned may be summarized as follows : * automate processes wherever possible , as this allows easier management of large numbers of machines and easy recovery in the case of failure . tutorial servers automatically mounted nfs partitions when booted and ssh tunnels automatically reconnected on failure . * test , test , and test again . document and test all the steps required to recover if a vm fails , and step through the tutorials under as close to operational conditions as possible . * develop a failover system . we copied the final software configuration to two local machines for use if amazon failed . * give yourself plenty of time to solve problems . in our case , we needed to assure the idl vendor that licenses would not persist on the cloud , and we needed to understand the poor performance of x for remote access to the cloud . the sagan workshop was funded as part of the sagan program through nasa s exoplanet exploration program . we thank amazon web services for the award of a generous educational grant . ed , gj and mr acknowledge support through nsf oci-0943725 . the vao is jointly funded by nsf and nasa , and is being managed by the vao , llc , a non - profit 501(c)(3 ) organization registered in the district of columbia and a collaborative effort of the association of universities for research in astronomy ( aura ) and the associated universities , inc . we thank dr . peter plavchan for suggesting we examine vnc .
|
the nasa exoplanet science institute ( nexsci ) hosts the annual sagan workshops , thematic meetings aimed at introducing researchers to the latest tools and methodologies in exoplanet research . the theme of the summer 2012 workshop , held from july 23 to july 27 at caltech , was to explore the use of exoplanet light curves to study planetary system architectures and atmospheres . a major part of the workshop was to use hands - on sessions to instruct attendees in the use of three open source tools for the analysis of light curves , especially from the kepler mission . each hands - on session involved the 160 attendees using their laptops to follow step - by - step tutorials given by experts . one of the applications , pyke , is a suite of python tools designed to reduce and analyze kepler light curves ; these tools can be invoked from the unix command line or a gui in pyraf . the transit analysis package ( tap ) uses markov chain monte carlo ( mcmc ) techniques to fit light curves under the interactive data language ( idl ) environment , and transit timing variations ( ttv ) uses idl tools and java - based guis to confirm and detect exoplanets from timing variations in light curve fitting . rather than attempt to run these diverse applications on the inevitable wide range of environments on attendees laptops , they were run instead on the amazon elastic cloud 2 ( ec2 ) . the cloud offers features ideal for this type of short term need : computing and storage services are made available on demand for as long as needed , and a processing environment can be customized and replicated as needed . the cloud environment included an nfs file server virtual machine ( vm ) , 20 client vm s for use by attendees , and a vm to enable ftp downloads of the attendees results . the file server was configured with a 1 tb elastic block storage ( ebs ) volume ( network - attached storage mounted as a device ) containing the application software and attendees home directories . the clients were configured to mount the applications and home directories from the server via nfs . all vm s were built with centos version 5.8 . attendees connected their laptops to one of the client vms using the virtual network computing ( vnc ) protocol , which enabled them to interact with a remote desktop gui during the hands - on sessions . we will describe the mechanisms for handling security , failovers , and licensing of commercial software . in particular , idl licenses were managed through a server at caltech , connected to the idl instances running on amazon ec2 via a secure shell ( ssh ) tunnel . the system operated flawlessly during the workshop .
|
in this paper we consider the log - convexity of the rate region in 802.11 wlans .the rate region is defined as the set of achievable throughputs and we begin by noting that the 802.11 rate region is well known to be non - convex . this is illustrated , for example , in figure [ fig : rateregion ] for a simple two - station wlan ( where are described in section [ sec : nm ] ) .the shaded region indicates the set of achievable rate pairs ( , ) where is the throughput of station , .it can be seen from this figure that the maximum throughput achievable by the network when only a single station transmits ( the extreme point along the x- or y - axes ) is greater than that when both stations are active ( e.g. the extreme point along the line ) .this non - convex behaviour occurs because in 802.11 there is a positive probability of colliding transmissions when multiple stations are active , leading to lost transmission opportunities . in figure[ fig : lograteregion ] the same data is shown but now replotted as the log rate region , i.e. the set of pairs ( , ) . evidently , the log rate region is convex .our main result in this paper is to establish that this behaviour is true in general , not just in this particular example .that is , although the 802.11 rate region is non - convex , it is nevertheless log - convex . the implications of this for optimisation - based approaches to the design and analysis of fair throughput allocation schemes are discussed after the result . in a wlan context , rate region properties have mainly been studied for aloha networks .the log - convexity of the aloha rate region in general mesh network settings has been established by several authors in the context of utility optimisation .all of these results make the standard aloha assumption of equal idle and busy slot durations , whereas in 802.11 wlans highly unequal slot durations are the norm e.g. it is not uncommon to have busy slot durations that are 100 times larger than the phy idle slot duration .this is key to improving throughput efficiency but also fundamentally alters other throughput properties since the mean mac slot duration and achieved rate are now strongly coupled .we note that a number of recent papers have considered algorithms that seek to achieve certain fair solutions ( proportionally fair , max - min fair ) in 802.11 networks , e.g see and references therein . for the wlan scenario in this paperwe show how existence and uniqueness of fair solutions follows from log - convexity .stations and and ( i.e. for packet sizes where the packet transmission duration is 10 times larger than the phy idle slot duration ) . ] . ]the 802.11e standard extends and subsumes the standard 802.11 dcf ( distributed coordinated function ) contention mechanism by allowing the adjustment of mac parameters that were previously fixed . with 802.11 , on detecting the wireless medium to be idle for a period , each station initializes a counter to a random number selected uniformly in the set \{0 , ... , cw-1 } where cw is the contention window .time is slotted and this counter is decremented once for each slot that the medium is idle .an important feature is that the countdown halts when the medium becomes busy and only resumes after the medium is idle again for a period . on the counter reaching zero , the station transmits a packet .if a collision occurs ( two or more stations transmit simultaneously ) , cw is set to and the process repeated . 
on a successful transmission , cw is reset to the value and a new countdown starts for the next packet .again , each packet transmission in this phase includes the time spent waiting for an acknowledgement from the receiver .the 802.11e mac enables the values of ( called in 802.11e ) , and to be set on a per class basis for each station . throughout this paperwe restrict attention to situations where has the legacy value .in addition , 802.11e adds a txop mechanism that specifies the duration during which a station can keep transmitting without releasing the channel once it wins a transmission opportunity . in order not to release the channel , a sifs interval is inserted between each packet - ack pair .a successful transmission round consists of multiple packets and acks . by adjusting this time, the number of packets that may be transmitted by a station at each transmission opportunity can be controlled .a salient feature of the txop operation is that , if a large txop is assigned and there are not enough packets to be transmitted , the txop period is ended immediately to avoid wasting bandwidth .we consider an 802.11e wlan with stations . as described in , we divide time into mac slots , where each mac slot may consist either of a phy idle slot , a successful transmission or a colliding transmission ( where more than one station attempts to transmit simultaneously ) .let denote the probability that station attempts a transmission .the mean throughput of station is then shown in to be where and , ^t ] as the vector of attempt probabilities ranges over domain \times\cdots\times[0,\bar{\tau}_n] ] .this is a mild assumption .for example , suppose is set equal to .then where is the probability that there is a packet available for transmission when the station wins a transmission opportunity and so is related to the packet arrival rate .when a station is saturated we have .we note that the value here is similar to the quantity in also referred to as . by adjusting ( via the packet arrival process ) and/or , it can be seen that the value of can be controlled as required ._ log - convexity_. recall that a set is convex if for any and , there exists an such that . a set is log - convex if the set is convex .we begin in this section by assuming that , where denotes the all 1 s vector .this assumption is relaxed later on . 
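before turning to the proof , it may help to see the rate region numerically . the sketch below evaluates per - station throughputs from attempt probabilities with the usual slotted - time model of the kind described above ( idle , success and collision slots weighted by their durations ) and samples the two - station region ; the slot durations , payload size and sampling grid are illustrative choices , and the expression is the generic form rather than necessarily the exact one used in the paper .

```python
import numpy as np
from itertools import product

def throughputs(tau, L=8000.0, sigma=9e-6, Ts=1e-3, Tc=1e-3):
    """per-station throughput (bits/s) for attempt probabilities tau,
    using a slotted-time mean-slot-duration model; parameter values are
    illustrative only."""
    tau = np.asarray(tau, dtype=float)
    p_idle = np.prod(1.0 - tau)
    p_succ = np.array([t * p_idle / (1.0 - t) for t in tau])   # tau_n * prod_{k!=n}(1 - tau_k)
    p_coll = 1.0 - p_idle - p_succ.sum()
    mean_slot = sigma * p_idle + Ts * p_succ.sum() + Tc * p_coll
    return p_succ * L / mean_slot

# sample the two-station rate region and inspect its log
pts = np.array([throughputs([t1, t2])
                for t1, t2 in product(np.linspace(0.01, 0.99, 60), repeat=2)])
log_pts = np.log(pts)        # the set of these points traces out the log rate region
```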
for convenience we set with ] .the rate region is log - convex if and ] .this involves no loss of generality since is a continuous function of .note that the term in ( [ eq : tput ] ) cancels on both sides of ( [ eq : sdef ] ) so the log - convexity result is independent of this term .we proceed by postulating that is of the form as the right side of ( [ eq : first ] ) does not depend on any particular .the log - convexity question is whether we can find satisfying substituting from ( [ eq : soln ] ) into ( [ eq : delta ] ) , then using the first expression in ( [ eq : xdef ] ) , and defining , we will need to solve for a such that recalling hlders inequality for two non - negative vectors and , ,\end{aligned}\ ] ] we have using the second expression in ( [ eq : xdef ] ) that the right - hand side of ( [ eq : soln2 ] ) is positive and lower bounded by choosing it can be seen that this lower bound lies within the range of the left - hand side of ( [ eq : soln2 ] ) .considering the left - hand side of ( [ eq : soln2 ] ) in more detail , its second derivative is given by where product over an empty set is defined to be .since the second - derivative is positive for , it implies the ( strict ) convexity of the left - hand side of ( [ eq : soln2 ] ) . this quantity is unbounded and has range that includes .it follows that there exists a positive satisfying ( [ eq : soln2 ] ) , as required . indeed ,in general there may exist two values of solving ( [ eq : soln2 ] ) . to see thisobserve that the left -hand side is unbounded both as and as .the first - derivative is negative as and positive as , so we have a turning point , which due to the convexity of the function is unique .this turning point partitions the real line and two solutions to ( [ eq : soln2 ] ) then exist , one lying in and the other in .additionally , this argument also says that there exists at least one solution of ( [ eq : soln2 ] ) where .we have therefore established the following theorem .[ thm : one ] the rate region is log - convex . we can extend the foregoing analysis to situations where the station attempt probability is constrained , i.e. the vector of attempt probabilities ranges over \times\cdots\times[0,\bar{\tau}_n] ] and for every ] .these log - convexity results allow us to immediately apply powerful optimisation results to the analysis and design of fair throughput allocations for 802.11 wlans . first , using ( * ? ?* theorem 1 ) , the existence of a max - min fair solution immediately follows .we also have that any optimisation of the form can be converted into an optimisation where ( so , in particular , ) , ^t$ ] , and .provided and the are convex functions , the optimisation is a convex problem to which standard tools can then be applied . from this point of viewit now follows that we can naturally extend the congestion and contention control ideas of to the more general scenario considered in . in particular , for the standard family of utility fairness functions given for , and by we have is concave for all . in the casewe also get strict concavity of , and the existence and uniqueness of utility fair solutions immediately follows from our log - convexity result . 
for analysis of the boundary of the log rate - region also allows one to show uniqueness of the solution in the case of .in this paper we establish the log - convexity of the rate region in 802.11 wlans .this generalises previous results for aloha networks and has immediate implications for optimisation based approaches to the analysis and design of fair throughput allocation schemes in 802.11 wireless networks .p. gupta , a. l. stolyar , `` optimal throughput allocation in general random - access networks , '' _ proc .ciss _ , 2006 .k. kar , s. sarkar , l. tassiulas , `` achieving proportional fairness using local information in aloha networks , '' _ ieee trans . auto .control _ , 49(10 ) , pp . 18581862 , 2004 . j. w. lee , m. chiang , a. r. calderbank , `` jointly optimal congestion and contention control based on network utility maximimization , '' _ ieee comm .letters _ , 10(3 ) , pp .216218 , 2006 .d. malone , k. duffy , and d. leith , modeling the 802.11 distributed coordination function in nonsaturated heterogeneous conditions , " _ ieee / acm trans .networking _ , 15(1 ) , pp . 159172 , 2007 .p. clifford , k. duffy , j. foy , d. j. leith , and d. malone , modeling 802.11e for data traffic parameter design , " _ proc . rawnet _ , 2006 .v. a. siris , g. stamatakis , `` optimal cwmin selection for achieving proportional fairness in multi - rate 802.11e wlans , '' _ proc .wintech _ , 2006 .x. wang , k. kar , j. s. pang , `` lexicographic max - min fair rate allocation in random access wireless networks , '' _ proc .ieee cdc _ , 2006 .b. radunovic , j .- y .le boudec , a unified framework for max - min and min - max fairness with applications , " _ ieee / acm trans . networking _ , 15(5 ) , pp .10731083 , 2007 .
|
in this paper we establish the log - convexity of the rate region in 802.11 wlans . this generalises previous results for aloha networks and has immediate implications for optimisation based approaches to the analysis and design of 802.11 wireless networks .
|
the 2003 wmap discovery of large optical depths to electron scattering at z 15 motivated several numerical and analytical studies of early reionization .the goal of these models was to extend the classical picture of reionization to account for the large electron fractions at high redshifts while still accommodating lower redshift observations .in particular , sokasian , _et al . _ and ciardi , _et al . _ employed large scale cosmological simulations combining gas and dark matter dynamics with radiative transfer to follow the growth of ionized regions in the early igm .global electron densities in these calculations were integrated over redshift to compute the thomson scattering optical depth for a variety of scenarios .all these simulations assume several free parameters : uv escape fractions , pop iii stellar masses , the positive and negative feedback of one generation of uv sources upon the next , and the pop iii / pop ii rollover in mass spectrum and uv production with redshift .furthermore , there was no hydrodynamic response to the energy deposited into the gas by the passage of fronts in these models , seriously altering the evolution of the fronts themselves .the studies cited above postprocessed successive hydrodynamic snapshots of the igm with radiative transfer without energy deposition into the gas to evolve primordial h ii regions ( see for a summary of the latest algorithms applied to cosmological rt ) .the challenge of the next generation of early reionization simulations is to capture the radiation hydrodynamics of ionization and feedback physics on small scales to determine the final sizes and distribution of i - fronts in the large simulation volumes necessary for statistically accurate structure formation .upcoming observations able to discriminate between early reionization scenarios underscore the need for ab initio simulation of the early igm , in which reionization properly unfolds over many redshifts and generations of luminous objects .21 cm line observations in both emission and absorption ( by the square kilometer array ) could yield cosmic electron fraction profiles as a function of redshift , beyond current wmap and upcoming planck measurements that are limited to electron column densities . if foreground contamination can be overcome , these observations might also unveil the size and morphologies of early h ii regions .signatures of pop iii stars manifest as excesses in the near - ir cosmic background may soon be measured by balloon and satellite missions .jwst will also open the first direct observational window on protogalaxies with a few thousand pop iii stars at 15 z 20 .the escape of ionizing uv photons from primordial minihalos and protogalaxies is mediated by the hydrodynamical transitions of ionization fronts on sub - parsec scales , and failure to properly capture their breakout can alter the final extent of h ii regions on kiloparsec scales . in some cases statictransfer completely fails to predict the exit of fronts from early galaxies by excluding the gas motions that can free them .radiation hydrodynamical simulations can predict escape fractions in the next generation of models by following i - fronts as they begin deep within primordial structures and blossom outward to ionize the igm , accurately resolving their true final sizes and morphologies . 
coupled to reactive networks able to evolve primordial h chemistry , radiation hydrodynamicswill also better model the radiative feedback mechanisms known to operate in the early universe , which remain to be incorporated in detail in large scale calculations .local entropy injection by uv sources , lyman - werner dissociation of h in minihalos and protogalaxies , and catalysis of molecular hydrogen by free electrons are key processes governing the rise of early star populations and the high - redshift ionizing background . on small scalesradiation hydrodynamics will also resolve ionized gas outflows in h ii regions that facilitate the dispersal of metals from the first supernovae , exhibit dynamical instabilities potentially leading to clumping and further star formation , and limit the growth of black holes left in minihalos .resolving radiative feedback over a few generations of primordial stars will enable their inclusion in large simulations over many redshifts with confidence later on . on large scales radiation hydrodynamicswill be crucial to determine whether igm photoheating cascades from small to large scales through nonlinear dynamical evolution to affect structure formation at later redshifts .the expansion of cosmological ionization fronts through filaments and voids is also inherently hydrodynamical in nature , as is the photoevaporation of minihalos that can impede these fronts .static transfer can not reproduce the outflows confirmed by numerical simulations to enhance the photoionization of these structures and therefore understimates the advance of i - fronts into the early igm .to investigate the numerical issues confronting the incorporation of radiation hydrodynamics into future large scale structure evolution models we have developed an explicit multistep scheme for ionization front transport from a single point source in the zeus - mp hydrocode .the issues fall into two categories .the first is how to calculate photoionization rates everywhere on the numerical grid , whether by ray tracing , variable tensor eddington factors , flux limited diffusion , or monte carlo approaches for either single or multiple sources ( see for novel raytracing schemes for the enzo and flash adaptive mesh refinement ( amr ) codes ) . in section 2we examine the second issue : how to couple reaction networks and energy equations driven by ionization to the hydrodynamics , which has only recently begun to be examined by the cosmology community .our algorithm easily extends to multiple frequencies and can be readily interfaced with transfer techniques accommodating many point sources . in sections 4 and 5we present a comprehensive suite of static and hydrodynamic i - front test problems utilized to benchmark our code that can be applied to validate future methods . 
in particular , the hydrodynamic benchmarks are adopted from an analytical study of ionization fronts in power - law density profiles done by ( see for a thorough review of numerical and analytical studies of classical h ii regions ) .the tests encompass the range of i - front dynamics likely to occur in cosmological settings and will challenge the versatility and robustness of any code .they also expose many features of ionized flows that are exhibited by any density gradient .we examine in section 6 how individual zones approach ionization equilibrium as well as the timescales that govern each phase of i - front and ionized flow evolution in a variety of density regimes .we discuss how these timescales control the timestep advance of the numerical solution and explore avenues for future algorithm optimization .the impact of radiation pressure on i - fronts and flows is also reviewed in section 7 , and we provide an array of improved uv escape fraction calculations for pop iii stars in section 8 .our modified zeus - mp hydrocode solves explicit finite - difference approximations to euler s equations of fluid dynamics together with a 9-species primordial gas reaction network that utilizes photoionization rate coefficients computed by a ray - casting radiative transfer module .ionization fronts thus arise as an emergent feature of reactive flows and radiative transfer in our simulations and are not tracked by computing equilibria positions along lines of sight , as is done in many algorithms .the fluid dynamics equations are where , e , and the v are the mass density , internal energy density , and velocity of each zone and p (= ( - 1 ) e ) and * q * are the gas pressure and the von neumann - richtmeyer artificial viscosity tensor . represents radiative heating and cooling terms described below .the left - hand side of each equation is updated term by term in operator - split and directionally - split substeps , with a given substep incorporating the partial update from the previous substep .the gradient ( force ) terms are computed in the zeus - mp source routines and the divergence terms are calculated in the zeus - mp advection routines .the primordial species added to zeus - mp ( h , h , he , he , he , h , h , h , and e ) are assumed to share a common velocity field and are evolved by nine additional continuity equations and the nonequilibrium rate equations of where is the rate coefficient of the reaction between species j and k that creates ( + ) or removes ( - ) species i , and the are the ionization rates .the divergence term for each species is evaluated in the advection routines , while the other terms form a reaction network which is solved separately from the source and advection updates . 
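for concreteness , a heavily stripped - down , hydrogen - only instance of such a separately solved network is sketched below in python ; the approximate rate - coefficient fits , the fixed sub - stepping , and the function layout are illustrative assumptions and not the actual zeus - mp implementation .

```python
import numpy as np

RATES = {
    # approximate textbook fits, cgs units, for illustration only
    'recombination': lambda T: 2.59e-13 * (T / 1.0e4) ** -0.7,
    'collisional_ionization': lambda T: 5.85e-11 * np.sqrt(T) * np.exp(-157809.1 / T),
}

def chemistry_step(n_HI, n_HII, T, gamma_ph, dt, rates=RATES, n_sub=100):
    """sub-cycled update of a minimal hydrogen-only network (photoionization,
    collisional ionization, recombination); fixed equal sub-steps are an
    illustrative choice, not the adaptive criterion used in the actual code."""
    n_H = n_HI + n_HII
    dt_sub = dt / n_sub
    k_rec = rates['recombination'](T)
    k_ci = rates['collisional_ionization'](T)
    for _ in range(n_sub):
        n_e = n_HII                                   # charge conservation, H-only gas
        creation = (gamma_ph + k_ci * n_e) * n_HI     # H I -> H II
        destruction = k_rec * n_e * n_HII             # H II -> H I
        n_HII = min(max(n_HII + (creation - destruction) * dt_sub, 0.0), n_H)
        n_HI = n_H - n_HII                            # baryon conservation
    return n_HI, n_HII
```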
to focus our discussion on the gas dynamics of i -fronts our present calculations take the primordial gas to be hydrogen only , which can be ionic or neutral but not molecular .the rate equations reduce to where k , k , and k , are the rate coefficients for recombination , photoionization and electron collisional ionization of hydrogen and the n are the number densities .when the full reaction network is activated we enforce charge and baryon conservation at the end of each hydrodynamic cycle with the following constraints where f is the primordial hydrogen fraction , m is the hydrogen mass , and is the baryon density evolved in the original zeus - mp hydrodynamics equations .any error between the species or charge sums and is assigned to the largest of the species to bring them into agreement with .microphysical cooling and heating processes are included by an isochoric operator - split update to the energy density computed each time the reaction network is solved : where k is the photoionization rate described above , is the fixed energy per ionization deposited into the gas ( set to 2.4 ev as explained in ) , and , , , and are the recombinational , compton , collisional ionization , and collisional excitation cooling rates taken from ) . these four processesact together with hydrodynamics ( such as adiabatic expansion or shock heating ) to set the temperature of the gas everywhere in the simulations .the radiative transfer module computes k by solving the static equation of transfer along radial rays outward from a single point source centered in a spherical - polar coordinate geometry .the fact that the medium usually responds to radiation over much longer times than the light - crossing times of the problem domain permits us to omit the time derivative in the equation of transfer that would otherwise restrict the code to unnecessarily short timesteps .this approximation is violated very close to the central star by rapid ionizations that can lead to superluminal i - front velocities .these nonphysical velocities are prevented by simply not evaluating fluxes further from the central star than light could have traveled by that time in the simulation .our experience has been that this static form of the equation of transfer typically becomes valid before the i - front reaches the stromgren radius , and the code computes very accurate stromgren radii and formation times .photoionizations in any given cell in principle are due to direct photons from the central source through the cell s lower face as well as to diffuse recombination photons through all its faces .recombinations within a zone occur to either the ground state or to any excited state .we adopt the on - the - spot ( ots ) approximation that a case ( a ) recombination photon emitted in a zone is reabsorbed before escaping the zone by decomposing both the photoionization and recombination terms in the reaction network and equating the first and fourth terms , which cancel . 
herek and k are the recombination rate coefficients to the ground state and to all excited states while k and k are the rate coefficients for cell ionizations by central stellar photons and diffuse photons .taking ground - state recombination photons to be reabsorbed before they can exit the cell guarantees that no photons enter the cell from other locations in the problem , relieving us of the costly radiative transfer from many lines of sight that otherwise would be needed to compute the diffuse radiation entering the zone .while the two terms are set equal in the equation above , it should be recognized that ionizations typically occur much more quickly than recombinations .no error is introduced in the network because the much faster photoionizations will simply cycle out any ground - state recombinations over the timestep the solution is advanced , regardless of the processes true timescales .the ots approximation is valid anywhere there is a sizeable uv photon mean free path across a zone , which is the case within the front itself but not usually in the hot ionized gas behind it .case ( a ) photons emitted from an ionized cell might reach the front itself before being absorbed because of the very low neutral fraction in their path .such photons do not advance the front , however , because the neutral atom they leave behind will remove another source or diffuse photon that would have reached the front . since case ( a ) recombinations_ globally _ balance diffuse ionizations they can still be thought to cancel in any _ single _ zone on average , even in those that are ionized . however , there are important scenarios in which the ots approximation fails to reproduce the true ionization of a cosmological structure . for example, dense gas clumps in primordial minihalos cast shadows in the uv field from the central source .the ots paradigm would produce shadows that remain too sharp by not accounting for the recombination photons that would leak laterally into these shadows and soften them .failure to capture the correct shadowing may have important consequences on the growth of instabilities expected to develop in these ionization fronts .these instabilities are of interest for their potential to promote clumping that could later collapse into new stars , especially if mixed with metals from a previous generation .methods for accurate and efficient simulation of recombination radiation are being studied in connection with 3-d simulations of minihalo photoevaporation under development .only a single integration of the transfer equation along a radial ray from the source at the coordinate center is necessary to compute k in a cell .the static transfer equation is recast in flux form for simple solution in spherical coordinates : where is the inverse mean free path of a uv photon in the neutral gas simple integration yields the radial flux at the inner face of each zone on the grid : the ionization rate in a zone is calculated in a photon - conserving manner to be the number of photons entering the zone per second minus the number exiting the zone per second hence , the ionization rate in the cell is n must be converted into the rate coeficient required by the chemistry equations . 
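A minimal sketch of this ray integration and of the photon-conserving rate construction (including the conversion to a rate coefficient, which the next paragraph spells out) could look as follows. It assumes a single monochromatic source and one grey cross section, and the variable names are ours rather than the code's.

```python
import numpy as np

def photoionization_rate_coeff(r_edges, n_HI, sigma, Ndot_star, t_now,
                               c_light=2.998e10):
    """Photon-conserving photoionization rate coefficient along one radial ray.

    Assumptions of this sketch: a single monochromatic point source at the
    origin emitting Ndot_star photons/s, one grey cross section sigma [cm^2],
    zone edges r_edges [cm] (length N+1), and zone neutral densities n_HI
    [cm^-3] (length N).
    """
    dr = np.diff(r_edges)
    dtau = sigma * n_HI * dr                                   # optical depth per zone
    tau_inner = np.concatenate(([0.0], np.cumsum(dtau)[:-1]))  # tau at inner faces

    # photons per second crossing the inner and outer face of every zone
    Ndot_in = Ndot_star * np.exp(-tau_inner)
    Ndot_out = Ndot_star * np.exp(-(tau_inner + dtau))

    # causality guard: no fluxes are evaluated beyond the light-travel radius,
    # which is what prevents superluminal i-front velocities (see the discussion above)
    dark = r_edges[:-1] > c_light * t_now
    Ndot_in[dark] = 0.0
    Ndot_out[dark] = 0.0

    # photon conservation: ionizations/s in a zone = photons entering - exiting
    Ndot_abs = Ndot_in - Ndot_out

    # convert to the rate coefficient k_ph [1/s] used by the chemistry network,
    # via k_ph * n_HI * V_zone = Ndot_abs
    V_zone = 4.0 * np.pi / 3.0 * (r_edges[1:]**3 - r_edges[:-1]**3)
    return Ndot_abs / np.maximum(n_HI * V_zone, 1e-300)
```

Because the rate in each zone is built from the difference of photon counts across its two faces, the number of photoionizations along the ray always matches the number of photons absorbed, which is the photon-conserving property discussed below.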
in the ionization term of the reaction equation k is the ionization rate coefficient , is the number of ionizations per volume per second , and n is the number of ionizations per _ zone _ per second , with the three being related by n is therefore converted to k by he i and ii ionization coefficients as well as lyman - werner dissociation rates are also evaluated by this procedure .our prescription for generating k is photon - conserving in that the number of photons emitted along any line of sight will always equal the number of photoionizations in that direction over any time interval .this formulation enables the code to accurately advance i - fronts with significantly less sensitivity to resolution than methods which solve the transfer equation to compute a zone - centered flux or intensity to determine k : this non - conservative formalism does not guarantee n to equal n along a line of sight except in the limit of very high resolution .such methods can require hundreds or thousands of radial zones to converge to proper i - front evolution , making photon conservation a very desirable property . however , photon - conserving methods are not necessarily resolution independent , as implied at times in the literature .for example , if a grid fails to resolve a density peak then even a photon - conserving algorithm will overestimate the advance of the front ( even in a static problem ) because of the strong n dependence of recombination rates on material densities .the resolution needed for correct i - front evolution in hydrodynamical simulations is governed by the resolution necessary for the gas dynamics to converge , since this determines the accuracy of the densities encountered by the front .nevertheless , photon - conservative schemes are still the method of choice in gas dynamical i - front simulations because the grid resolution non - conservative methods would require to follow the front can be much greater than the resolution needed just for hydrodynamic convergence .each equation in the reaction network can be rewritten as where c is the source term representing the formation of species i while d is the source term describing its removal .a fully - explicit finite - differenced solution would be with c and d ( and the n comprising them ) being evaluated at the current time .a fully- implict update would take the source terms to be at the advanced time , requiring iteration to convergence for the n the chemistry rate coefficients driving the reaction network exhibit a variety of different timescales that make the network numerically stiff .the shortest reaction times in the network would force a purely explicit solution into much smaller timesteps than accuracy requires , while a fully - implicit scheme best suited to the solution of stiff networks demands matrix iterations of excessive cost in a 3d simulation .the additional costs of matrix iteration in implicit schemes can be offset by the much longer timesteps they permit because the stability of the network is freed from its shortest timescale .however , a disadvantage of fully - implicit approaches is that they require simultaneous solution of the reaction network together with the radiative transfer equation ( needed for the ionization rates in the network ) and the isochoric energy equation ( which sets the temperatures utilized in the rate coefficients ) , all evaluated at the advanced time .while such methods evolve ionization fronts with very high accuracy in 1d , photon conservation is sacrificed in the process of 
achieving concurrency between the network , energy , and transfer equations at the future time .hence , these algorithms may only achieve their superior accuracy if high problem resolutions are employed ( 8000 radial zones in the theory validation runs ) .photon - conserving fully - implicit stencils which can operate accurately at much lower resolutions may be possible with further investigation .furthermore , while in principle a fully - implicit network can be evolved over an entire hydrodynamical time , this strategy would accrue significant errors in many density regimes . reported that in some test cases it was necessary to evolve their network by no more than a few photoionization timescales in order to compute energy deposition correctly .in such instances the additional costs of iteration in implicit schemes outweighs their advantage in accuracy over explicit methods because they must ultimately both perform a comparable number of cycles to complete a problem .it should also be noted that highly nonlinear and nonmonotonic primordial heating / cooling rates have been observed to retard or prevent newton - raphson convergence in implicit cosmological calculations .we instead adopt the intermediate strategy of sequentially computing each n , building the source and sink terms for the i species update from the i - 1 ( and earlier ) updated species while applying rate coefficients evaluated at the current problem time .the order of the updates is h , h , he , he , he , e , h , h , and h .this approach allows direct solution of the densities with sufficient accuracy to follow i - fronts in most density regimes with reasonable execution times , which are sometimes much shorter than for implicit schemes .et al . _ found a speedup of ten in sequential species updates over an implicit stiff solver package in cosmological test cases involving the steady buildup of igm uv fluxes from a metagalactic background .two timescales in general govern the evolution of h ii regions .the many reaction rate timescales can be consolidated into a single chemistry timestep defined by formulated to ensure that the fastest reaction operating at any place or time in the problem determines the maximum time by which the reaction network may be accurately advanced .the second timescale is the heating / cooling time t connected to the hydrodynamic response of the gas to the reactions .the ratio of the two times can depend on the evolutionary phase of the h ii region or even on the current ionization state in a single zone .in general , reaction times are shorter than heating times as the ionization front propagates outward from the central uv source but cooling times can become shorter than recombination times after shutdown of the central source .the latter circumstance can lead to nonequilibrium cooling in h ii recombination regions , which can remain much more ionized at low temperatures than would be expected for a gas in thermodynamic equilibrium , an effect which has been observed in cosmological h ii region simulations .many strategies have been devised to interleave reaction networks and radiative transfer with hydrodynamics .after computing the global minimum courant timestep for the problem domain , anninos _et al ._ evolve the species in each cell by advancing the rate equations by a tenth of the lesser of the chemistry and heating / cooling timescales for that cell until a tenth of the cell s heating / cooling timescale is covered . 
at this point the cumulative energy gained or lost over the chemistry updates is added to the cell's gas energy in the microphysical heating / cooling substep described earlier , but neither velocities nor densities are updated . this cycle is then repeated over consecutive heating timesteps in the cell until the global minimum courant timestep is covered . cells with the fastest kinetics require the most chemistry subcycles over a heating time : more slowly reacting cells covering their heating time with fewer subcycles are quiescent during the subsequent cycles required by the faster cells . likewise , kinetics and energy updates in cells traversing the global hydrodynamical time in fewer heating cycles are halted over the additional subcycles the more quickly heating or cooling cells demand . every cell in the grid undergoes the same number of subcycles ( which continue until the last cell has covered the global courant timestep ) , but updates in a given cell are suspended after it has crossed this timestep . new photoionization rates are calculated every chemistry timestep by a call to a radiative transfer module , but the other rate coefficients remain constant over a heating timestep because they depend only on temperature ; they are updated at the beginning of the next heating cycle with the new gas temperature . at the end of the hydrodynamical timestep full source and advective updates of velocities , energies , and conserved total baryonic densities are performed . although sufficient for the slowly rising uv metagalactic backgrounds in the anninos _ et al . _ calculations , this subcycling approach does not accurately simulate the growth of ionization fronts . proper front capturing is sensitive to velocities that build up over the heating timestep , and these are not correctly computed by the anninos _ et al . _ scheme because it does not update velocities until many such heating times have passed . the order of execution of our algorithm is as follows : first , the radiative transfer module is called to compute k via eq . ( [ eqn : kphderiv ] ) and to determine the smallest heating / cooling timestep on the grid . the grid minimum of the courant time is then calculated and the smaller of the two is adopted as t . next , from this same set of k the shortest chemistry timestep on the grid is calculated .
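Before the description continues in the next paragraph, the hierarchy just outlined can be sketched schematically. Every helper below is a trivial stand-in for the corresponding ZEUS-MP module, and the numbers they return are placeholders so the sketch runs; none of this is the actual implementation.

```python
import numpy as np

# trivial stand-ins for the actual modules; the returned values are placeholders
def radiative_transfer(state, t_now):        # photoionization rate k_ph per zone [1/s]
    return np.full_like(state["n_HI"], 1.0e-12)

def heating_cooling_time(state, k_ph):       # per-zone heating/cooling time [s]
    return np.full_like(state["n_HI"], 3.0e7)

def courant_time(state):                     # per-zone Courant time [s]
    return np.full_like(state["n_HI"], 3.0e8)

def chemistry_time(state, k_ph):             # set by the fastest reaction per zone [s]
    return np.full_like(state["n_HI"], 3.0e6)

def update_network_and_energy(state, k_ph, dt):   # network + isochoric energy update
    return state

def source_and_advection_update(state, dt):       # full hydrodynamic update
    return state

def advance_one_step(state, t_now):
    """One pass of the update hierarchy described here and in the next paragraph."""
    k_ph = radiative_transfer(state, t_now)

    # global timestep: lesser of the minimum heating/cooling and Courant times
    dt_hydro = min(heating_cooling_time(state, k_ph).min(),
                   courant_time(state).min())

    # subcycle the chemistry and energy updates over dt_hydro; only one
    # subcycle is taken if the chemistry time exceeds the hydro step
    covered = 0.0
    while covered < dt_hydro:
        dt_chem = min(chemistry_time(state, k_ph).min(), dt_hydro - covered)
        if dt_chem <= 1e-16 * dt_hydro:       # guard against roundoff stalling the loop
            break
        state = update_network_and_energy(state, k_ph, dt_chem)
        covered += dt_chem
        k_ph = radiative_transfer(state, t_now + covered)   # fresh rates each subcycle

    # full source and advection updates of velocities, energies and densities
    return source_and_advection_update(state, dt_hydro), t_now + dt_hydro

state = {"n_HI": np.full(128, 1.0e-3)}
state, t = advance_one_step(state, 0.0)
```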
the species densities and gas energy in _ all _ cells are then advanced over this timestep , the transfer module is called again to compute a new chemistry timestep , and the network and energy updates are performed again .the n and energy are subcycled over successive chemistry timesteps until t has been covered , at which point full source and advective updates of velocities , energies , and total densities are performed .a new t is then determined and the cycle repeats .if the chemistry timestep is longer than the global hydrodynamical time the reaction network is only subcycled once .this more restrictive hierarchy of rate solves and hydrodynamical updates is necessary to compute the correct velocities in each zone with the passage of the front over the wide range of density regimes discussed in section 3 .note that cooling times which are shorter than recombination times in the problem are easily handled because the code will simply cycle the reactions in each cell once over the hydrodynamic timestep .considerable experimentation with alternative hierarchies of kinetic and hydrodynamical updates and choices of timestep control ( some involving photoionization times ) proved them to be less accurate or robust . performed analytical studies of 1-d ionization fronts from a monochromatic source of photons centered in a radial density profile with a flat central core followed by an r dropoff : they considered photon rates that would guarantee that the stromgren radius r of the front if the entire medium was of uniform density n is greater than r , noting that if r r that the front would evolve as if it were in a constant density .their analysis of i - front propagation down the radial density gradients revealed that there is a critical exponent ^{-1 } \label{eqn : wcrit } \vspace{0.1in}\ ] ] below which the front executes the classic approach to a stromgren sphere of modified radius ^{1/(3 - 2\omega)}\left(\frac{r_{s}}{r_{c}}\right)^{2\omega/(3 - 2\omega)}\vspace{0.20in}\ ] ] at which point it reverts from r - type to d - type and continues along the gradient , building up a dense shocked shell before it . herer is the classical stromgren radius the ionization front would have in a uniform density medium if 1.5 the front remains d - type throughout its lifetime and continues to accumulate mass in its shell , expanding as ^{4/(7 - 2\omega ) } \label{eqn : r_w1 } \vspace{0.10in}\ ] ] where c is the sound speed in the ionized gas . 
if = 1.5 the shock and front coincide without any formation of a thin neutral shell as in the 1.5 profiles .when 1.5 the d - type front will revert back to r - type and quickly overrun the entire cloud .since i - fronts ultimately transform back to r - type in any cloud with density dropoffs steeper than 1.5 this power constitutes the critical point for eventual runaway ionization .as expected , if = 0 then r becomes r and r(t ) exhibits the t expansion of an ionization front in a uniform medium .fronts descending gradients steeper than r never slow to a stromgren radius or transform to d - type .remaining r - type , they quickly ionize the entire cloud , leaving behind an essentially undisturbed ionized density profile because they exit on timescales that are short in comparison to any hydrodynamical response of the gas .completely ionized and at much higher pressures , the entire cloud begins to inflate outward at the sound speed of the ionized material .however , the abrupt core density dropoff left undisturbed by the rapid exit of the front develops a large pressure gradient because of the equation of state in the nearly isothermal postfront gas .the sharpest pressure gradient is at the ionized core s edge at r r .this edge expands outward in a pressure wave which quickly steepens into a shock that overtakes the more slowly moving outer cloud regions .the velocity of this shock depends on the initial density dropoff : if 3 then {i}t \label{eqn : om2 } \vspace{0.1in}\ ] ] if = 3 then and if 3 ^{2/\left(\delta+2-\omega\right ) } \label{eqn : om5 } \vspace{0.1in}\ ] ] where is the initial core radius and is an empirically fit function of : the shock has a constant velocity for 3 , weakly accelerates for = 3 , and strongly accelerates if .this is in agreement with what would be expected for the mass of each cloud . for 3the cloud mass is infinite so a central energy source could not produce gas velocities that increase with time .the cloud mass becomes finite just above = 3 , the threshold for the ionized flow to exhibit a positive acceleration .we present a series of static i - front tests of increasing complexity for initial code validation . in line with the non - hydrodynamical nature of these problems ,velocity updates were suspended but the heating and cooling updates in eq ( [ eqn : egas ] ) were performed .the energy updates were necessary to evolve the reaction rates according to temperature as well as to regulate the timesteps over which the reaction network was advanced .the initial gas temperature in all the static tests was set to 10 k. [ fig : t2 ioniz ] the simplest test is an r - type ionization front due to a monochromatic point source in a uniform infinite static medium in which no recombinations occur . the radius of the spherical front is easily computed by balancing the number of emitted photons with the number of atoms in the ionized volume : the front will expand forever in the absence of recombinations but will eventually slow to zero velocity as t .we compare this solution to our code results in fig [ fig : t1 radii / ioniz ] for n = 10 , = 10 s , and outer boundary of 64 kpc for grid resolutions of 64 , 128 , 256 , and 512 radial zones .the position of the front is defined to be at the outermost zone whose neutral fraction has decreased to 50% .the time evolution of the front is shown in fig [ fig : t1 radii / ioniz ] . for a given resolutionthe algorithm is within one zone of the exact solution after 2.0 10 yr . 
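For completeness, the analytic radius used in this first comparison and the 50% neutral-fraction diagnostic for locating the front are simple to write down. The density and photon rate in the usage lines are placeholder magnitudes only, since the exponents quoted in the text did not survive extraction.

```python
import numpy as np

def front_radius_no_recomb(Ndot, n_H, t):
    """R-type front radius when recombinations are switched off: every
    emitted photon ionizes one new atom, so (4*pi/3) * r**3 * n_H = Ndot * t."""
    return (3.0 * Ndot * t / (4.0 * np.pi * n_H)) ** (1.0 / 3.0)

def measured_front_radius(r_centers, x_neutral, threshold=0.5):
    """Diagnostic used in the text: the outermost zone whose neutral
    fraction has dropped to 50% (or another chosen threshold)."""
    idx = np.where(x_neutral <= threshold)[0]
    return r_centers[idx[-1]] if idx.size else 0.0

# placeholder magnitudes (illustrative only, not the values of the test)
Ndot, n_H = 1.0e54, 1.0e-3            # photons/s, cm^-3
t = 2.0e5 * 3.156e7                   # an illustrative time (2e5 yr) in seconds
print(front_radius_no_recomb(Ndot, n_H, t) / 3.086e21, "kpc")
```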
as expected for a static homogeneous medium , the photon - conserving radiative transfer correctly propagates the r - type front independently of numerical resolution .the neutral fraction profiles are extremely sharp because there are no recombinations , and they drop essentially to zero behind the front . including recombinations yields the well - known result for an i - front in a uniform infinite static medium : ^{1/3 } \label{eqn : r2(t)}\vspace{0.1in}\ ] ] where r is the stromgren radius and ^{-1 } \vspace{0.1in}\ ] ] taking again n = 10 and = 10 s but with an outer radius of 10 kpc , we plot the results of our algorithm for 64 , 128 , 256 , and 512 radial zones along with the analytical solution in fig [ fig : t2 radii / temp ] .the computed curves exhibit excellent agreement with theory , with a maximum error of 7.5% between the 64-zone solution and eq .( [ eqn : r2(t ) ] ) . the code results are again clustered closely together because of photon conservation , and after several recombination times they converge to a stromgren radius of 7.72 kpc , within 0.3% of the r = 7.70 kpc predicted by eq .( [ eqn : r_s ] ) .the small departure from theory at intermediate times evident in fig [ fig : t2 radii / temp ] arises because eq .( [ eqn : r2(t ) ] ) assumes a constant (t ) throughout the evolution of the front . in reality, the h ii region has a temperature structure that changes over time , as shown in fig [ fig : t2 radii / temp ] . atany given time the temperature decreases from its maximum near the point source to its minimum at the i - front , and this drop in temperature with radius can grow to more than 10000 k at later times .we adopt an average temperature of 20000 k for in eq ( [ eqn : r2(t ) ] ) .the postfront temperatures are greatest near the central star because the gas there has undergone more cycles of recombination and photoionization than the gas near the front .each cycle increments the gas temperature upward because lower energy electrons are preferentially recombined with a net deposition of energy into the gas .successive profiles steadily rise in temperature over time for the same reason .the temperature profiles continue to rise well after the stromgren radius is reached as seen in fig [ fig : t2 radii / temp ] because there are no gas motions in which pdv work can be performed and because collisional excitation and ionization processes are suppressed by the decline of postfront neutral fractions with time .neutral fractions fall as rising temperatures slow down recombinations .the rising gas temperatures in this static problem eventually stall when they sufficiently quench recombinations , well before other processes such as bremmstrahlung cooling arise .inclusion of recombinations leads to the ionization structure of this static h ii region in fig [ fig : t2 ioniz ] ( which in part is determined by the temperature profile ) , in contrast with the very simple neutral fraction profiles of the previous test . extend this classical h ii region problem to a ionizing point source in a uniform medium undergoing cosmological expansion in a friedmann - robertson - walker universe , but this test can not be performed by zeus - mp at present because scale factors are not implemented in the code .et al . _ studied the hydrodynamics of i - fronts in power - law gradients but not their time - dependent propagation in static profiles .solutions for r - type fronts exiting flat central cores into r gradients do exist for static media but in general are quite complicated . 
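For orientation, the uniform-medium quantities that these power-law solutions generalize can be collected in a few lines. The case-B recombination coefficient below is the standard value near 10^4 K and is held fixed, whereas the code described above lets it vary with the evolving temperature, which is the source of the small departures just discussed.

```python
import numpy as np

def stromgren_radius(Ndot, n_H, alpha_B=2.59e-13):
    """Standard case-B Stromgren radius [cm] for a pure hydrogen medium;
    alpha_B [cm^3/s] is the case-B recombination coefficient near 1e4 K."""
    return (3.0 * Ndot / (4.0 * np.pi * alpha_B * n_H**2)) ** (1.0 / 3.0)

def recombination_time(n_H, alpha_B=2.59e-13):
    """Recombination time [s] of fully ionized gas at density n_H."""
    return 1.0 / (alpha_B * n_H)

def front_radius_uniform(Ndot, n_H, t, alpha_B=2.59e-13):
    """Textbook i-front radius in a uniform, static medium:
    r(t) = R_s * (1 - exp(-t / t_rec))**(1/3)."""
    R_s = stromgren_radius(Ndot, n_H, alpha_B)
    t_rec = recombination_time(n_H, alpha_B)
    return R_s * (1.0 - np.exp(-t / t_rec)) ** (1.0 / 3.0)
```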
found that in = 1 franco _et al ._ static density profiles the radius of the front advances according to \right\ } \label{eqn : r4(t ) } \vspace{0.1in}\ ] ] where r=l / k is the stromgren radius and w(x ) is the principal branch of the lambert w function .w(x ) is a solution of the algebraic equation and must be evaluated numerically . here , and , where is the recombination time in the core and c is the clumping factor .( [ eqn : r4(t ) ] ) describes the approach of the bounded front to its modified stromgren radius r with time .w(x ) in general is multivalued in the complex plane ; its principal branch w(x ) is single - valued over the range [ -1/e,0 ] , monotonically increasing from -1 to 0 over this interval . as the time in eq .( [ eqn : r4(t ) ] ) evolves from zero to infinity the argument of w(x ) advances from -1/e to 0 , guaranteeing that the i - front expands from zero to r in approximately twenty recombination times .several commercial algebraic software packages can compute w(x ) ; we instead utilized a recursive algorithm by halley : adopting the first two terms in the series expansion for w(x ) as an initial guess for each value of x this method typically converged to an error of less than 10 in 2 - 4 iterations per point .we show in the left panel of fig [ fig : w1/w2 ] the radius of the ionization front on uniform grids of 25 , 50 , 100 , 200 , 500 , and 1000 zones with inner and outer boundaries of 1.047 10 cm and 5.6 10 cm , respectively .we set n = 10 , r = 2.1 10 cm , = 2.4773 10 ( constant with temperature ) , and = 5.0 10 s , which results in a core recombination time of 0.128 yr and a final stromgren radius of 5.03 10 cm .zeus - mp converges to within 4% of eq .( [ eqn : r4(t ) ] ) by 500 radial zones at early times and to within 1% past 1.5 yr ( on the scale of the graph , the 500 zone curve is essentially identical to the 1000 zone plot ) .the disagreement between theory and simulation clearly illustrates that photon - conserving schemes can fail to properly evolve ionization fronts if important density features are not resolved , in this case the abrupt falloff in density at the edge of the central core .nevertheless , the code still exhibits good agreement ( 10% ) with the analytical result with relatively few ( 50 ) zones .the evolution of an r - type front in a static = 2 franco density field in general involves complex lambert w functions with several branches but reduces to the relatively simple unbounded solution ^{1/2 } \label{eqn : r3(t)}\vspace{0.1in}\ ] ] provided that we plot the position of the ionization front in the right panel of fig [ fig : w1/w2 ] on a uniform grid for the resolutions indicated , with outer radius 0.8 kpc , n = 3.2 , r = 91.5 pc , and a constant = 2.4773 10 , which yields a recombination time in the core of 0.04 myr and = 9.55 10 s .the solutions assume a specific temperature for the h ii region that does not evolve with time , so was set constant in our simulation .our zeus - mp results converge to within 10% of eq .( [ eqn : r3(t ) ] ) by 2000 radial zones ( and are within 2 - 3% over most of the time range ) with = 9.65 10 s .an interesting aspect of the analytical solution is its sensitivity to the requirement that is constant , as seen in the (t ) plot in the right panel of fig [ fig : w1/w2 ] .recombinations are supressed in a static h ii region whose temperature varies with radius and increases with time , freeing the front to advance more quickly than expected from eq .( [ eqn : r3(t ) ] ) .r(t ) is also strongly divergent 
from eq .( [ eqn : r3(t ) ] ) for photon rates above or below eq .( [ eqn : ndot ] ) , as seen in the left panel of fig [ fig : w2/w2long ] .we set n = 10 , r = 2.1 10 cm , and = 2.4773 10 in a simulation with a 172-zone ratioed grid with an outer radius of 5.5 pc .these much higher and more compact central densities were chosen for their relevance to high - redshift cosmological minihalos and are more computationally demanding .the ratioed grid is defined by the requirements where n is the number of radial zones and r was chosen to be small in comparison to 5.5 pc but large enough to avoid coordinate singularities at the origin .we applied a grid ratio = 1.03 to concentrate zones at the origin in order to resolve the central core and density drop .our choice of parameters sets = 3.844 10 s and t = 0.128 yr . as shown in fig [ fig : w2/w2long ] , the ionization front in our calculation stalls for this value of but agrees with eq .( [ eqn : r3(t ) ] ) to within 5% if we change to 4.314 10 s , a difference of 12% .if the photon rate is increased another 2% the front exits the grid much faster than predicted by eq .( [ eqn : r3(t ) ] ) and if it is decreased by 2% the front is halted , implying that the rate set by eq .( [ eqn : ndot ] ) is the threshold for breakout from an r core envelope .this result is confirmed by substituting eq .( [ eqn : ndot ] ) into eq .( [ eqn : r_s ] ) to compute the r appearing in eq .( [ eqn : wcrit ] ) , which yields = 2 .this gradient is just steep enough for the central flux of eq .( [ eqn : ndot ] ) to be unbounded , with the position of the front evolving according to the power - law eq .( [ eqn : r3(t ) ] ) .we note that while the front escapes for , it will slow down for 3 because the cloud mass is infinite but approach the speed of light for 3 because of finite cloud mass .the eventual slowing of the front in an = 2 cloud for = 5.0 10 is confirmed at large radii in the right panel of fig [ fig : w2/w2long ] by extending the profile used above to an outer boundary of 1500 pc in a 500 ratioed - zone simulation with = 1.01 .as in the 1-d numerical tests , a source of ionizing photons with = 5.0 s was centered in a flat neutral hydrogen core with r = 2.1 cm , n = 10 , and an r dropoff beyond r .h was set to 17.0 ev and the grid was divided into 8077 uniform radial zones with inner and outer boundaries at 1.047 cm and 5.50 pc , respectively . although not necessary to the benchmarks , for completeness we employed the same interstellar medium ( ism ) cooling curve of utilized by in place of the three last terms of eq . ( [ eqn : egas ] ) .the photon energy and cooling curve together set the postfront temperature and therefore sound speed c of the ionized gas .our choice of cooling curve and = 2.4 ev led to postfront temperatures of 17,745 k and sound speeds of 15.6 km s in all our runs , in contrast to the artificially set 10,000 k temperatures and 11.5 km s sound speeds of the runs .we applied c = 15.6 km s to the analytical expressions above for r(t ) and r(t ) .the initial gas temperature in all the hydrodynamical tests was set to 100 k. 
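As a brief implementation aside to the static ω = 1 comparison above, the Halley iteration used to evaluate the principal branch of the Lambert W function is compact enough to reproduce here. The two-term series initial guess follows the text; the tolerance and iteration cap are assumptions of this sketch.

```python
import numpy as np

def lambert_w0(x, tol=1e-12, max_iter=40):
    """Principal branch W0 of the Lambert W function by Halley iteration.

    Solves w * exp(w) = x over the range (-1/e, 0] needed by the bounded
    solution above; the initial guess is the first two terms of the series
    expansion, W0(x) ~ x - x**2.
    """
    w = x - x * x                              # two-term series initial guess
    for _ in range(max_iter):
        ew = np.exp(w)
        f = w * ew - x                         # residual of w * exp(w) = x
        # standard Halley update for this equation
        w_next = w - f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        if abs(w_next - w) <= tol * max(abs(w_next), 1e-30):
            return w_next
        w = w_next
    return w

# sanity check: w * exp(w) should recover x
for x in (-0.3, -0.1, -1e-3):
    w = lambert_w0(x)
    print(x, w, w * np.exp(w))
```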
it should be noted that the times appearing in r(t ) and r(t ) are not taken from when the central source switches on .r(t ) is the position of the ionization front after the initial stromgren sphere of radius r has formed while r(t ) is the location of the ionized core shock after one sound crossing time across the core so these values must be subtracted from the total problem time when comparing the formulae to code results .stromgren sphere formation times t and radii emerging from the run outputs are compiled in table 1 . in all casesthe code computed stromgren radii within a zone width of the predicted r .ccc 1.0 & 5.970e16 & 3.142e09 + 1.2 & 7.290e16 & 8.730e08 + 1.4 & 1.015e17 & 1.035e09 + 1.45 & 1.148e17 & 1.102e09 + 1.5 & 1.337e17 & 1.121e09 + the derivation for r(t ) does not account for the hydrodynamical details of the breakout of the shock through the i - front so the formula is an approximation which improves as the front grows beyond r . similarly , r(t ) is also an approximation which is increasingly accurate as the core shock expands beyond r .column 1 of fig [ fig : 123profiles ] shows the density , velocity , and temperature profiles of a classic d - type ionization front expanding in an = 1 power law density for the problem times listed .zeus - mp correctly predicts the formation of a d - type front in this gradient ; the confinement of the front behind the leading shock is apparent from the abrupt temperature drop from 17800 k in the front to 2000 k in the shock .the front decelerates as predicted by eq .( [ eqn : r_w1 ] ) , from 9 km s to 6.5 km s over the simulation time .the dense shell thickens as the shock accumulates neutral gas , and the rapid decline in postfront densities as the problem evolves ( especially in comparison to the initial profile ) reveals the efficiency with which the ionized flow evacuates the cloud core .the ionized density profile is flat because any density fluctuations initially present in the isothermal gas become acoustic waves that smooth the variations on timescales that are shorter than the expansion time of the front ( which is subsonic with respect to the 17800 k gas ) .the postfront gas remains isothermal even though central densities fall and the front performs pdv work on the shock .this behavior can be understood from the various timescales governing the evolution of the gas energy density in the ionized flow we plot photoionizational heating , pdv work , radiative cooling , and recombinational timescales defined by along with the hydrodynamical courant time in fig [ fig : tscl1 ] as a function of radius for the = 1 flow .the shortest timescales are associated with the most dominant terms on the right hand side of eq .( [ eqn : eneqn ] ) .the heating and cooling times are identical to within 0.1% behind the d - type front , with recombination times that are slightly shorter and pdv work times that are three orders of magnitude greater .equal heating and cooling times ensure that these processes balance in the energy updates and do not change the gas temperature , and the much longer pdv timescales demonstrate that the work done by the flow behind the front is small compared to the gas energy . 
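The precise definitions behind these timescales did not survive extraction, but the conventional estimates below convey what is being compared; everything here, including the fixed case-B coefficient, is an assumption of the sketch rather than the paper's exact expressions.

```python
import numpy as np

def characteristic_times(e_gas, gamma_heat, lambda_cool, p, div_v,
                         n_e, dx, v, c_s, alpha_B=2.59e-13):
    """Rough per-zone timescales of the kind compared in the text.

    Conventional estimates: internal energy over the relevant rate for
    heating, cooling and PdV work, the recombination time 1/(alpha_B * n_e),
    and the Courant time. All inputs are per-zone numpy arrays in cgs units.
    """
    tiny = 1e-300
    t_heat = e_gas / np.maximum(gamma_heat, tiny)            # photoheating
    t_cool = e_gas / np.maximum(lambda_cool, tiny)           # radiative cooling
    t_pdv = e_gas / np.maximum(np.abs(p * div_v), tiny)      # PdV work
    t_rec = 1.0 / np.maximum(alpha_B * n_e, tiny)            # recombination
    t_cour = dx / (np.abs(v) + c_s)                          # hydrodynamic Courant
    return t_heat, t_cool, t_pdv, t_rec, t_cour
```

The smallest of these in the zone currently being ionized is what limits the global timestep while the front is still on the grid, as discussed later in the timescale analysis.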
even though the ionized densities fall over time they still cause enough recombinations for the central star to remainenergetically coupled to the flow by new ionizations and maintain the gas temperatures against radiative cooling and shock expansion .the early evolution of the front in the r envelope is shown in the left panel of fig [ fig : w3front ] with inner and outer problem boundaries of 1.047 cm and 0.3 pc at resolutions of 250 , 500 , 1000 , and 2000 uniform radial zones .the curves are ordered from lower right to upper left in increasing resolution .the front exits the core in 0.05 yr through an essentially undisturbed medium since this is much shorter than the dynamical time of the gas .the plots converge as the core to envelope transition becomes better resolved on the grid .zeus - mp reproduces the expected slowdown of the r - type front in the central core and its rapid acceleration as it exits the density gradient , having never made a transition from r - type to d - type .notice that beyond 0.1 pc the slopes of all the curves are restricted to the speed of light because the static approximation of radiative transfer breaks down in the rarified dropoff . as explained in section 6 , the mean free path of ionizing photons abruptly becomes comparable to the size of the grid at that radius and an unrestricted static approximation would permit nonzero photoionization rates to suddenly appear all the way to the outer boundary .as previously noted , in such circumstances the code restricts the position of the front to be this requirement prevents superluminal i - fronts over total problem times but not necessarily over successive timesteps , as seen in fig [ fig : w3front ] .as the front crosses the core boundary r(t ) briefly curves upward with a slope greater than the speed of light c before it is abruptly limited to the speed of light by eq .( [ eqn : rct ] ) . having been slowed to less than c in the core the front can cross the next few zones faster than the speed of light because because the total distance the front has traveled over the entire problem time will still be less than ct .this results in the slight unphysical displacement of the front upward by 0.1 pc , which can be seen if one visualizes the true curve to continue up and to the right with slope c from the point where the computed slope becomes greater than c. this error is negligible in comparison to the kpc scales on which the front later expands , and the code produces the expected approach of the front velocity to the speed of light at later times given that the mass of the cloud is finite , as observed earlier .in reality no cosmological density field decreases indefinitely so the front would eventually slow to a new stromgren radius in the igm .columns 2 and 3 of fig [ fig : 123profiles ] depict the ionized flows developing in = 3 and = 5 density fields after the r - type ionization front has exited the cloud . 
in both cases the departure of the front from the grid is visible in the 10000 k gas that extends out to the problem boundary . both clouds are ionized on comparable timescales with initially flat profiles before much dynamic response has arisen in the gas . however , very distinct flows emerge in the two profiles . the = 3 field has a shallower drop with smaller pressure gradients that drive a shock that accelerates weakly but supersonically with respect to the outer cloud . a reverse shock develops that can be seen in the tapered peak of the velocity profile and in the density maximum that falls increasingly behind the leading shock as the flow progresses . the supersonic expansion of the core does not permit the central densities to relax to constant values as in the d - type front , but they are fairly flat out to the reverse shock because the shocked region is still dynamically coupled to the inner flow . at any given moment the postshock gas temperatures are uniform in radius up to the reverse shock , but they decrease over time as the flow is driven outward . the temperatures are almost flat because the gas heating and cooling rates are nearly constant in the fairly uniform central densities . the temperatures fall with time because the central gas densities are lower than in the = 1 flow above . there are fewer of the recombinations that enable the central star to sustain the postshock temperatures as in the d - type front , so the flow expands partly at the expense of its own internal energy . the timescales of eq . ( [ eqn : eneqn ] ) governing the gas internal energy and the courant times are shown in fig [ fig : tscl2 ] . pdv work done by the gas clearly outpaces photoionizational heating out to the reverse shock with a net loss of internal energy in the ionized gas . the suppression of recombinations is evident in the recombination times , which are nearly ten times the photoheating times . we show the initial transit of the i - front through an r falloff in the right panel of fig [ fig : w3front ] for the same boundaries and grid resolutions as in the = 3 plots on the left ; the curves again progress from lower right to upper left as the resolution increases . zeus - mp again correctly predicts the initial slowing of the front in the core and its rapid acceleration down the gradient without ever becoming d - type . there are no qualitative differences between the fronts in the two gradients prior to their exit from the core because their initial conditions are identical : the = 5 front also crosses the core in 0.05 yr . there is likewise little difference between the two fronts after their velocities have been limited to the speed of light by the code . the two prominent differences are the slower convergence of the curves beyond r in the = 5 gradient and their greater superluminal velocities before being set to c by the restriction algorithm . both trends are expected : the front will descend the steeper gradient more quickly before being limited to the speed of light and more grid cells are required to resolve the falloff . again , because the = 5 cloud has finite mass , the velocity of the front becomes c at later times .
in this profilea much more abrupt isothermal density dropoff remains in the wake of the r - type front .extreme pressure gradients launch the edge of the core outward in a strong shock that is essentially a free expansion , as seen in the velocity profiles in column 3 of fig [ fig : 123profiles ] .the density falls off so sharply that recombinations are quenched there and the shock expands adiabatically .the shock advances so quickly that it becomes dynamically decoupled from the flow behind it , so no reverse shock forms and all the flow variables remain stratified with radius .temperatures rise to 4 10 k in the shock but drop to 500 k behind it from adiabatic expansion .the sequence of velocity profiles confirms that the core shock accelerates over the entire evolution time of the problem , as predicted by eq .( [ eqn : om5 ] ) .it becomes hypersonic with speeds in excess of 400 km s .the central star is least energetically coupled to this ionized flow , with much of it being driven by its own internal energy .temperatures in the central ionized gas fall with time for the same reasons as in the = 3 profiles .the energy and courant times for the = 5 flow appear in fig [ fig : tscl3 ] .the hierarchy of timescales is similar to that of the = 3 flow up to the position of the shock , at which point cooling , heating , and recombination times rise much more quickly than their = 3 counterparts .the steeper jump in timescales is due to the strong suppression of recombinations in the highly stratified densities .zeus - mp accurately reproduces the mildly - accelerating ionized core shock expected to form in r radial densities as well as the strong adiabatic shock predicted for r gradients , with hydrodynamical profiles that are in excellent agreement with earlier work performed by 1-d implicit lagrangian codes . in this sectionwe summarize our code results for i - fronts and shocks in r and r core envelopes and present their density , velocity , and temperature profiles in fig [ fig : 215profiles ] .zeus - mp verifies the theoretical prediction that a constant - velocity ionized shock forms in the = 2 gradient after the rapid departure of the r - type front , as seen in the upper middle panel of fig [ fig : 215profiles ] .this ionized flow does not steepen into as strong a shock as in the = 3 case , which is evident from the lower gas velocities and smaller spike in the temperature distribution .timescale analysis indicates that heating and recombination times are nearly equal in the postshock gas but not to the same degree as in the d - type = 1 front , so expansion occurs partly at the expense of the internal energy of the gas .consequently , the level gas temperatures behind the shock decrease slightly over time but not to the extent in the = 3 shock .the evolution of the postshock gas is intermediate between that in the r and r cases .this flow regime is relevant to primordial minihalos : high resolution simulations of the formation of these objects yield spherically - averaged baryon density profiles with 2.0 2.5 .the correspondence is not exact because can vary in radius over this range , which is why both shock accelerations and decelerations are observed numerically in profiles derived from cosmological initial conditions .our code confirms that the = 1.5 front evolves essentially as d - type but with no shocked neutral shell : the front and the shock are coincident and advance at the same velocity . 
throughoutthe evolution of this flow the front is precariously balanced at the edge of breakout through the shock and down the gradient .this breakthrough is sensitive to small errors in shock position at early times ; as a result , zeus - mp finds that the i - front overruns the shock when = 1.45 instead of 1.5 ( the densities , velocities , and temperatures in the second row of fig [ fig : 215profiles ] are for an r distribution ) .however , this is a relatively small error , and our algorithm captures shock and front positions to within 10% of theory for = 1.4 and 1.6 .although this discrepancy is still under investigation , we believe it to be due to small errors in energy conservation in our eulerian code . in this respectlagrangian codes would likely enjoy an advantage in proper shock placement because of their conservative formalism .we demonstrate the convergence of our algorithm to eq .( [ eqn : r_w1 ] ) for the i - front position r(t ) in an = 1 dropoff as well as to the ionized shock position eqs .( [ eqn : om2 ] ) , ( [ eqn : om3 ] ) , and ( [ eqn : om5 ] ) for = 2 , 3 and 5 gradients in the first two rows of fig [ fig : shkres ] . for each density regimewe utilized grids with 100 , 250 , 500 , 1000 , 2000 , 4000 , and 8000 uniform radial zones .the dashed line in each panel of fig [ fig : shkres ] is the analytical solution . in fig[ fig : shkres]a the numerical predictions for the position of the = 1 d - type ionization front are ordered from lowest to highest resolution from the lower right corner upward to the left toward the expected solution .zeus - mp converges to within 10% of eq .( [ eqn : r_w1 ] ) by 2000 zones and to within 2 - 3% at 8000 zones ( the 4000 zone curve is indistinguishable from the 8000 zone solution at this scale ) . because eq . ( [ eqn : r_w1 ] ) does not account for the r- to d - type transition of the front , the agreement improves the further the front advances beyond the stromgren radius at any resolution .convergence was fastest in the case of the constant velocity ionized core shock in the = 2 gradient , as shown in fig [ fig : shkres]c .the order of solutions is again from lowest to highest resolution from the lower right upward to the left toward eq .( [ eqn : om2 ] ) .agreement to within 10% between 10 yr and 10 yr and 2 - 3% past 10 yr was achieved with just 500 zones .the trend is somewhat different for the weakly accelerating ionized shock in the = 3 panel in fig [ fig : shkres]b .the numerical models are all somewhat above the analytical curve at later times but the 100 and 250 zone runs are below it at earlier times .the different grids are most easily distinguished by their overshoot at late times : with the exception of the 100 zone curve they converge downward toward eq .( [ eqn : om3 ] ) with increasing resolution , with the 8000 zone curve dipping slightly below it near the end of the evolution time . with 2000 zones the convergence is to within 10% of the expected values between 5000 yr and 10 kyr and to within 1.5% by 60 kyr . in the highly supersonic = 5 core shock runs shown in fig [ fig : shkres]d , the progression of the numerical curves toward eq . 
([ eqn : om5 ] ) is again from lower right to upper left as the resolution increases , but they converge to a solution that lies well above the analytical result .although the hydrodynamical profiles for the = 5 ionized flow in fig [ fig : 123profiles ] are in excellent qualitative agreement with earlier work , in this gradient the algorithm does not accurately compute the shock placement for reasons that are currently under investigation .this flow regime is rather extreme , with very high shock temperatures that challenge the energy conservation of our code .such flows are unlikely to develop in the 3-d photoevaporation of the high - redshift minihalos that we plan to study . in the 2 3 gradients of relevance to those primordial structures , zeus - mpis quite accurate .the convergence trends in the early i - front evolution studies in the past three sections suggest that a key factor in accurate front placement is proper resolution of the central core and its initial dropoff .ratioed grids with higher central and lower outer resolutions can capture proper shock placement and front positions with far fewer zones than uniform grids , as shown in the bottom row of fig [ fig : shkres ] . in fig[ fig : shkres]e we plot ionized core shock radii as a function of time in both = 1 and 3 density profiles for a 250 zone uniform grid and a 172 zone ratioed grid with = 1.03 as defined in section [ sect : rmw static ] ; the analytical prediction in both panels is again the dashed line .we show the corresponding simulations for an = 2 profile in fig [ fig : shkres]f but with a 100 zone uniform grid .we achieve the same accuracy with 172 ratioed zones as with 2000 uniform zones in the first two regimes and as with 500 zones in the = 2 gradient ( the most convergent of the four regimes tested ) .equal accuracy with large savings in computational resources motivated our use of ratioed grids in earlier work , with the only sacrifice being the loss of some detail in the hydrodynamical profiles ( e.g. compare fig 3 in to the = 2 profiles in fig [ fig : 215profiles ] ) .the physical timescales governing code timesteps as i - fronts and flows evolve on the grid yield insights about the processes that drive the fronts and are the key to future algorithm optimization . in general ,different timescales dominate the transit of the front than the rise of ionized flows behind it .we examine how these timescales control the advance of the solution in four density gradients : = 1.0 , 2.0 , 3.0 , and 5.0 . as a rule , the photoheating timescales in the cell being ionizedwhile the front remains on the grid determine the global timestep over which the entire solution is updated .we first describe how cells approach ionization equilibrium at different radii in these gradients .the ionization fraction x of a zone as a function of time varies with distance from the central source , ambient density , and type of ionization front . 
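To illustrate what such an equilibration curve looks like, the following toy integration drives a single pure-hydrogen zone toward its equilibrium ionized fraction under a fixed photoionization rate. The linearized update and the ~10% subcycle criterion are assumptions standing in for the sequential scheme described earlier, and the input numbers are made up.

```python
import numpy as np

def equilibrate_zone(k_ph, n_H, t_end, alpha_B=2.59e-13, x0=1e-6):
    """Toy approach of one pure-hydrogen zone to ionization equilibrium.

    dx/dt = k_ph*(1 - x) - alpha_B*n_H*x**2 for a fixed photoionization rate
    k_ph [1/s]; collisional ionization and the temperature dependence of
    alpha_B are ignored, which is cruder than the code in the text but shows
    the qualitative equilibration curve.
    """
    x, t, history = x0, 0.0, [(0.0, x0)]
    while t < t_end:
        # write dx/dt = k_ph*(1 - x) - alpha_B*n_H*x**2 as C - D*x
        C = k_ph
        D = k_ph + alpha_B * n_H * x
        dxdt = C - D * x
        # chemistry-style subcycle: roughly a 10% relative change in x per step
        dt = min(0.1 * max(x, x0) / max(abs(dxdt), 1e-30), t_end - t)
        if t + dt == t:                       # guard against roundoff stalling the loop
            break
        x = (x + C * dt) / (1.0 + D * dt)     # stable linearized (semi-implicit) step
        t += dt
        history.append((t, x))
    return np.array(history)

# made-up numbers resembling a dense core zone: n_H = 1e4 cm^-3, k_ph = 1e-8 s^-1
traj = equilibrate_zone(k_ph=1e-8, n_H=1e4, t_end=3.15e8)
print(traj[-1])   # (time [s], ionized fraction) at the end of the integration
```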
in this section we first examine the equilibration of a zone in the flat central core ( the same in all four density gradients ) and then analyze three other zones at radii of 0.25 pc , 1.0 pc , and 4.5 pc , well into the density falloffs in which the distinct characters of the fronts emerge . we plot the ionization curves for the four radii in figs [ fig : ionprfl]a , c , e , and fig [ fig : tsclprfl]a for = 1 , 3 , 5 , and 2 gradients , respectively . we tally the corresponding consecutive heating timesteps t ( eq . ( [ eqn : theat ] ) ) in the four cells as they come to equilibrium in figs [ fig : ionprfl]b , d , f , and fig [ fig : tsclprfl]b ( recall that t is the interval over which full hydrodynamical updates are performed ) . the zone we consider in the central core is the fourth from the coordinate origin , but the ionization of the inner boundary also merits discussion . chemical subcycling over t is uniquely extreme in this zone because the uv radiation entering its lower face initially encounters no electron fraction whatsoever . since the electron density is initially zero t is extremely small , so far more chemical timesteps are invoked to cover t than are required by the network ( several thousand ) . in practice this only occurs on the inner boundary because over the 80 - 90 heating times required to ionize the inner zone the hydrodynamic updates advect a small electron fraction into the next zone , substantially reducing the number of initial chemical cycles there . the network subcycles 100 times in the fourth zone in its first heating timestep but only two or three times by the fifth timestep . roughly 100 t are necessary to ionize this zone . as shown in fig [ fig : ionprfl]d , heating times are at first 1 s in the central zone but increase to 1 10 s by the time it reaches x = 0.999 . the heating rate is greatest when the front enters the cell because k is at its maximum , but cooling rates are at their minimum because of the low initial temperature of the cell . this low temperature implies that e is also at its minimum , ensuring that the first t is the smallest . as the temperature and gas energy rise with x , cooling processes are activated and the net heating rate decreases , increasing t . when x exceeds 0.999 heating times can abruptly dwarf the light - crossing time t of the cell because cooling suddenly balances photoionizational heating , driving the net heating rate sharply downward . the code solution consequently takes too large a timestep forward , over which ionization that should have commenced in subsequent zones can not do so , unphysically stalling the young i - front . this only occurs in core zones close to the source ; further from the center , ionization begins in the next zone before the previous zone can come to equilibrium , decreasing the code timestep dt and preventing anomalous jumps of the solution forward in time . the jumps in central zones are easily remedied by limiting t to be less than a tenth of the light - crossing time there . after a core zone is fully ionized the code solution advances by consecutive 0.1 t steps until eq . ( [ eqn : rct ] ) permits ionizations in the next zone . hence , the uv flux in the core zones is a step function in time , immediately rising to full intensity because intervening zones are completely transparent . the heating time profiles for r - type fronts in = 3 and 5 gradients appear in figs [ fig : ionprfl]d and f.
as noted earlier , these fronts rapidly accelerate and must be limited to the speed of light by the code .the uv flux in the outer zones is again a step function in time because , as observed above , the front arrives at the next zone before the current zone is completely ionized . in this instancethe step in the radiation field is not quite to its full intensity because one or more of the intervening zones is not fully transparent . as in the core zone , we begin the tally of t in the outer three test zones when their photoionization rates switch on . the heating timesare again comparatively small at first and grow by several orders of magnitude as the cells become ionized .the heating times as a rule increase with distance from the source because the smaller outer fluxes generate lower photoionization rates .the static approximation is clearly violated near the inner boundary because the 1 10 s ionization timescale of the core zone is comparable to its 7 10 s light - crossing time .the approximation also fails in the outer zones of these two gradients even though ionization times are much longer than light - crossing times there .the explanation lies in the photon mean free paths : as shown in table 2 , they exceed the length of the grid as the densities plummet with radius .if the widths of the fronts overrun the outer boundary the static approximation would allow nonzero ionization rates in zones that could not have been reached by light by that time in the simulation .we must again employ eq .( [ eqn : rct ] ) to prevent superluminal velocities in the outer regions .igm mean densities prevent runaway i - fronts in realistic cosmological conditions . in compiling heating time profiles of the outer zones we terminate the advance of the front beyond the zone in question to avoid the downshift in code timestep associated with ionizations in a new zone before the current cell has equilibrated .ccccc 0.25 & 6.35e-6 pc & 1.29e-4 pc & 4.79e-3 pc & 6.6 pc + 1.0 & 1.38e-5 pc & 2.03e-3 pc & 0.299 pc & 5600 pc + 4.5 & 6.21e-5 pc & 4.10e-2 pc & 27.14 pc & 1.18e07 pc + in fig [ fig : tsclprfl]b the = 2 t profiles at 0.25 pc , 1.0 pc , and 4.5 pc have a more complicated structure because the uv radiation illuminating those zones is not a step function in time .as seen in table 2 the front is a fraction of a zone in width at first but quickly widens to encompass several zones . although r - type , the front has a velocity well below the speed of light so ionizations proceed several zones in advance of the center of the front .the radiation field at the leading zone is initially weak ( having been attenuated by the previous few partially - ionized zones ) but soon grows to its full intensity as the center of the front crosses the cell . as a resultthe energy deposition into the gas is first relatively small but increases as the radiation field intensifies . then dips back downward when cooling processes are switched on as the zone temperature increases .the heating times therefore curve downward but then recover upward as shown in fig [ fig : tsclprfl]b .the profiles again migrate upward with distance from the source .the static approximation is well - obeyed in this regime given that the ionization times are much longer than the zone - crossing times for light and that the speed of the front is much smaller than c. 
the width of the front also remains small in comparison to the grid .we begin the heating time tally in the three cells when x has risen to 1 10 because there is no sudden activation of photoionization rates in them . as we show in figs [ fig : ionprfl]a and b , test zones ionized by the d - type front in an = 1 gradient exhibit noticeably different ionization fractions and heating times because the shock reaches them before the front .the shock raises their temperatures to a few thousand k and induces collisional ionizations that leave a residual x 0.01 - 02 .shock heating lengthens t when the front reaches the cell because e is 200 - 300 times greater than in the preshock gas and the initial chemistry subcycling is heavily reduced by the collisional electron fraction .the radiation profile in these cells is a smooth function of time because of the uv photons escaping ahead of the front into the slightly preionized zone ( the width of the front itself in these densities is less than a tenth of a zone ) .we initiate the tally of t in the cell when ionization rates surge with the arrival of the center of the front .the large fluctuations in heating times at 0.25 pc are due to postshock numerical oscillations of the hydrodynamical flow variables .these variations dampen at larger radii because the shocked neutral shell thickens with time ; the densities the front photoevaporates have more time to numerically relax to steadier values .postshock oscillation is also responsible for the early variations in the 0.25 pc x profile . as in the other gradients, t again increases with distance from the central source .the static approximation is most valid in this density regime , with material properties changing many orders of magnitude more slowly than light - crossing times .as the nascent i - front emerges from the central core the numerical solution evolves in a succession of heating timesteps that rise and then fall as one zone comes to equilibrium and ionization commences in the next . as discussed earlier, the heating timesteps in the zone being ionized rarely achieve their full values in figs [ fig : ionprfl ] and [ fig : tsclprfl ] because ionization in the next zone is activated before the current zone can come to full equilibrium . because the timesteps are determined by the sum of photoionizational heating and cooling in the zone being ionized , they are much smaller than courant times in the postfront gas ( 10 yr ) while the front is on the grid .although restrictive , such timesteps are necessary in part because of the proximity of the ionized gas temperature to the sharp drop in the cooling curve near 10,000 k ; longer timesteps can cause sudden unphysical cooling in equilibrated zones that lose energy in a heating cycle . whenever ionization fronts approach the speed of light the solution is limited to relatively short steps until the front exits the grid ( although it should be remembered from figs [ fig : ionprfl ] and [ fig : tsclprfl ] that the timesteps grow as the radius of the front increases ) .the rise and sharp fall of timesteps continues until the i - front exits the outer boundary . 
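the timestep logic of the last few paragraphs can be condensed into a short sketch . this is our own summary with made - up array names , not the actual control routine : the code timestep is the global minimum of the heating / cooling and courant timescales , with the heating time in zones currently being ionized capped at a tenth of their light - crossing time so that a near cancellation of heating and cooling can not stall the young i - front .

```python
import numpy as np

C_LIGHT = 2.99792458e10  # cm s^-1

def code_timestep(e_gas, heat, cool, dx, v, c_s, being_ionized):
    """Global timestep: the minimum over all zones of the limiting timescales.

    e_gas         : internal energy density per zone [erg cm^-3]
    heat, cool    : photoheating and radiative cooling rates [erg cm^-3 s^-1]
    dx            : zone width [cm]
    v, c_s        : gas velocity and sound speed [cm s^-1]
    being_ionized : boolean mask of zones the I-front is currently crossing
    """
    # heating/cooling time, e / |Gamma - Lambda|, with a small floor on the rate
    t_heat = e_gas / np.maximum(np.abs(heat - cool), 1.0e-30)
    # cap at a tenth of the light-crossing time in zones at the front
    t_heat = np.where(being_ionized,
                      np.minimum(t_heat, 0.1 * dx / C_LIGHT), t_heat)
    # Courant condition for the explicit hydrodynamics
    t_cfl = dx / np.maximum(np.abs(v) + c_s, 1.0e-30)
    return float(min(t_heat.min(), t_cfl.min()))
```

once the front leaves the grid the heating and cooling terms nearly cancel everywhere , the heating times grow , and the courant term takes over as the controlling timescale , which is the crossover behavior described in the next paragraph .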
in = 3 and 5 gradients the code expends 85% of its cycles propagating the front across the grid ( 19 yr ) and its last 15% advancing the core shock to the outer boundary ( 1 10 yr ) . the transport of r - type fronts in = 2 gradients is uniquely efficient because of the unusual t profile of the zones . when a cell begins to be ionized the heating times are initially fairly long and are again relatively long as the zone approaches equilibrium . on average , this curve allows the solution to evolve much further in time each cycle . 70% of the simulation timesteps are utilized for the transit of the front ( 500 yr ) with the remainder being spent on the exit of the ionized shock ( 1.5 10 yr ) . this simulation requires only 20% of the cpu time of the = 3 and 5 runs , which require about the same time to execute . the much longer heating times in the = 1 gradient enable the code to advance the d - type front across the grid with only 30% of the cycles needed in the = 3 and 5 models . however , much longer simulation times ( 6.0 10 yr ) are necessary to advance the front to the outer boundary because of its relatively low velocity behind the shock , so this run executes with somewhat greater cpu times than the = 2 model . in figs [ fig : tsclprfl]c , d , e , and f we plot global minima of the courant , pdv work , photoionizational heating , recombination , and cooling timescales along with the code timestep dt as a function of time after the fronts have exited the grid ( except for the = 1 case ) for = 1 , 2 , 3 , and 5 density profiles . after the outermost zone is completely ionized the minimum heating and cooling times by which the solution is governed settle to nearly constant values for the sound crossing time of the central core , approximately 500 yr . the 1 10 - 10 s code timesteps dt are similar to the heating times in core zones that have come to complete equilibrium ( see fig [ fig : ionprfl ] ) . 500 yr marks the steepening of the pressure wave at the core s edge into the ionized shock that begins to evacuate the core . as central densities fall recombinations are suppressed , slowing cooling rates and new ionizations . photoheating , recombination , and cooling timescales rise , surpassing courant times at t 1000 yr , at which point the code adopts t as the new global timestep . the noise in the code timestep prior to crossover is due to the nearly equal heating and cooling rates ; their difference is sensitive to minor variations in each rate and can modulate relatively large fluctuations in . this noise disappears after the algorithm turns to t to compute dt . the minimum t , t , and t originate in the relatively flat densities behind the shock ( see fig [ fig : tscl2 ] ) and continue to rise with the expulsion of baryons from the center of the cloud . the three timescales branch away from each other at late times in the = 3 and 5 panels because the cooling rates are sensitive to the drop in postshock gas temperature ( from 18,000 k down to 2000 k in the = 3 gradient ) as the ionized flows expand in the course of the simulation . the rates do not diverge in = 1 and 2 postshock flow because the gas is either isothermal or nearly so . the courant time is approximately 15 yr in the newly ionized gas in all four cases but begins to fall after the core sound crossing time in the last three regimes as a result of the formation of the shock . the minimum courant time coincides with the temperature peak of the shock at all times thereafter in the = 2 , 3 , and 5 flows but resides within the i - front in the =
1 gradient because of the much higher temperature of the ionized gas in comparison to the weakly shocked neutral gas . as shown in fig [ fig : tsclprfl]c , the courant profile is almost flat in this flow regime because the postfront gas is isothermal . the ionized core shock in the = 2 gradient has a nearly constant velocity after formation so its courant time remains flat after falling to 10 yr in fig [ fig : tsclprfl]d . the = 3 flow only weakly accelerates and therefore exhibits a similar profile in fig [ fig : tsclprfl]e with final courant times of 4.5 yr in the stronger shock . courant times in the hypersonic = 5 flow continue to fall in fig [ fig : tsclprfl]f because of its strong acceleration , eventually dropping to 0.6 yr by the end of the simulation . the first dip in the pdv timescale at t 480 yr in the = 2 , 3 , and 5 plots is due to the outflow of heated gas from the inner boundary zone . the oscillations after the dip at t 800 yr are associated with the launch of the core shock . the minimum work times track the forward edge of the shock at all times thereafter in all four regimes . the behavior of the = 2 , 3 , and 5 pdv profiles after the rise of the shock mirrors that of the courant profiles . the constant speed = 2 shock uniformly accelerates parcels of upstream gas throughout its evolution so its pdv timescales remain fairly flat . in contrast , the strong = 5 shock accelerates upstream fluid elements to increasingly higher speeds as it advances so its work timescales continue to fall . the profile of the gently accelerating = 3 shock falls in between these two cases , sloping downward and then evening out at later times . pdv timescales in the d - type front actually rise over time because of the deceleration of the shock . the momentary spike in the profile at t 2000 yr ( also manifest in the courant profile ) occurs at the inner boundary and is likely due to pressure fluctuations associated with acoustic waves there . several features in the = 1 plot deserve attention . the pdv timescale oscillations at t 400 yr are from sound waves in the flat core . the postshock numerical oscillations of the flow variables responsible for the rapid fluctuations in the heating profiles continue to cause rapid variations in the heating , cooling , and work timescales throughout the simulation , dominating the code timestep until the average t rises above the courant time at t 3 10 yr . it is important to remember that the noise in t and t in this panel is not readily apparent in fig [ fig : tscl1 ] because the global minima in fig [ fig : tsclprfl ] occur at the rear edge of the shock . the two timescales quickly become equal behind the shock . direct radiation forces in astrophysical flows can strongly influence their evolution in scenarios like accretion onto compact objects or line - driven winds from massive stars .
in order to assess the radiation pressures exerted by pop iii stars upon their parent halos we added an operator - split update term to the zeus - mp momentum equation : we utilized a flux corresponding to the blackbody spectrum of a 200 m star with t = 10 k normalized to yield the emission rate of all photons above the hydrogen ionization threshold given in table 4 of and centered in the lcdm spherically - averaged halo discussed in section [ sect : uvesc ] . the neutral hydrogen component of the extinction coefficient exits the integral , which , along with the integral required to normalize the flux , only needs to be evaluated once . varies over ionization timescales because of its dependence on the neutral hydrogen fraction , so radiation momentum transfer into the gas must be accrued over consecutive chemical timesteps . this is done by multiplying the force integral above in every problem zone by the current chemical timestep and summing this product over subsequent chemical times until the hydro timestep has been crossed . the sum is then divided by the total density to obtain the update to the gas velocity in each zone . this method neglects spectral hardening across the front that could only be captured by a full multifrequency treatment of the radiation transfer , but these effects are likely to be unimportant in the high densities of the primordial halo . gas densities and uv fluxes are largest at the center of the halo so radiation forces exert their greatest influence in the earliest stages of primordial h ii region evolution . uv photons can accelerate both the small neutral fractions in the postfront gas as well as the thin semi - neutral discontinuity at the front . gravitational , thermal pressure gradient , and radiation forces in the early i - front are compared at 22 yr and 220 yr in fig [ fig : pressure ] . thermal pressure gradients cancel gravity forces upstream of the shock leading the front ( because of the hydrostatic initial conditions of the gas ) but dwarf them , as expected , within the h ii region itself . the radiation force is also much greater than gravity within the h ii region , an interesting result given that neutral fractions are less than 10 there . thermal pressure dominates radiation in the gas on either side of the i - front except in the partially - neutral layer of the front where the radiation force spikes because the product of the flux and neutral fraction peaks there . in contrast , the radiation pressure is somewhat larger than thermal pressure in the postfront gas at 220 yr in this simulation , but their ratio in general is sensitive to changes in the very small neutral fraction in the ionized gas that can occur with changes in problem resolution or initial conditions . comparison of the direct uv forces upon the postfront gas at 22 yr and 220 yr clearly shows that they diminish with time .
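the accrual of radiation momentum over chemical substeps described above can be sketched as follows . this is an illustrative , grey ( single frequency ) fragment with assumed constants and variable names , not the update actually used in the code ; it reuses the toy hydrogen - only network of the earlier sketch so that the force , which is proportional to the neutral density , falls away as the zone is ionized .

```python
from dataclasses import dataclass

SIGMA_HI = 6.3e-18        # H photoionization cross section at threshold [cm^2]
E_PHOT   = 2.9e-11        # mean ionizing photon energy, ~18 eV [erg] (illustrative)
C_LIGHT  = 2.99792458e10  # cm s^-1

@dataclass
class Zone:
    n_H: float    # hydrogen number density [cm^-3]
    x_e: float    # ionized fraction
    rho: float    # mass density [g cm^-3]
    v: float      # radial velocity [cm s^-1]
    flux: float   # ionizing photon number flux [cm^-2 s^-1]

def radiation_force(z):
    """Momentum deposited per unit volume and time: (energy flux / c) * n_HI * sigma."""
    return z.flux * E_PHOT / C_LIGHT * z.n_H * (1.0 - z.x_e) * SIGMA_HI

def radiative_kick(z, dt_hydro, alpha_B=2.6e-13, safety=0.1):
    """Operator-split velocity update from radiation pressure over one hydro step."""
    impulse, t, n_sub = 0.0, 0.0, 0
    while t < dt_hydro and n_sub < 200000:
        k_ph = z.flux * SIGMA_HI
        dxdt = k_ph * (1.0 - z.x_e) - alpha_B * z.n_H * z.x_e ** 2
        dt_chem = min(safety * max(z.x_e, 1.0e-6) / max(abs(dxdt), 1.0e-30),
                      dt_hydro - t)
        impulse += radiation_force(z) * dt_chem   # force times chemical step, summed
        z.x_e = min(max(z.x_e + dxdt * dt_chem, 0.0), 1.0)
        t += dt_chem
        n_sub += 1
    z.v += impulse / z.rho                        # divide the summed impulse by the density
```

because the force is tied to the instantaneous neutral density , the accumulated kick is self - limiting : it shuts off as the zone approaches full ionization , consistent with the decline of the postfront forces noted above .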
as the 10 k gas expands, it is driven outward with an accompanying drop in central densities that suppresses recombination rates .central neutral fractions fall ( eventually to 10 ) , with a corresponding loss of radiation pressure .our radiation force profiles are in general agreement with , who considered the same 200 m blackbody spectrum but applied a different initial density profile .our simulations reveal that the radiation forces in the ionized gas behind the front enhance its velocity by less than one percent , even though they are comparable to the thermal forces there .this is due to the fact that the thermal pressure gradients in the shocked gas and just behind the front are primarily responsible for the acceleration of the flow , and these gradient forces are large in comparison to the radiation forces except across the few photon mean free paths ( mfp ) of the front .direct calculation of the uv acceleration of fluid elements in the i - front layer itself is currently impractical for two reasons .the mfp of uv photons through the baryonic densities typical of 10 m halos is approximately 10 pc , well below the resolution limit of an eulerian calculation that must accommodate the much larger dynamical scales of the h ii region .failure to resolve the front can cause a code to interpret the very large radiation force peak to act upon the entire problem cell over the time required to ionize all of the cell , when in reality the force spike only operates on the fluid parcels in the extremely thin front for just the time required to ionize this layer ( such an error led to unphysically large gas velocities in early trials of our code ) .furthermore , the intense uv flux in such proximity to the central star can violate the static approximation to the transfer equation , necessitating the use of the fully time - dependent equation to ensure accurate transport of the i - front , even across an appropriately rezoned grid .[ fig : pressure ] a simple estimate of the radiative acceleration of gas elements in the front is possible by recognizing that it acts on the layer for at most a few ionization timescales , between 10 s and 10 s at the times indicated in fig [ fig : pressure ] .the product of the radiation force peak in fig [ fig : pressure ] and these photoionization times yields an upper limit to the velocity imparted to the front , which we find from the data above to be less than 2 km s , too little to significantly alter the evolutionary outcome of the ionized flow .this estimate is an upper bound because the force actually decreases as the parcels are ionized , while this procedure takes it to be constant ( and at its maximum strength ) .when the thin layer is ionized , the large radiation force upon it evaporates and continues on to the next layer .the global radiation momentum transfer from the thin front into the gas accelerates it by at most 1 - 2 km s .the breakout of uv radiation from pop iii minihalos was explored in an earlier paper ( hereafter wan04 ) ; we now extend this study to lower stellar masses with improved problem setup .we adopted the spherically - averaged lcdm baryon field of a 5 10 m dark matter minihalo formed at z = 18 for our initial number density profile in place of the cdm density profile of .also , in our earlier work , the inner boundary was set to 0.06 pc in the cdm protostellar density , regardless of the mass assumed for the central star later forming at its center .launching the i - front from this radius would neglect the 100 - 200 m gas mass 
interior to this radius not consolidated into the protostar in the lower stellar mass cases , as seen in fig [ fig : dens / mtot ] . the inner boundary was therefore set to be the radius enclosing a gas mass equal to the mass of the star whose uv escape fraction was being studied . although this inner few hundred m is small in comparison to the outer tens of thousands of solar masses later ionized in our previous studies , they were included because their significantly higher number densities and recombination rates might disproportionately retard the advance of the front . apart from these two modifications , the problem setup was the same as in the wan04 simulations . an important consequence of shifting the inner problem boundary in accord with the protostellar mass is that the nascent i - front can now encounter much higher central densities , up to 10 for the 25 m runs compared to the 10 densities in wan04 . the accompanying rise in photoionization rates restricts the timestep control to much shorter advances , making 25 m the lower practical computational limit in these density profiles . despite gathering interest in the physical processes that cut off accretion onto pop iii protostars , they are not understood well enough to provide firm estimates on the final masses of these objects , only that they are likely 30 - 300 m . future models will also be needed to determine the central envelope densities and infall through which primordial i - fronts truly emerge ; the exit of ionization fronts through accretion infall will be studied in forthcoming 3-d simulations .

table 3 . stellar mass ( m ) , final i - front radius ( pc ) , i - front breakout time ( yr ) , breakout radius ( pc ) , and uv escape fraction :
25 & 5.0e-3 & & & 0
40 & 8.8e-3 & & & 0
80 & 1978 & 3.3e05 & 7.9 & 89%
120 & 2633 & 2.3e05 & 7.2 & 91%
140 & 3186 & 1.7e05 & 4.9 & 93%
200 & 3505 & 1.3e05 & 3.7 & 94%
260 & 3856 & 9.0e04 & 2.8 & 96%
300 & 4441 & 8.5e04 & 2.6 & 96%
400 & 4665 & 5.9e04 & 2.3 & 97%
500 & 5132 & 1.9e04 & 1.0 & 99%

results for stellar masses ranging from 25 m to 500 m using the time - averaged photon rates from table 4 of appear in table 3 . since no uv photons exit the virial radius of the halo before the front transforms from d - type back to r - type , f is simply 1 - t_br / t_ms , where t_ms and t_br are the stellar main sequence lifetime and the time to i - front breakout , respectively . comparison of these final i - front radii with those of wan04 for m 100 m indicates that they differ by 20% at 120 m but only by 4% at 500 m in a trend toward greater agreement with mass . wan04 excluded the most intervening mass at 120 m and the least at 500 m , so it is not surprising that the i - front is delayed the most in the first case . as expected , breakout radii and times decrease with increasing stellar mass . the i - front breakout from the 80 m case shows that lower - mass pop iii stars still effectively ionize their local environments and can establish ionized outflows capable of dispersing any elements ejected by pulsational instabilities before the star collapses directly into a black hole . in contrast , the primordial envelope traps the i - fronts of the 25 m and 40 m stars to form ultracompact h ii ( uc h ii ) regions with lifetimes of 6.5 myr and 3.9 myr , respectively . the higher central recombination rates overwhelm the lower ionizing photon rates of a lower mass star to stall the front at subparsec radii .
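with the breakout times of table 3 in hand , the escape fractions follow from the simple ratio quoted above ; in the sketch below the 3.0 myr main sequence lifetime assumed for an 80 m star is a placeholder of roughly the right order rather than a value taken from this paper .

```python
def uv_escape_fraction(t_breakout_yr, t_ms_yr):
    """f_esc = 1 - t_breakout / t_ms : no photons escape the virial radius before
    I-front breakout, essentially all of them do afterwards."""
    if t_breakout_yr is None or t_breakout_yr >= t_ms_yr:
        return 0.0                  # front stays trapped (ultracompact H II region)
    return 1.0 - t_breakout_yr / t_ms_yr

# 80 Msun example: breakout at 3.3e5 yr with an assumed ~3.0e6 yr lifetime
print(round(uv_escape_fraction(3.3e5, 3.0e6), 2))   # ~0.89, consistent with table 3
```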
a star in this mass range can not remain on the main sequence long enough for the ionized pressure bubble to expand and free the front . we have presented an explicit multi - step scheme for integrating the coupled equations of radiation transport , ionization kinetics , radiative heating and cooling , and hydrodynamics . we have validated our method against a large battery of analytic test problems for both static and moving media . while these are 1d tests , the method is easily extended to 3d . an important strength of our multistepping algorithm is its applicability to a variety of radiation transport schemes for computing photoionization rates on a grid . many problems can be addressed by this method in its current state as a single - source code , with more soon being possible with the planned activation of the full 9-species network as well as multifrequency and multisource upgrades . full 3d radiation hydrodynamical trials of the code are currently underway to simulate the escape of uv radiation from high - redshift minihalos formed from realistic cosmological initial conditions . one important question to be answered is whether the uv escape fraction cutoff observed by kitayama , _ et al . _ with low stellar masses persists if three - dimensional instabilities arise at the ionization front . instabilities may open channels out of the halo and permit the exit of uv radiation that would not otherwise have escaped , enhancing uv escape fractions from low mass stars . on the other hand , recent high dynamical range amr simulations of primordial star formation revealed the formation of disks around protostars . these disks may reduce escape fractions even from high mass stars by blocking uv photons along lines of sight in the plane of the disk with large optical depths . the radiation hydrodynamical evolution of these objects will enable the simulation of more realistic initial conditions for the explosion of supernovae and subsequent chemical enrichment of the early universe . such simulations would also set the stage for self - consistent modeling of the energetics of cosmological miniquasars and the propagation of their radiation into the early igm . while we have modified our algorithm to study the photoevaporation of minihalos in the vicinity of a massive population iii ( pop iii ) star in 1d at high redshifts , the ionization of these objects in 3d with primordial chemistry remains to be done . the catalysis of molecular hydrogen within these structures in the presence of suv and x - ray backgrounds as they are being ionized by external fronts is a key issue in radiative feedback processes in the early universe that is not well understood . another process within the realm of study of our single source algorithm is the potential cutoff of accretion onto pop iii protostars by nascent i - fronts as the star enters the main sequence . radiation hydrodynamical simulation of accretion cutoff processes with improved stellar evolution models at their foundation will provide firmer estimates of the true mass spectrum of pop iii stars , a key ingredient in large scale calculations of early reionization . studies of the escape of uv radiation from high redshift protogalaxies of 1000 pop iii stars are currently being planned in connection with the addition of cartesian multisource vtef radiative transfer to zeus - mp . having exhaustively validated our coupling scheme for radiative transfer and hydrodynamics in 1d with the array of static and hydrodynamic tests presented in this paper , we can now apply the
algorithm to these relevant and timely problems in three dimensions with confidence .we would like to thank the anonymous referee for suggestions which have significantly improved the quality of this paper .dw would like to thank mordecai mac - low , tetsu kitayama , and ilian iliev for useful discussions .this work was supported in part by nsf grants ast-0307690 and ast-9803137 .dw has also been funded in part by the u.s .dept . of energy through its contract w-7405-eng-36 with los alamos national laboratory .the simulations were performed at sdsc and ncsa under nrac allocation mca98n020 .abel , t. , bryan , g. l. , & norman , m. l. 2002 , science , 295 , 93 abel , t. , norman , m. l. , & madau , p. 1999, , 523 , 66 anninos , p. , zhang , y. , abel , t. , & norman , m. l. 1997 , new astronomy , 2 , 209 barkana , r. , & loeb , a. 2004 , , 609 , 474 bromm , v. , coppi , p. s. , & larson , r. b. 1999 , , 527 , l5 cen , r. 2003 , , 591 , 12 ciardi , b. , ferrara , a. , marri , s. , & raimondo , g. 2001 , , 324 , 381 ciardi , b. , ferrara , a. , & white , s. d. m. 2003 , , 344 , l7 cooray , a. , bock , j. j. , keating , b. , lange , a. e. , & matsumoto , t. 2004 , , 606 , 611 cooray , a. & yoshida , n. 2004 , , 351 , l71 corless , r. m. , gonnet , g. h. , hare , d. e. g. , jeffrey , d. j. , & knuth , d. e. 1996 , advances in computational mathematics , 5 , 329 .dalgarno , a. & mccray , r. a. 1972 , , 10 , 375 franco , j. , tenorio - tagle , g. , & bodenheimer , p. 1990, , 349 , 126 furlanetto , s. r. , & briggs , f. h. 2004 , new astronomy review , 48 , 1039 garcia - segura , g. , & franco , j. 1996 , , 469 , 171 haiman , z. & holder , g. p. 2003, , 595 , 1 heger , a. , fryer , c. l. , woosley , s. e. , langer , n. , & hartmann , d. h. 2003 , , 591 , 288 kogut , a. _ et al . _2003 , , 148 , 161 kitayama , t. , yoshida , n. , susa , h. , & umemura , m. 2004 , , 613 , 631 kuhlen , m. , & madau , p. , astro - ph/0506712 .machacek , m. e. , bryan , g. l. , & abel , t. 2003 , , 338 , 273 maselli , a. , ferrara , a. , & ciardi , b. 2003 , , 345 , 379 mellema , g. , iliev , i. t. , alvarez , m. , & shapiro , p. r. , new astronomy , submitted .norman , m. l. 2000 , revista mexicana de astronomia y astrofisica conference series , 9 , 66 oh , s. p. & haiman , z. 2003 , , 346 , 456 omukai , k. & inutsuka , s. 2002 , , 332 , 59 omukai , k. & palla , f. 2003 , , 589 , 677 oshea , b. w. , abel , t. , whalen , d. , & norman , m. l. 2005 , , 628 , l5 osterbrock , d. astrophysics of gaseous nebulae and active galactic nuclei , university science books 1989 .paschos , p. & norman , m. 2005 , , in prep .razoumov , a. o. & cardall , c. y. , astro - ph/0505172 .ricotti , m. , gnedin , n. y. , & shull , j. m. 2001 , , 560 , 580 rijkhorst , e. j. , plewa , t. , dubey , a. & mellema , g. , astro - ph/0505213 .ripamonti , e. & abel , t. 2004 , , 348 , 1019 santos , m. r. , bromm , v. , & kamionkowski , m. 2002 , , 336 , 1082 schaerer , d. 2002 , , 382 , 28 shapiro , p. r. , & giroux , m. l. 1987 , , 321 , l107 shapiro , p. r. , iliev , i. t. , & raga , a. c. 2004 , , 348 , 753 sokasian , a. , abel , t. , hernquist , l. , & springel , v. 2003 , , 344 , 607 sokasian , a. , yoshida , n. , abel , t. , hernquist , l. , & springel , v. 2004 , , 350 , 47 somerville , r. s. & livio , m. 2003 , , 593 , 611 spergel , d. n. _ et al . _ 2003 , , 148 , 175 stone , j. m. & norman , m. l. 1992 , , 80 , 753 tenorio - tagle , g. , bodenheimer , p. , lin , d. n. c. , & noriega - crespo , a. 1986 , , 221 , 635 wehrse , r. 
, wickramasinghe , d. & dave , r. , astro - ph/0507359 .whalen , d. , abel , t. , & norman , m. l. 2004 , , 610 , 14 wood , k. , & loeb , a. 2000 , , 545 , 86 wyithe , j. s. b. & loeb , a. 2003 , , 586 , 693 yorke , h. w. 1986 , , 24 , 49
|
radiation hydrodynamical transport of ionization fronts in the next generation of cosmological reionization simulations holds the promise of predicting uv escape fractions from first principles as well as investigating the role of photoionization in feedback processes and structure formation . we present a multistep integration scheme for radiative transfer and hydrodynamics for accurate propagation of i - fronts and ionized flows from a point source in cosmological simulations . the algorithm is a photon - conserving method which correctly tracks the position of i - fronts at much lower resolutions than non - conservative techniques . the method applies direct hierarchical updates to the ionic species , bypassing the need for the costly matrix solutions required by implicit methods while retaining sufficient accuracy to capture the true evolution of the fronts . we review the physics of ionization fronts in power - law density gradients , whose analytical solutions provide excellent validation tests for radiation coupling schemes . the advantages and potential drawbacks of direct and implicit schemes are also considered , with particular focus on problem timestepping which if not properly implemented can lead to morphologically plausible i - front behavior that nonetheless departs from theory . we also examine the effect of radiation pressure from very luminous central sources on the evolution of i - fronts and flows .
|
consider a general chemical system confined in a compartment of volume and consisting of a number of distinct chemical species interacting via chemical reactions of the type here , is an index running from to , denotes chemical species , and are the stoichiometric coefficients , and is the macroscopic rate of reaction . note that these reactions are not necessarily elementary ( unimolecular or bimolecular reactions ) . if the reaction is elementary then its rate is a constant while if it is non - elementary it is a function of macroscopic concentrations . the general form of the master equation for both cases is where is the probability that the system is in a particular mesoscopic state and is the number of molecules of the species . note that is a step operator : when it acts on some function of the absolute number of molecules , it gives back the same function but with replaced by . the chemical reaction details are encapsulated in the stoichiometric matrix and in the microscopic rate functions . the probability that the reaction occurs in the time interval is given by . for elementary reactions , the microscopic rate function takes one of four different forms , depending on the order of the reaction : ( i ) a zeroth - order reaction by which a species is input into a compartment gives ; ( ii ) a first - order unimolecular reaction involving the decay of some species gives ; ( iii ) a second - order bimolecular reaction between two molecules of the same species gives ; ( iv ) a second - order bimolecular reaction between two molecules of different species , and , gives . note that these forms for the microscopic rate functions have been rigorously derived from microscopic physics and hence the validity of eq . ( [ eq2-supp ] ) for elementary reactions is guaranteed . for non - elementary reactions , the form of the microscopic rate function has to be basically guessed by analogy with the prescription for elementary reactions . for example , for the set of reactions ( 3 ) in the main text , the second reaction is a non - elementary first - order reaction with a time - dependent macroscopic rate constant k_2 [ e_t ] / ( k_m + [ x_s(t ) ] ) , where [ e_t ] is the constant macroscopic total enzyme concentration , and its microscopic rate function is constructed based on the formula stated above for an elementary first - order reaction . of course , master equations based on microscopic rate functions obtained from this procedure are ad - hoc and have no fundamental basis .

general formulation of the linear noise approximation in steady - state conditions

here , we provide a step by step recipe to construct the linear noise approximation ( lna ) of the master equation , eq . ( [ eq2-supp ] ) , for the set of reactions ( [ eq1-supp ] ) . we note that this approximation is only valid for a monostable system [ the condition is formally given by eq . ( 3.4 ) in ch . x of the book by van kampen ] . let the macroscopic steady - state concentration of species be given by ] and the diagonal matrix with elements . 3 .
construct the jacobian matrix whose element is given by .construct the diffusion matrix .the stochastic differential equations ( linear langevin equations ) approximating the chemical master equation for the set of reactions ( [ eq1-supp ] ) in the limit of large molecule numbers are given by where , the entry of the vector , denotes the fluctuations about the macroscopic steady - state concentration of species , i.e. , ] , , , and . for the case ( the enzyme lactate dehydrogenase ), we used , =10\,\mu m ] , , , and . in all casesthe total number of enzyme molecules was .parameter values for fig .[ fig12](b ) are , , , =1 \ , nm$ ] , and .the rate constants for the cases and were obtained from the experimental studies. the rate constants for were not for a specific enzyme and hence were chosen from the known physiological ranges : for the range is , for the range is , and for the range is .similarly , the total enzyme concentrations were chosen from the physiological ranges : nano to millimolar concentrations. the compartment volumes for the data in fig .[ fig12](a ) were chosen such that the total number of enzyme molecules was in all cases ; for fig .[ fig12](b ) the volumes were chosen such that could be varied over the range to . ohphs n. g. van kampen , _stochastic processes in physics and chemistry _ ( elsevier , amsterdam , 2007 ) .d. t. gillespie , physica a * 188 * , 404 ( 1992 ) .d. t. gillespie , j. chem .* 131 * , 164109 ( 2009 ) .j. elf and m. ehrenberg , genome res . * 13 * , 2475 ( 2003 ) .j. keizer , _ statistical thermodynamics of nonequilibrium processes _ ( springer - verlag , berlin , 1987 ) .a. lodola , j. d. shore , m. d. parker , and j. holbrook , biochemical j. * 175 * , 987 ( 1978 ) .m. renard and a. r. fersht , biochemistry * 12 * , 4713 ( 1973 ) .m. j. boland and h. gutfreund , biochem .j. * 151 * , 715 ( 1975 ) .j. m. berg , j. l. tymoczko , and l. stryer , _ biochemistry _( freeman , new york , 2002 ) .a. fersht , _ structure and mechanism in protein science : guide to enzyme catalysis and protein folding _ ( freeman , new york , 1999 ) .r. grima and s. schnell , essays in biochemistry * 45 * , 41 ( 2008 ) .
|
the application of the quasi - steady - state approximation to the michaelis - menten reaction embedded in large open chemical reaction networks is a popular model reduction technique in deterministic and stochastic simulations of biochemical reactions inside cells . it is frequently assumed that the predictions of the reduced master equations obtained using the stochastic quasi - steady - state approach are in very good agreement with the predictions of the full master equations , provided the conditions for the validity of the deterministic quasi - steady - state approximation are fulfilled . we here use the linear - noise approximation to show that this assumption is not generally justified for the michaelis - menten reaction with substrate input , the simplest example of an open embedded enzyme reaction . the reduced master equation approach is found to considerably overestimate the size of intrinsic noise at low copy numbers of molecules . a simple formula is obtained for the relative error between the predictions of the reduced and full master equations for the variance of the substrate concentration fluctuations . the maximum error is reached when modeling moderately or highly efficient enzymes , in which case the error is approximately . the theoretical predictions are validated by stochastic simulations using experimental parameter values for enzymes involved in proteolysis , gluconeogenesis and fermentation . it is well known that whenever transients in the concentration of a substrate species decay over a much slower timescale than those of the enzyme species , one can invoke the quasi - steady - state approximation ( qssa ) to considerably simplify the deterministic ( macroscopic ) rate equations. the study by rao and arkin pioneered the use of the same approximation but on a mesoscopic level , i.e. , applying a stochastic version of the approximation to obtain reduced chemical master equations . this approximation has since become ubiquitous in stochastic simulations of large biochemical reaction networks inside cells ( see , for example , refs . ) although its range of validity is presently unknown . a plausible hypothesis is that the stochastic qssa is valid in the same regions of parameter space where the deterministic qssa is known to be valid . a handful of numerical studies have shown that for some choices of rate constants which are compatible with the deterministic qssa , the differences between the reduced and full master equation approaches are practically negligible . however , none of these studies exclude the possibility that there exist regions of parameter space where the deterministic qssa is valid but the stochastic qssa exhibits large systematic errors in its predictions . in particular , one is interested in knowing how accurate are the predictions of the stochastic qssa for the size of intrinsic noise , i.e. , the size of fluctuations in concentrations , since such noise is known to play important functional roles in biochemical circuits. numerical approaches can not easily answer such questions because the stochastic simulation algorithm , the standard method which exactly samples the trajectories of master equations, is computationally expensive. 
in this communication , we seek to develop a theoretical approach to answer the following question : given that the rate constants are chosen such that the deterministic qssa is valid , what are the differences between the predictions of the reduced and full master equations for the variance of the fluctuations about the mean concentrations ? we obtain a formula estimating the size of these differences for the simplest biochemical circuit which embeds the michaelis - menten reaction and confirm its accuracy using stochastic simulations . we find , using physiological parameter values , that the reduced master equation approach can overestimate the variance of the fluctuations by as much as . we start by considering the michaelis - menten reaction with substrate input {k_0 } x_c \xrightarrow[]{k_2 } x_e + x_p,\end{aligned}\ ] ] where denotes chemical species and the s denote the associated macroscopic rate constants . the reaction can be described as follows . substrate molecules ( species ) are pumped into some compartment at a constant rate , they bind to free enzyme molecules ( species ) to form substrate - enzyme complexes ( species ) which then either decay back to the original substrate and free enzyme molecules or else decay into free enzyme and product molecules ( species ) . the first reaction in ( [ eqn : mmreaction ] ) could equally represent the production of substrate by a first - order chemical reaction provided the species transforming into substrate exists in concentrations large enough such that fluctuations in its concentration can be ignored . the sum of the concentrations of free enzyme and complex is a constant since the enzyme can only be in one of these two forms . hence , all mathematical descriptions of the michaelis - menten reaction can be expressed in terms of just complex and substrate variables . on the macroscopic level , the qssa proceeds by considering the case in which transients in the complex concentration decay much faster than those of the substrate . this condition of timescale separation is imposed by setting the time derivative of the macroscopic complex concentration to zero , solving for the steady - state complex concentration and substituting the latter into the rate equation for the substrate concentration which leads to the new rate equation = k_{in } - \frac{k_2 [ e_t ] [ x_s(t)]}{k_m + [ x_s(t ) ] } , \label{eq2}\ ] ] where ] is the total enzyme concentration , i.e. , the sum of the concentration of free enzyme , ] , which is a constant as previously mentioned . note that the notation ] is satisfied . linear stability analysis of the full rate equations describing ( [ eqn : mmreaction ] ) shows that the timescale for the decay of transients in the substrate concentrations is )^{-1} ] . hence , the criterion for the validity of the qssa on the macroscopic rate equations ( the deterministic qssa ) , i.e. , for the validity of eq . ( [ eq2 ] ) , reads + k_m)/[x_e ] \gg 1 ] . note that while the first reaction is elementary , the second is clearly not , since it can clearly be broken down into a set of more fundamental constituent reactions . given the reduced set of reactions ( [ eq3 ] ) one can then construct a reduced master equation for the set of reactions ( [ eqn : mmreaction ] ) ( see ref . 
and supplementary material for the construction of master equations ) n_s}{k_m + n_s / \omega } p(n_s , t ) , \label{eq4}\end{aligned}\ ] ] where is the compartment volume in which the reactions are occurring , is the absolute number of substrate molecules , is the probability that the system has substrate molecules at time and is the step operator which upon acting on a function of changes it into a function of ( ref . ) . we note and emphasize that the physical basis of this master equation is not clear because such equations have been derived from first principles for elementary reactions, while ( [ eq3 ] ) involves a non - elementary reaction . equation ( [ eq4 ] ) is simply written by analogy to what one would write down for ( [ eq3 ] ) if both reactions were elementary and hence its legitimacy is _ a priori _ doubtful . now we want to use this master equation to deduce the variance of the noise in the macroscopic substrate concentrations . it is well known that in the macroscopic limit , the master equation for monostable chemical systems can be approximated by a linear langevin equation , an approximation called the linear noise approximation ( lna). for systems with absorbing states or exhibiting multimodality , the lna will not usually give accurate results ( see , for example , ref . ) but its application to our example , the michaelis - menten reaction with substrate input , is not problematic since this reaction is only capable of monostable behavior . the steps to construct the lna for a general monostable chemical reaction system are summarized in the supplementary material . here , we will simply state the results of this recipe when applied to the master equation , eq . ( [ eq4 ] ) . the langevin equation approximating eq . ( [ eq4 ] ) in the macroscopic limit of large molecule numbers and in steady - state conditions is }{\omega \gamma}\biggl(1+\frac{[x_s]}{k_m } \biggr)}\gamma(t ) , \label{eq5}\ ] ] where denotes the fluctuations about the macroscopic steady - state substrate concentration defined as ] in eqs . ( [ eq5 ] ) and ( [ eq - sigmaslna ] ) is the steady - state substrate concentration obtained by solving for ] . in the macroscopic limit , the master equation , eq . ( [ eq7 ] ) , can be approximated by a pair of langevin equations as given by the lna ( see supplementary information ) ) & [ x_e ] \\ k_1 + [ x_s ] & - [ x_e ] \end{pmatrix } \begin{pmatrix } \eta_{c}(t ) \\ \eta_{s}(t ) \end{pmatrix } \nonumber \\ & + \sqrt{\omega } \begin{pmatrix } 0 & \sqrt{k_0 [ x_e ] [ x_s ] } & -\sqrt{k_1 [ x_c ] } & -\sqrt{k_2 [ x_c ] } \\ \sqrt{k_{in } } & -\sqrt{k_0 [ x_e ] [ x_s ] } & \sqrt{k_1 [ x_c ] } & 0 \end{pmatrix } \vec{\gamma}(t ) , \label{eq8}\end{aligned}\ ] ] where , , and , respectively , denote the fluctuations about the macroscopic steady - state complex and substrate concentrations and is a dimensional vector whose entries are white gaussian noise with the properties and . using eq . ( [ eq8 ] ) , one can show ( see supplementary information ) that the variance of substrate concentration fluctuations in steady - state conditions is given by }{\omega}\left(1+\frac{[x_s]}{k_m } \frac{k_1 + [ x_s]}{k_m + [ x_s ] } \frac{\gamma}{1+\gamma } \right ) \nonumber \\ & \xrightarrow{\gamma \gg 1 } \frac{[x_s]}{\omega}\left(1+\frac{[x_s]}{k_m } \frac{k_1 + [ x_s]}{k_m + [ x_s ] } \right ) , \label{eq - sigmalna}\end{aligned}\ ] ] where in the last step we took the limit of , corresponding to the condition in which the deterministic qssa eq . ( [ eq2 ] ) is valid . 
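the steady - state variances quoted above can also be checked numerically : in the lna the covariance of the fluctuations obeys a lyapunov equation built from the jacobian and diffusion matrices of the full scheme . the sketch below is our own construction written straight from the reaction scheme ( [ eqn : mmreaction ] ) ; the rate constants and compartment size are arbitrary illustrative values chosen so that the substrate sits near the michaelis - menten constant , and they are not taken from the paper .

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# illustrative parameters (not from the paper): concentrations in molar, time in s
k0, k1, k2 = 1.0e6, 1.0e2, 1.0e2        # binding, unbinding, catalytic rate constants
Et         = 1.0e-7                     # total enzyme concentration [M]
Omega      = 6.022e23 * 1.0e-15         # molecules per molar in a 1 fL compartment
Km         = (k1 + k2) / k0
kin        = 0.5 * k2 * Et              # half-saturating input, so [X_s] = Km

# deterministic steady state
C = kin / k2                            # complex
E = Et - C                              # free enzyme
S = (k1 + k2) * C / (k0 * E)            # substrate

# LNA of the full model: J Cov + Cov J^T + D / Omega = 0, variables (complex, substrate)
J = np.array([[-k0 * S - (k1 + k2),  k0 * E],
              [ k0 * S + k1,        -k0 * E]])
flux   = np.array([kin, k0 * E * S, k1 * C, k2 * C])   # steady-state reaction fluxes
stoich = np.array([[0,  1, -1, -1],                    # complex
                   [1, -1,  1,  0]])                   # substrate
D = stoich @ np.diag(flux) @ stoich.T
Cov = solve_continuous_lyapunov(J, -D / Omega)
var_full = Cov[1, 1]

# reduced (heuristic) prediction for the substrate variance: [X_s]/Omega * (1 + [X_s]/Km)
var_reduced = (S / Omega) * (1.0 + S / Km)

print(var_reduced / var_full)   # > 1: the reduced description overestimates the noise
```

sweeping the input rate in such a script traces how the overestimate depends on the enzyme saturation discussed below .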
comparing eqs . ( [ eq - sigmaslna ] ) and ( [ eq - sigmalna ] ) , we see that the two are not generally equal to each other except in the case . from a fundamental point of view , this disagreement implies that the reduced master equation does not obey the generalized fluctuation - dissipation theorem of nonequilibrium physics and that hence it is flawed . more importantly , we observe that the condition is not equivalent to the quasi - steady - state condition . the former condition is consistent with the enzyme - substrate complex being in thermodynamic equilibrium with free enzyme and substrate , a condition which is difficult to uphold in open systems since they are characterized by nonequilibrium steady states . while the quasi - steady - state condition can easily be satisfied in open systems since it is only required that the total enzyme concentration is much less than the michaelis - menten constant hence , we can conclude that for open systems , the stochastic qssa based on eqs . ( [ eq3 ] ) and ( [ eq4 ] ) is _ not _ the legitimate stochastic equivalent of the deterministic qssa . there are two possible hypothetical scenarios which would imply that the stochastic qssa is perhaps still a very good general method to estimate the size of the concentration fluctuations . the first case would be if experimental evidence showed that for many enzymes it just happens that . the second case would be if experimental evidence showed no such restriction on but nevertheless the difference between the variance prediction of the reduced and full master equations is so small as to be negligible . we now consider each case . a perusal of the experimental data available in the literature shows that there are very few studies which simultaneously report values of and , the data required to estimate . the vast majority of studies report values for , a considerable number report and a small percentage report both and . now the ratio , frequently called the enzyme efficiency, is defined as the recent study by bar - even _ et al . _ based on mining the brenda and kegg databases concluded that for most enzymes lies in the range . it is also known that the association constant takes values in the range . we can conclude from these two pieces of data and using eq . ( [ theta ] ) that the range of for most enzymes is between and some number which is much greater than and that hence on the basis of experimental data one can not argue for the general validity of the stochastic qssa . of course , as previously mentioned , it could still happen that even though there is no restriction on , that the variance as predicted by the stochastic qssa and the true variance are negligibly small . we can test this hypothesis quantitatively by using eqs . ( [ eq - sigmaslna ] ) and ( [ eq - sigmalna ] ) to derive the fractional relative error in the variance prediction of the stochastic qssa where ) ] , and = [ e_t ] \alpha$ ] , from which we can deduce that is a measure of how saturated is the enzyme with substrate . note that eq . 
( [ eq11 ] ) shows that the relative error tends to zero as and and that hence the reduced master equation provides a correct prediction of the size of the substrate fluctuations whenever the free enzyme or complex concentrations are very small ( similar results have been obtained by mastny _ et al._ for the michaelis - menten reaction with no substrate input ; however their results are not for general and and do not enforce the validity of the deterministic qssa ; see later for discussion ) . in fig . [ fig12](a ) , the solid lines illustrate the predictions of eq . ( [ eq11 ] ) for three different values of : ( i ) , ( ii ) , and ( iii ) . case ( i ) utilizes experimental data for the enzymes chymotrypsin and malate dehydrogenase with respective substrates acetyl - l - tryptophan and nadh, while case ( ii ) is based on data for the enzyme lactate dehydrogenase with substrate pyruvate. these enzymes are respectively involved in proteolysis , gluconeogenesis and the conversion of pyruvate ( the final product of glycolysis ) to lactate in anaerobic conditions . case ( iii ) showcases the largest possible error made by the stochastic qssa ; this is consistent with a highly efficient enzyme such as -lactamase for which is of the same order of magnitude as the maximum possible association rate constant . the theoretical predictions of our lna based method are confirmed by stochastic simulations of the master equations , eq . ( [ eq4 ] ) and eq . ( [ eq7 ] ) , using gillespie s algorithm [ data points in fig . [ fig12](a ) ] . note that the maximum possible percentage error is about , which is significant . also note that the maximum error in all cases is reached at , namely , when the enzyme is half saturated with substrate which occurs when the substrate concentrations are equal to the michaelis - menten constant ( this is the case for most enzymes of the glycolytic pathway ) ; for substrate concentrations much smaller or larger than , the error is negligible . the lna is , strictly speaking , valid for large volumes or , equivalently , in the limit of large number of molecules and hence one could argue that our theoretical formula eq . ( [ eq11 ] ) is of limited validity inside cells , where molecule numbers can be quite small. figure [ fig12](b ) shows the results of stochastic simulations for the case and using a total number of enzyme molecules varying between and molecules . note that the error is practically constant at , the value predicted by the lna and shown in fig . [ fig12](a ) . this suggests that the estimates provided by our method are accurate even for low copy number conditions . our study has focused on the most common type of stochastic qssa in the literature which is heuristic in nature and hence the question regarding its validity . there are a class of alternative model reduction techniques based on singular - perturbation analysis ( sqspa and sqspa- ) which are rigorous and whose validity is not under question . for the michaelis - menten reaction without substrate input , these methods lead to a reduced master equation of the same form as the heuristic stochastic qssa whenever the free enzyme or complex concentrations are very small ( see table ii of ref . ) . this implies that for such conditions the error in the predictions of the stochastic qssa should be zero , a result which is also reproduced by our method . however , note that though these concentration conditions can be compatible with the deterministic qssa they are not synonymous with it . 
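the kind of stochastic simulation behind fig . [ fig12 ] is straightforward to reproduce in outline with gillespie s direct method applied to both descriptions . the sketch below is our own minimal version , not the authors code : the rate values are placeholders in molecule - number units ( the volume is absorbed into the constants ) , and a single long trajectory is used to estimate the stationary mean and variance of the substrate .

```python
import numpy as np
rng = np.random.default_rng(1)

def ssa_moments(propensities, stoich, x0, t_end, t_burn):
    """Gillespie direct method; time-averaged mean and variance of the last variable."""
    x, t = np.array(x0, dtype=float), 0.0
    wsum = m1 = m2 = 0.0
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:
            break
        tau = rng.exponential(1.0 / a0)
        if t > t_burn:                       # accumulate time-weighted moments
            w = min(tau, t_end - t)
            wsum += w
            m1 += w * x[-1]
            m2 += w * x[-1] ** 2
        x += stoich[rng.choice(len(a), p=a / a0)]
        t += tau
    mean = m1 / wsum
    return mean, m2 / wsum - mean ** 2

# placeholder rates chosen so that the substrate sits near Km and the enzyme is efficient
k0, k1, k2, Et = 0.01, 0.05, 0.95, 10        # k0 in 1/(molecule s); k1, k2 in 1/s
kin = 0.5 * k2 * Et                          # half-saturating substrate input
Km = (k1 + k2) / k0                          # Michaelis-Menten constant, molecule units

# full scheme: state = (complex, substrate)
stoich_full = np.array([[0, 1], [1, -1], [-1, 1], [-1, 0]])
prop_full = lambda x: np.array([kin, k0 * (Et - x[0]) * x[1], k1 * x[0], k2 * x[0]])

# reduced scheme of eqs. (eq3)/(eq4): state = (substrate,)
stoich_red = np.array([[1], [-1]])
prop_red = lambda x: np.array([kin, k2 * Et * x[0] / (Km + x[0])])

print("full    :", ssa_moments(prop_full, stoich_full, [0, 0], 2.0e4, 1.0e3))
print("reduced :", ssa_moments(prop_red, stoich_red, [0], 2.0e4, 1.0e3))
```

with these ( or comparable ) choices the reduced description returns the larger substrate variance , in line with the overestimate quantified above .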
the sqspa methods do not lead to a reduced master equation for parameters consistent with the deterministic qssa and hence can not make statements regarding the accuracy of the heuristic stochastic qssa in such conditions . our contribution fills this important gap by deriving an explicit formula for the error in the predictions of the stochastic qssa , i.e. , eq . ( [ eq11 ] ) , for all parameters values consistent with the deterministic qssa . we finish by noting that a recent study by gonze _ et al . _ also studied the reaction system ( [ eqn : mmreaction ] ) using numerical simulations and found little difference between the predictions of the stochastic qssa and the full master equation . the study used values of ( see table in ref . ) and hence in the light of our results , it is clear why they observed high accuracy of the stochastic qssa . however , as we have shown , this is not the general case : many enzymes have large and hence discrepancies of the order of few tens of percent between the predictions of the reduced and full approaches will be visible whenever substrate concentrations are approximately equal to the michaelis - menten constant . ohphs l. a. segel and m. slemrod , siam rev . * 31 * , 446 ( 1989 ) . s. schnell and p. k. maini , comm . theo . biol . * 8 * , 169 ( 2003 ) . c. v. rao and a. p. arkin , j. chem . phys . * 118 * , 4999 ( 2003 ) . d. gonze , m. jacquet , and a. goldbeter , j. royal . . int . * 5 * , s95 ( 2008 ) . j. paulsson , o. g. berg , and m. ehrenberg , proc . nat . acad . sci . * 97 * , 7148 ( 2000 ) . m. l. guerriero , a. pokhilko , a. pinas - fernandez , k. j. halliday , a. j. millar , and j. hillston , j. r. soc . interface doi:10.1098/rsif.2011.0378 ( 2011 ) . g. dupont , a. abou - lovergne , and l. combettes , biophys . j. * 95 * , 2193 ( 2008 ) . d. gonze , w. abou - jaoude , d. a. ouattara , and j. halloy , methods enzym . * 487 * , 171 ( 2011 ) . k. r. sanft , d. t. gillespie , and l. r. petzold , iet sys . biol . * 5 * , 58 ( 2011 ) . a. eldar and m. b. elowitz , nature * 467 * , 167 ( 2010 ) . d. t. gillespie , j. phys . chem . * 81 * , 2340 ( 1977 ) . d. t. gillespie , annu . rev . phys . chem . * 58 * , 35 ( 2007 ) . i. stoleriu , f. a. davidson , and j. l. liu , j. math . biol . * 48 * , 82 ( 2004 ) . n. g. van kampen , _ stochastic processes in physics and chemistry _ ( elsevier , amsterdam , 2007 ) . d. t. gillespie , physica a * 188 * , 404 ( 1992 ) . d. t. gillespie , j. chem . phys . * 131 * , 164109 ( 2009 ) . w. hortshemke and l. brenig , z. physik b * 27 * , 341 ( 1977 ) ; w. hortshemke , m. malek - mansour , and l. brenig , z. physik b * 28 * , 135 ( 1977 ) ; t. t. marquez - lago and j. stelling , biophys . j. * 98 * , 1742 ( 2010 ) . j. elf and m. ehrenberg , genome res . * 13 * , 2475 ( 2003 ) . we note that the general form of the master equation , eq . ( 4 ) in ref . , is constructed from the laws of probability given two simple premises : ( i ) the chemical system , at any point in time , can be in one of a number of possible states , each state described by the number of molecules of each species , and ( ii ) when a reaction occurs , the state of the system changes to a new one . due to premise ( ii ) , the master equation depends on the probability that a reaction occurs in a short time interval . 
assuming well mixed conditions , the form of these probabilities have been rigorously derived from the well established laws of microscopic physics for elementary reactions ( those involving the simultaneous interaction of at most two molecules ) in the gas - phase , ref . , and in liquids , ref . . j. keizer , _ statistical thermodynamics of nonequilibrium processes _ ( springer , berlin , 1987 ) . a. bar - even , e. noor , y. savir , w. liebermeister , d. davidi , d. s. tawfik , and r. milo , biochemistry * 50 * , 4402 ( 2011 ) . a. fersht , _ structure and mechanism in protein science : guide to enzyme catalysis and protein folding _ ( freeman , new york , 1999 ) . e. a. mastny , e. l. haseltine , and j. b. rawlings , j. chem . phys . * 127 * , 094106 ( 2007 ) . p. pharkya , e. v. nikolaev , and c. d. maranas , metab . eng . * 5 * , 71 ( 2003 ) . m. kanehisa , m. araki , s. goto , m. hattori , m. hirakawa , m. itoh , t. katayama , s. kawashima , s. okuda , t. tokimatsu , and y. yamanishi , nucleic acids res . * 36 * , d480 ( 2008 ) . a. lodola , j. d. shore , m. d. parker , and j. holbrook , biochemical j. * 175 * , 987 ( 1978 ) . m. renard and a. r. fersht , biochemistry * 12 * , 4713 ( 1973 ) . m. j. boland and h. gutfreund , biochem . j. * 151 * , 715 ( 1975 ) . r. grima , j. chem . phys . * 133 * , 035101 ( 2010 ) . r. a. copeland , _ evaluation of enzyme inhibitors in drug discovery : a guide for medicinal chemists and pharmacologists _ ( wiley - interscience , hoboken , 2005 ) . r. grima and s. schnell , essays in biochemistry * 45 * , 41 ( 2008 ) . r. srivastava , e. l. haseltine , e. a. mastny , and j. b. rawlings , j. chem . phys . * 134 * , 154109 ( 2011 ) .
|
many classical geometric inequalities were proved by first establishing the inequality for a simple geometric transformation , such as steiner symmetrization or polarization .steiner symmetrization is a volume - preserving rearrangement that introduces a reflection symmetry , and polarization pushes mass across a hyperplane towards the origin .( proper definitions will be given below ) .to mention just a few examples , there are proofs of the isoperimetric inequality and santal s inequality based on the facts that steiner symmetrization decreases perimeter and increases the mahler product .inequalities for capacities and path integrals follow from the observation that polarization increases convolution functionals and related multiple integrals .this approach reduces the geometric inequalities to one - dimensional problems ( in the case of steiner symmetrization ) or even to combinatorial identities ( in the case of polarization ) .it can also be exploited to characterize equality cases .a major point is to construct sequences of the simple rearrangements that produce full rotational symmetry in the limit . in this paper, we study the convergence of random sequences of polarizations to the symmetric decreasing rearrangement .the result of random polarizations of a function is denoted by , where each is a random variable that determines a reflection .we assume that the are independent , but not necessarily identically distributed , and derive conditions under which rearrangements have been studied in many different spaces , with various notions of convergence .we work with continuous functions in the topology of uniform convergence , while most classical results are stated for compact sets with the hausdorff metric . these notions of convergence turn out to be largely equivalent because of the monotonicity properties of rearrangements . for sequences of steiner symmetrizations alonguniformly distributed random directions , convergence is well known .it has recently been shown that certain uniform geometric bounds on the distributions guarantee convergence for a broad class of rearrangements that includes polarization , steiner symmetrization , the schwarz rounding process , and the spherical cap symmetrization . among these rearrangements, polarization plays a special role , because it is elementary to define , easy to use , and can approximate the others .our conditions for convergence allow the distribution of the to be far from uniform .we also prove bounds on the rate of convergence , and show how convergence can fail .our results shed new light on steiner symmetrizations .in particular , we obtain bounds on the rate of convergence for steiner symmetrizations of arbitrary compact sets .let be either the sphere , euclidean space , or the standard hyperbolic space , equipped with the uniform riemannian distance , the riemannian volume , and a distinguished point , which we call the origin .the ball of radius about a point is denoted by ; if the center is at we simply write .we denote by the distance between a point and a set , and by the * hausdorff distance * between two sets .if is a set of finite volume in , we denote by the open ball centered at the origin with .we consider nonnegative measurable functions on that vanish weakly at infinity , in the sense that the level sets have finite volume for all .( on the sphere , this condition is empty . 
)the * symmetric decreasing rearrangement * is the unique lower semicontinuous function that is radially decreasing about and equimeasurable with .its level sets are obtained by replacing the level sets of with centered balls , a * reflection * is an isometry on with that exchanges two complementary half - spaces , and has the property that whenever and lie in the same half - space . on , we have the reflections at great circles , on the euclidean reflections at hyperplanes , and in the poincar ball model of the inversions at -dimensional spheres that intersect the boundary sphere at right angles .for every point there exists a -dimensional family of reflections that fix , and for every pair of distinct points there exists a unique reflection that maps to .let be a reflection on that does not fix the origin .for , denote by the mirror image of , and let be the half - spaces exchanged under . by construction , .the * polarization * of a function with respect to is defined by obvious reasons , polarization is also called * two - point symmetrization*. we use a fixed normal coordinate system centered at the origin , where , and denote the parameter space by . on , these are just the standard polar coordinates . on and ,normal coordinates define a diffeomorphism from to , but on the normal coordinate system degenerates at , where it reaches the south pole . for ,let be the reflection that maps to the point with normal coordinates .the reflections generate a one - dimensional group of isometries of . as , they converge uniformly to a reflection that fixes the origin and exchanges the half - space ( that has as its exterior normal at ) with the complementary half - space .we do not identify with in , although they label the same reflection on . if with , the polarization of with respect to is denoted by .given a sequence in , we denote the corresponding sequence of polarizations by .let be a unit vector in , and let be a nonnegative measurable function that vanishes at infinity .the * steiner symmetrization * in the direction of replaces the restriction of to each line , where , with its ( one - dimensional ) symmetric decreasing rearrangement . if the restriction of to such a line is not measurable or does not decay at infinity , we set the steiner symmetrization of equal to zero on this line .we denote the steiner symmetrization of by , or simply by . by construction , is symmetric under .note that steiner symmetrization dominates polarization in the sense that for every direction and all ( see fig . 
1 ) .polarization and steiner symmetrization share with the symmetric decreasing rearrangement the properties that they are monotone ( implies ) , equimeasurable ( for all ) , and -contractive ( ) for all .they also preserve or improve the * modulus of continuity * , which we define here as the corresponding rearrangements of a set are defined by rearranging its indicator function .conversely , the rearranged function can be recovered from its level sets with the * layer - cake principle * , different from standard conventions , we do not automatically identify functions that agree almost everywhere .we have chosen the symmetric decreasing rearrangement of a function to be lower semicontinuous .in particular , if is a set of finite volume , then is an _open _ ball .polarization and steiner symmetrization both transform open sets into open sets .polarization also transforms closed sets into closed sets , but steiner symmetrization does not .the literature contains a variant of the symmetric decreasing rearrangement that preserves compactness , where is a _ closed _ centered ball if has positive volume , if is a non - empty set of zero volume , and if . steiner symmetrization is again defined by symmetrizing along a family of parallel lines . a * random polarization * is given by a borel probability measure on that determines the distribution of the random variable , viewed as the identity map on .we assume that ; for we also assume that .a * random steiner symmetrization * is given by a borel probability measure on , or equivalently , by a measure on with .for sequences of random rearrangements with each independent and distributed according to a measure on , we use as the probability space the infinite product with the product topology , and with the product measure defined by in this view , is the -th coordinate projection on . let be the space of nonnegative continuous functions with compact support in .( if , this agrees with the space of all nonnegative continuous functions on ) .our first theorem provides a sufficient condition for the almost sure convergence of a random sequence of polarizations to the symmetric decreasing rearrangement .[ thm:46 ] let be a sequence of polarizations on , , or , defined by a sequence of independent random variables on . if for every radius and every pair of bounded sequences , in with , then at first sight , the conclusion in eq .( [ conclusion-46a ] ) , that the random sequence almost surely drives all functions in _ simultaneously _ to their symmetric decreasing rearrangements , looks stronger than eq . .as we show in the proof of theorem [ thm:46 ] , the statements are equivalent , because is separable and polarization contracts uniform distances .let be the space of nonnegative -integrable functions .since polarization also contracts -distances and is dense in , eq .( [ conclusion-46a ] ) extends to the assumption in eq .( [ assumption-46a ] ) implies that infinitely many of the assign strictly positive measure to every non - empty open set in .the measures may concentrate or converge weakly to zero as , but not too rapidly .this causes typical random sequences to be dense in .we are convinced that almost sure convergence holds under much weaker assumptions on the distribution of the random variables than eq .( [ assumption-46a ] ) . 
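as a concrete illustration of theorem [ thm:46 ] , the sketch below iterates random polarizations of a one - dimensional cell ( step ) function and tracks the uniform distance to its symmetric decreasing rearrangement . everything in it is an assumption made for the demonstration rather than part of the original construction : the grid , the restriction of reflection points to cell boundaries inside a fixed window ( so that cells map exactly onto cells and the support cannot leave the window ) , and the uniform sampling of those boundaries . on a finite grid the distance is expected to fall to the discretization scale rather than to zero .

```python
import numpy as np

rng = np.random.default_rng(0)

# cells k = 0, ..., N-1 cover [(k - M)h, (k - M + 1)h) with M = N // 2, so x = 0 is a
# cell boundary and the cell centers are c_k = (k - M + 0.5) h
N, h, K = 200, 0.1, 40            # K = half-width, in cells, of the window [-Kh, Kh]
M = N // 2
centers = (np.arange(N) - M + 0.5) * h

def polarize(f, j):
    """polarization of the cell function f with respect to the reflection across the
    cell boundary x = (j - M) h, j != M: the side containing the origin keeps the larger
    of the two mirror values, the other side keeps the smaller."""
    g = f.copy()
    far = range(j, N) if j > M else range(0, j)   # cells on the side not containing the origin
    for i in far:
        m = 2 * j - i - 1                          # mirror cell, which lies on the origin side
        if 0 <= m < N:
            g[m], g[i] = max(f[m], f[i]), min(f[m], f[i])
    return g

def symmetric_decreasing(f):
    """discrete analogue of f*: sort the cell values in decreasing order and place them
    on cells ordered by the distance of their centers from the origin."""
    order = np.argsort(np.abs(centers))
    g = np.zeros_like(f)
    g[order] = np.sort(f)[::-1]
    return g

# a continuous nonnegative function supported well inside the window [-Kh, Kh] = [-4, 4]
f = np.maximum(0.0, 1.0 - np.abs(centers - 1.0) / 2.0)
fstar = symmetric_decreasing(f)

for n in range(1, 4001):
    j = M + rng.choice([-1, 1]) * rng.integers(1, K + 1)   # random boundary, never through 0
    f = polarize(f, j)
    if n % 500 == 0:
        print(n, float(np.max(np.abs(f - fstar))))          # sup-norm gap; decays to ~grid scale
```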
a related question concerns the conditions for convergence of non - random sequences in .clearly , convergence can fail if a sequence of polarizations concentrates on a subset of that is too small to generate full rotational symmetry .since the polarization leaves subsets of unchanged , a sequence of reflections must accumulate near to ensure convergence .it is , however , neither sufficient nor necessary that the sequence be dense in : on the one hand , any given sequence of polarizations can appear as a subsequence of one for which convergence fails ( proposition [ prop : lower - p]b ) ; on the other hand , a sequence of polarizations chosen at random from certain small sets can converge to the symmetric decreasing rearrangement ( theorem [ thm : iid ] ) . rather , convergence depends on the ergodic properties of the corresponding reflections in the orthogonal group . to state the result , we introduce some more notation . for ,let be the map from to itself that fixes the half - space and reflects the complementary half - space by .we visualize as folding each centered sphere down into the hemisphere antipodal to ( see fig .given and , we refer to the set as the * orbit * of under .[ thm : iid ] let be a random sequence of polarizations on , , or , defined by independent random variables that are identically distributed according to a probability measure on with .let be the smallest closed set of full -measure in , and set if the orbit is dense in for each , then converges to the symmetric decreasing rearrangement and eq .( [ conclusion-46a ] ) holds . in one dimension, polarizations need to accumulate on both sides of the origin to produce the desired reflection symmetry . in dimension , the precise characterization of subsets that have dense orbits in is an open problem .a necessary condition is that be a * generating set of directions * for the orthogonal group , in the sense that the finite products are dense in .also , can not be contained in a hemisphere .a sufficient condition is that the antipodal pairs form a generating set of directions for , because for every and every , either , or .must contain antipodal pairs ?do directions suffice ?( see fig .generating sets of directions for are well understood .for instance , if _( i ) _ the vectors in span ; _ ( ii ) _ can not be partitioned into two non - empty mutually orthogonal subsets ; and _ ( iii ) _ at least one pair of vectors in encloses an angle that is not a rational multiple of , then is a generating set of directions . ( the third condition can be relaxed in dimensions . ) since directions in general position are a generating set for , the hypothesis of theorem [ thm : iid ] can be satisfied even by measures whose support has only a finitely many accumulation points .theorems [ thm:46 ] and [ thm : iid ] imply the following statements about steiner symmetrization .[ cor:46 ] let be a sequence of steiner symmetrizations on along independently distributed random directions in . 1 .( convergence of random steiner symmetrizations ) . if for every radius and every sequence in , then 2 .( convergence of i.i.d .steiner symmetrizations ) .the same conclusion holds , if , instead , the random directions are identically distributed according to a probability measure on whose support contains a generating set of directions for .the literature contains several different constructions for convergent sequences of rearrangements . 
in their proof of the isoperimetric inequality , carathodory andstudy recursively choose the direction of the next steiner symmetrization such that is as close to the ball as possible .lyusternik proposed a sequence that alternates steiner symmetrization in the -th coordinate direction with schwarz symmetrization in the complementary coordinate hyperplane and a well - chosen rotation .brascamp , lieb , and luttinger alternate steiner symmetrization in all coordinate directions with a rotation .the constructions of lyusternik and brascamp - lieb - luttinger yield _ universal _ sequences , which work for all nonnegative functions on that vanish at infinity .a number of authors have addressed the question of what distinguishes convergent sequences of steiner symmetrizations , and how to describe their limits .eggleston proved that full rotational symmetry can be achieved by iterating steiner symmetrization in directions that satisfy a non - degeneracy condition .klain recently showed that iterating any finite set of steiner symmetrizations on a convex body results in a limiting body that is symmetric under the subgroup of generated by the corresponding reflections . on the other hand , steiner symmetrizations along a dense set of directions may or may not converge to the symmetric decreasing rearrangement , depending on the order in which they are executed .we note in passing that , although the last three results are stated for convex sets , the proofs are readily adapted to functions in , with the arzel - ascoli theorem providing the requisite compactness in place of the blaschke selection theorem . by choosing the measure in corollary [ cor:46]b to be supported on a finite generating set of directions , we obtain an analogue of eggleston s theorem for random sequences .finding even one convergent sequence of polarizations is more difficult , because it is not enough to iterate a finite collection of polarizations .baernstein - taylor , benyamini , and brock - solynin argue by compactness that the set of functions that can be reached by some finite number of polarizations from a function contains in its closure .the greedy strategy of carathodory and study also works for the case of polarizations .both constructions result in sequences that depend on the initial function .a universal sequence was produced by van schaftingen . in these papers, considerable effort goes into the construction of convergent ( or non - convergent ) sequences that are rather special .the question whether a randomly chosen sequence converges with probability one was first raised by mani - levitska .he conjectured that for compact subsets of , a sequence of steiner symmetrizations in directions chosen uniformly at random should converge in hausdorff distance to the ball of the same volume , and verified this for convex sets .the mani - levitska conjecture was settled by van schaftingen for a larger class of rearrangements that have the same monotonicity , volume - preserving , and smoothing properties as the symmetric decreasing rearrangement .we paraphrase his results for the case of polarization .van schaftingen proves the convergence statement in eq .( [ conclusion-46a ] ) under the assumption that the random variables are independent and their distribution satisfy the uniform bound for every and every . in the proof, he first constructs a _universal _ sequence , that is , a single non - random sequence in such that the symmetrizations converge uniformly to for every . 
eq .( [ assumption - vs ] ) implies that typical random sequences closely follow the universal sequence for arbitrarily long finite segments , i.e. , for every and every integer , after taking a countable intersection over and , eq . ( [ conclusion-46a ] ) follows with a continuity argument .the condition in eq .( [ assumption - vs ] ) is stronger than the corresponding assumption of theorem [ thm:46 ] . to see this ,let , be a pair of bounded sequences in , and choose a pair of subsequences , that converge to limits and . for large , if eq .( [ assumption - vs ] ) holds , then does not converge to zero , and the series in eq .( [ assumption-46a ] ) diverges .we later show examples that satisfy eq .( [ assumption-46a ] ) but not eq . ( [ assumption - vs ] ) .independently , voli has given a direct geometric proof for the convergence of steiner symmetrizations along uniformly distributed random directions .his proof is phrased as a borel - cantelli estimate , which suggests that pairwise independence of the might suffice for convergence ( see ) . upon closer inspection, there is a conditioning argument where the independence of the comes into play .it is an open question if convergence can be proved under weaker independence assumptions .we are not aware of any prior work on rates of convergence for polarizations .there are , however , some very nice results regarding rates of convergence for steiner symmetrizations of convex bodies .klartag proved that for every convex body and every , there exists a sequence of steiner symmetrizations such that in other words , .this means that the distance from a ball decays faster than every polynomial ( * ? ? ?* theorem 1.5 ) .remarkably , is a numerical constant that depends neither on nor on the dimension . the control over the dimension builds on the earlier result of klartag and milman that steiner symmetrizations suffice to reduce the ratio between outradius and inradius of a convex set to a numerical constant .around the same time , bianchi and gronchi established bounds on the rate of convergence in the other direction .for each and every dimension , they construct centrally symmetric convex bodies in whose hausdorff distance from a ball can not be decreased by _ any _ sequence of successive steiner symmetrizations .their construction yields a lower bound on the distance from a ball for arbitrary infinite sequences of steiner symmetrizations .klartag s results have recently been extended to random symmetrizations of convex bodies .it is not known whether convergence is in fact exponential , and whether klartag s convergence estimates can be generalized to non - convex sets .the proofs of mani - levitska , van schaftingen , and voli involve a detailed analysis of typical sample paths .since they rely on compactness and density arguments , they do not yield bounds on the rate of convergence .in contrast , bianchi - gronchi and klartag use probabilistic methods to find non - random sequences with desired properties .the construction of bianchi and gronchi takes advantage of ergodic properties of reflections .klartag views the rearrangement composed of a random rotation followed by steiner symmetrizations in each of the coordinate directions as one step of a markov chain on convex bodies .he replaces the steiner symmetrizations by minkowski symmetrizations to obtain a simpler markov chain , which acts on the support function of a convex body as a random orthogonal projection in .since this simpler process is a strict contraction on the spherical 
harmonics of each positive order , the support function converges exponentially ( in expected -distance ) to a constant .he finally obtains eq .( [ eq : klartag ] ) from a subtle geometric comparison argument .we combine an analytical approach similar to klartag s with the geometric techniques used by voli .the sequence defines a markov chain on the space .we use that the functional decreases under each polarization , and make voli s conditioning argument explicit by appealing to the markov property . here, denotes integration with respect to the standard riemannian volume on , , or . for the proof of theorem [ thm:46 ] , we quantify the expected value of the drop in terms of and the modulus of continuity of .since the expected drop goes to zero , converges uniformly to . for the case of i.i.d .polarizations considered in theorem [ thm : iid ] , the challenge is that their distribution may be supported on a small set . here , we resort to a compactness argument . by monotonicity , approaches a limiting value . under the assumptions of the theorem ,the drop of has _ strictly positive expectation _ unless ( lemma [ lem : og ] ) .this forces the limits of convergent subsequences to be invariant under a family of transformations ( the folding maps parametrized by eq . ) , which play the role of _ competing symmetries _ : the only functions that are invariant under the entire family are constant on each centered sphere . our estimates for the expected drop of imply bounds on the rate of convergence that depend on the modulus of continuity of and the distribution of the . in the case where the are uniformly distributed on a suitable subset of , we show that there exists a numerical constant such that for every lipschitz continuous nonnegative function on with support in ( proposition [ prop : uniform ] ) . on the other hand, there exist lipschitz continuous functions with support in such that where and are numerical constants ( proposition [ prop : lower - p]a ) . for steiner symmetrization, we use that for every and all to bound the expected value of the drop under a random steiner symmetrization from below by the corresponding estimate for a random polarization ( corollary [ cor:46 ] ) . by the same token, the power - law bounds on the rate of convergence extend to steiner symmetrizations along uniformly distributed directions ( corollary [ cor : uniform - s ] ) .since we ignore that steiner symmetrization reduces perimeter , these bounds can not be sharp , but to our knowledge they are the only available bounds that do not require convexity .it is an open question whether the sequence converges exponentially , and how the rate of convergence depends on the dimension .is it more effective to alternate steiner symmetrizations along the coordinate directions with a random rotation , as in ?does it help to adapt the sequence to the function ?do polarizations converge more slowly , perhaps following a power law ?we start by preparing some tools for the proof of the main results .let be the functional defined in eq . .the first lemma is a well - known identity , which is related to the hardy - littlewood inequality .we reproduce its proof here for the convenience of the reader .[ lem : hl ] let be a nonnegative measurable function with , and let be a polarization. 
then ^+\ , [ d(\sigma_\omega x , o)\!-\!d(x , o)]^+\ , dm(x)\,.\ ] ] in particular , unless almost everywhere .we rewrite the functional as an integral over the positive half - space associated with , where .if for some , then the values of at and agree with the corresponding values of , and the integrand vanishes at .if , on the other hand , then the values are swapped for , and the integrand becomes , where both factors are negative .we switch the signs , collect terms , and integrate to obtain the claim .the next lemma is the key ingredient in the proof of theorem [ thm:46 ] .[ lem : key ] let be a nonnegative continuous function with compact support in for some and modulus of continuity .set , let be so small that , and let be a random variable on , as described above .then where , and the infimum extends over with .furthermore , on , where .we first construct a pair of points such that ( see fig .3a ) . by assumption, there exists a point with .set , let , and let be the corresponding level set of . if , we set . by construction , .since this set is open and non - empty , it has positive volume , and therefore , having the same volume , is non - empty .. then . similarly ,if , we set , find , and note that .since the modulus of continuity of is valid also for and , we have . by lemma [ lem : hl ] and fubini s theorem , a random polarization satisfies ^+\ , [ d(\sigma_w x , o)-d(x , o)]^+\ , dm(x)\right)\\ & \ge & \frac{\eps\rho}{2 } \int_{b_\rho(a ) } p ( d(\sigma_w x , b)<\rho)\ , dm(x)\,,\end{aligned}\ ] ] because the choice of and ensures that and for all with .. follows by minimizing over , and and evaluating the integral . for a random steiner symmetrization ,we use eq .( [ eq : trump ] ) to obtain ^+\ , \bigl[\,|\sigma_{(r,\pm u)}x|-|x|\,\bigr]^+\ , dm(x)\right)\\ & \ge & \ee\left ( \chi_{\inf \int_{b_\rho(a ) } \frac{\eps\rho}{4}\ , dm(x)\right)\\ & \ge & \frac{\eps\rho}{8 } \ , m(b_\rho(a ) ) \cdot p(2l\sin d(u , v)<\rho)\,,\end{aligned}\ ] ] where is the unit vector in the direction of , and is the enclosed angle . in the second line ,the infimum runs over , and we have used that and whenever and . in the last line , we have estimated the infimum by and applied fubini s theorem .given , let be the result of random polarizations of . since , the sequence decreases monotonically and satisfies by writing the difference as a telescoping sum and taking expectations , this implies that where the infimum extends over all with , and , , and are positive constants that depend on . we have used the markov property in the second step , and applied eq .( [ eq : key - pol ] ) of lemma [ lem : key ] in the third .in particular , the sum in eq .( [ eq : tata ] ) converges . since the first factors in the product are not summable by eq ., the second factors must have zero as an accumulation point . by monotonicity, they converge to zero .since was arbitrary , we conclude that this establishes eq . . 
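the identity in lemma [ lem : hl ] can also be checked directly in a one - dimensional cell setting , where a reflection across a cell boundary maps cells onto cells and all integrals are exact . the sketch below assumes that the functional in question is the hardy - littlewood - type quantity obtained by integrating f(x) d(x , o) dm(x) , which is the natural reading of the garbled display above ; under that reading , the drop of the functional caused by one polarization should equal the integral of [ f(\sigma_\omega x) - f(x) ]^+ [ d(\sigma_\omega x , o) - d(x , o) ]^+ dm(x) .

```python
import numpy as np

rng = np.random.default_rng(1)

# cells k = 0, ..., N-1 cover [(k - M)h, (k - M + 1)h); x = 0 is a cell boundary
N, h = 200, 0.1
M = N // 2
centers = (np.arange(N) - M + 0.5) * h

f = np.zeros(N)
f[M - 40:M + 40] = rng.random(80)       # nonnegative cell function with compact support
j = M + 7                                # reflection across the cell boundary t = 0.7 > 0

fpol = f.copy()
rhs = 0.0
for i in range(N):
    m = 2 * j - i - 1                                   # cell containing the reflected point
    fm = f[m] if 0 <= m < N else 0.0
    if i < j:                                           # same side as the origin: keep the larger value
        fpol[i] = max(f[i], fm)
        if 0 <= m < N:                                  # otherwise fm = 0 and the term vanishes
            rhs += max(fm - f[i], 0.0) * max(abs(centers[m]) - abs(centers[i]), 0.0) * h
    else:                                               # far side: keep the smaller value
        fpol[i] = min(f[i], fm)

# I(g) = integral of g(x) |x| dx, exact for cell functions because no cell straddles x = 0
I = lambda g: float(np.sum(g * np.abs(centers)) * h)
print(I(f) - I(fpol), rhs)               # the two numbers agree to machine precision
```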
to complete the proof , we choose a countable dense subset .let be a sequence in .since polarizations and the symmetric decreasing rearrangement contract uniform distances , we have for every pair of functions and every , we take and minimize over to obtain , by the density of , latexmath:[\[\begin{aligned } \lim_{n\to\infty } ||s_{\omega_1\dots \omega_n}f - f^*||_\infty & \le\inf_{g\in\mathcal g } \left\ { 2||f - g||_\infty + \lim_{n\to\infty}||s_{\omega_1\dots \omega_n}g - g^*||_\infty\right\}\\ & \le \sup_{g\in\mathcal g } \\lim_{n\to\infty } is countable , it follows that proving eq .( [ conclusion-46a ] ) . for the proof of theorem [ thm :iid ] , we need one more lemma . [lem : og ] let .* ( by polarization ) .let be a random variable on whose distribution satisfies .if the orbit of each under is dense in , then * ( by steiner symmetrization ) .let be a random variable on , and let be its probability distribution . if the support of contains a generating set of directions for , then for part ( a ) , suppose that .it follows from lemma [ lem : hl ] that , and hence , for -a.e .this means that for -a.e . and all .let . by assumption, assigns strictly positive measure to each neighborhood of in .since , we can find a sequence with that converges to such that for each and all . by continuity , for all , which means that the value of increases monotonically along orbits of .since is dense in the sphere of radius and is uniformly continuous , must be radial . to see that is symmetric decreasing , we write it as for some continuous function .consider first the cases and .given , choose with such that for all , and let be the point with normal coordinates .the reflection maps the centered sphere of radius to the sphere of the same radius centered at . since this sphere contains the points with normal coordinates , by the intermediate value theorem it contains for each ] .this proves that . for part ( b ), suppose that .we augment the random direction to a random variable on , where is exponentially distributed on , the positive and negative signs are equally likely , and the three components are independent. then by eq . .the probability distribution of is given by the measure on . by construction , .since the support of contains a generating set of directions for , the orbit of any vector under is dense in .therefore , satisfies the assumptions of part ( a ) , and we conclude that .finally , the converse implications hold because for all .let be a random variable on that is distributed according to the measure from the statement of the theorem .lemma [ lem : og ] guarantees that unless .let be the set of all nonnegative continuous functions supported in the ball of radius whose modulus of continuity is bounded by . since is continuous in the uniform topology and compact by the the arzel - ascoli theorem , for each .given , let be its modulus of continuity , and assume that is supported in . denote by the result of random polarizations of .since polarization preserves the modulus of continuity and the ball , we have .we argue as in the proof of theorem [ thm:46 ] that in the second line , we have used the markov property and the definition of .since , the sequence converges almost surely uniformly to , and eq .( [ conclusion-46a ] ) follows .we proceed as in the proofs of theorems [ thm:46 ] and [ thm : iid ] , with eq . and lemma [ lem : og]b in place of eq . 
and lemma [ lem : og]a . the following lemma allows us to transform integrals over into integrals over . geometrically , we map to the image of a point under the reflection . since for every point there exists a unique reflection that maps to , this defines a diffeomorphism from to . for , the diffeomorphism agrees with the polar coordinate map . [ lem : jacobian ] let . then for every measurable function on such that the integral on the left hand side converges . here denotes the uniform measure on . in polar coordinates centered at and , the volume element transforms as . set . if we write and express in polar coordinates , then because the lines are invariant under . if moves by a certain distance , then moves by that distance in either the direction of or in the opposite direction ( see fig . 4 ) . in polar coordinates , the metric on transforms as . the claim follows by returning to cartesian coordinates for . we use this formula to construct examples of measures that satisfy the hypothesis of theorem [ thm:46 ] but not eq . ( [ assumption - vs ] ) . consider the gaussian probability measure on whose density is the centered heat kernel at time . by changing to polar coordinates , we obtain a probability measure on , given by where . fix , let be a pair of points in with , and consider the event . if , we use that to see that ] and , then in the proof of the proposition , we will use the following lemma . [ expected drop in symmetric difference ] [ lem : uniform ] if is a uniformly distributed random variable on , then for every measurable set in . fix , and let and be the half - spaces associated with . by construction , polarization swaps the portion of that lies in with its mirror image in ( see fig . ) . of these sets , precisely the portion of whose reflection lies in contributes to the symmetric difference , twice , see fig . 3b . but this just means that we compute the expectation , using fubini s theorem and the change of variables from lemma [ lem : jacobian ] . the result is where . in the last step , we have used that the distance between and is at most , and that and have the same volume . note that the riemannian volume of the unit sphere in is related to the lebesgue measure of the unit ball by . consider first the case where for some measurable set , and let . by lemma [ lem : uniform ] , the markov property , and jensen s inequality , where . this shows that satisfies the recursion relation . since and , it follows that if is a nonnegative bounded measurable function on , we use the layer - cake principle to write and likewise for . since is bounded , the integrand vanishes for , and we obtain from eq . that proving the first claim . if is hölder continuous , then and and are hölder continuous with the same modulus of continuity . let , and set . since differs from by at least on some ball of radius , we have . we obtain from eq . ( [ eq : rate ] ) that applying jensen s inequality once more , we arrive at the leading constant is maximized at and , and eq . follows .
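the two quantities that lemma [ lem : uniform ] relates — the expected measure of the symmetric difference between a set and its polarization under one random reflection , and the distance of the set from the centered ball of the same volume — can both be estimated by monte carlo . in the sketch below the set is an off - center disk in the euclidean plane , and "uniformly distributed" is interpreted as a uniform direction together with an offset drawn uniformly from (0 , s_max ] ; this interpretation , the value of s_max and the disk itself are assumptions of the illustration , and the stripped constant in the lemma is not reproduced .

```python
import numpy as np

rng = np.random.default_rng(2)

# the set A: a disk of radius r centered at c != 0 in the euclidean plane
c, r = np.array([1.5, 0.0]), 1.0
in_A = lambda x: np.linalg.norm(x - c, axis=-1) < r
in_Astar = lambda x: np.linalg.norm(x, axis=-1) < r        # centered ball of the same volume

s_max = 3.0                      # reflections across lines {x . u = s} with 0 < s <= s_max
L = np.linalg.norm(c) + r + 2 * s_max                      # box half-width; contains A and all reflected copies
box_area = (2 * L) ** 2
n_pts, n_refl = 100_000, 200

drops = []
for _ in range(n_refl):
    theta = rng.uniform(0.0, 2 * np.pi)
    u = np.array([np.cos(theta), np.sin(theta)])
    s = rng.uniform(0.0, s_max)
    x = rng.uniform(-L, L, size=(n_pts, 2))                # monte carlo sample of the box
    sx = x + 2 * (s - x @ u)[:, None] * u                  # reflected points
    near = x @ u < s                                       # half-plane containing the origin
    a, a_ref = in_A(x), in_A(sx)
    in_pol = np.where(near, a | a_ref, a & a_ref)          # membership in the polarized set
    drops.append(np.mean(in_pol != a) * box_area)          # area of the symmetric difference

x = rng.uniform(-L, L, size=(n_pts, 2))
dist_star = np.mean(in_A(x) != in_Astar(x)) * box_area     # m(A triangle A*)

print(np.mean(drops), dist_star, np.mean(drops) / dist_star)
```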
by eq .( [ eq : trump ] ) , proposition [ prop : uniform ] extends to steiner symmetrization along directions chosen independently and uniformly at random on .[ rate of convergence for random steiner symmetrizations ] [ cor : uniform - s ] if is a sequence of independent uniformly distributed random variables on , then for every nonnegative bounded measurable function with support in .if is hlder continuous with modulus of continuity for some ] , and .its polarization at is given by ^+\,,\ ] ] where is the folding map that fixes the positive half - space and reflects across the separating hyperplane .[ prop : lower - p ] let be given by eq .( [ eq : def - f - p ] ) . *( convergence of random polarizations is not faster than exponential ) .+ if , then for every sequence of independent random variables on such that the distribution of each is symmetric under . *( non - convergence ) . if , then there exists a dense sequence in such that has no limit in . a single random polarization results in ^+ ] , where and . by the markov property , , and the first claim follows . for the second claim , we realize an arbitrary sequence as a subsequence of one for which convergence fails . given , fix and define as follows .on the odd integers set , where , and the sign is chosen in such a way that is unchanged by . on the even integers , set . if is dense , then is dense as well . set ^+ ] for some .let with . by density , we can find a subsequence that converges to .since both and converge to , we must have .since was arbitrary , it follows that . on the other hand , , a contradiction .the corresponding bounds for steiner symmetrizations on are slightly more involved . as an example, we use the function ^+\,,\ ] ] where is a positive definite symmetric matrix .the symmetric decreasing rearrangement of is ^+ ] , where is a positive definite symmetric matrix whose the extremal eigenvalues satisfy we apply lemma [ lem : eval ] and take expectations .let and be the eigenvectors of corresponding to and , and set and . by taking advantage of the rotation invariance ,we compute and for all , see ( * ? ? ? * exercise 63 , p. 80 ) .this results in the claim follows by evaluating the right hand side at , where it assumes its minimum value , and using eq .( [ eq : eval ] ) . [ proof of proposition [ prop : lower - s ] ] let be given by eq .( [ eq : def - f - s ] ) with some positive definite symmetric matrix .we first consider the case of a random sequence , where the directions are independent and uniformly distributed on . by lemma [ lem : ellipsoid ], we can write in the form ( [ eq : def - f - s ] ) with a positive definite symmetric matrix that is recursively defined by eq .( [ eq : ellipsoid ] ) with .we iterate the estimate in lemma [ lem : extremal ] , using the markov property , and obtain that the gap between the extremal eigenvalues of is at least . since we assumed that , it follows from eq .( [ eq : uniform - lambdas ] ) that for the second claim , we proceed as in the proof of proposition [ prop : lower - p ] by realizing an arbitrary sequence as a subsequence of one for which convergence fails . given in ,let so small that , where as in lemma [ lem : extremal ] , and construct the sequence as follows . in the first step , pick to be a maximizing eigenvector of .suppose we have already chosen such that for each , and that appear as a subsequence .if , pick . 
otherwise , choose on the great circle that joints with in such a way that and .since diverges , the entire sequence is incorporated as a subsequence into . if is dense , so is .let ^+ ] .by definition , the level set of at height is the outer parallel set .the level set of at that height is the centered ball of the same volume .its radius , defined by depends continuously on and converges to the radius of as .set . since for all , by theorem [ thm:46 ] .this proves the first claim .if has zero volume , we continuously extend the function such that and replace the auxiliary function with ^+\,,\ ] ] where is an arbitrary constant .the level sets of at heights below are outer parallel sets of , while the level sets at heights above are inner parallel sets .it follows that in the second line , we have used that on .the last line follows from theorem [ thm:46 ] and the continuity of .similar arguments can be used to bound the rate of convergence for sets with additional regularity properties .let be a compact set in , and define and as in the proof of the second claim of proposition [ prop : compact ] .assume that , and that is differentiable at with .by proposition [ prop : uniform ] there exists a sequence such that expanding about , we obtain from eq .( [ eq : radii - s ] ) that as , where . after dropping an initial segment from the sequence, we may replace with the radius of the smallest centered ball containing .choosing sufficiently large and sufficiently small , we can find a sequence of polarizations where eq. holds with ._ the conclusions of proposition [ prop : compact ] also hold for random steiner symmetrizations that satisfy the assumptions of corollary [ cor:46 ] .likewise , eq . applies to sequences of steiner symmetrizations along i.i.d .uniformly distributed directions .however , in view of klartag s result for convex sets , we expect such sequences to converge more rapidly ( see eq .( [ eq : klartag ] ) ) .this paper is based on results from m.f.s 2010 master s thesis at the university of toronto .our research was supported in part by nserc through discovery grant no .311685 - 10 ( burchard ) and an alexander graham bell canada graduate scholarship ( fortier ) .a.b . wishes to thank gerhard huisken ( albert - einstein institut in golm ) , bernold fiedler ( freie universitt berlin ) , nicola fusco ( universit di napoli federico ii ) and adle ferone ( seconda universit di napoli , caserta ) for hospitality during a sabbatical in 2008/09 .special thanks go to aljoa voli for an inspiring discussion of his results on steiner symmetrization that provided the original motivation for our work , and to bob jerrard for pointing out an error in an earlier version of eq .( [ eq : radii ] ) .we note that a non - convergence result similar to proposition [ prop : lower - s]b was obtained independently by bianchi , klain , lutwak , yang , and zhang .y. benyamini , _ two - point symmetrization , the isoperimetric inequality on the sphere , and some applications _ , texas functional analysis seminar ( 198384 ) , longhorn notes , university of texas press , austin , 1984 , pp .
|
we derive conditions under which random sequences of polarizations ( two - point symmetrizations ) converge almost surely to the symmetric decreasing rearrangement . the parameters for the polarizations are independent random variables whose distributions need not be uniform . the proof of convergence hinges on an estimate for the expected distance from the limit that yields a bound on the rate of convergence . in the special case of i.i.d . sequences , almost sure convergence holds even for polarizations chosen at random from suitable small sets . as corollaries , we find bounds on the rate of convergence of steiner symmetrizations that require no convexity assumptions , and show that full rotational symmetry can be achieved by randomly alternating steiner symmetrizations in a finite number of directions that satisfy an explicit non - degeneracy condition . we also present some negative results on the rate of convergence and give examples where convergence fails .
|
complex dynamical systems are high dimensional in nature .the determination of simple general principles governing the behavior of such systems is an outstanding problem which has attracted a great deal of attention in connection with recent network and graph - theoretical constructs . herei focus on synchronization , which is the process that has attracted most attention , and use this process to study the interplay between network structure and dynamics .synchronization is a widespread phenomenon in distributed systems , with examples ranging from neuronal to technological networks .previous studies have shown that network synchronization is strongly influenced by the randomness , degree ( connectivity ) distribution , correlations , and distributions of directions and weights in the underlying network of couplings .but what is the ultimate origin of these dependences ? in this paper ,i show that these and other important effects in the dynamics of complex networks are ultimately controlled by a small number of network parameters . for concreteness ,i focus on complete synchronization of identical dynamical units , which has served as a prime paradigm for the study of collective dynamics in complex networks . in this case, the synchronizability of the network is determined by the largest and smallest nonzero eigenvalues of the coupling ( laplacian ) matrix .my principal result is that , for a wide class of complex networks , these eigenvalues are tightly bounded by simple functions of the weights and degrees in the network .the quantities involved in the bounds are either known by construction or can be calculated in at most operations for networks with nodes and links , whereas the numerical calculation of the eigenvalues of large networks would be prohibitively costly since it requires in general operations even for the special case of undirected networks .these bounds are in many aspects different from those known in the literature of graph spectral theory and are suitable to relate the physically observable structures in the network of couplings to the dynamics of the entire system .the eigenvalue bounds are then applied to design complex networks that display predetermined dynamical properties and , conversely , to determine how given structural properties influence the network dynamics .this is achieved by exploring the fact that the quantities used to express the bounds have direct physical interpretation .this leads to conditions for the enhancement and suppression of synchronization in terms of physical parameters of the network. the main results also apply to a class of weighted and directed networks and are thus important to assess the effect of nonuniform connection weights in the synchronization of real - world networks . the proposed method for network designis based on a relationship between the eigenvalues of a substrate network that incorporates the structural constraints imposed to the system and those of weighted versions of the same network .this method is thus complementary to other recently proposed approaches for identifying or constructing networks with desired dynamical properties .the paper is organized as follows . in sec .[ sec2 ] , i define the class of networks to be considered and announce the main result on the eigenvalue bounds , which is proved in the appendix . in sec. 
[ sec3 ] , i discuss an eigenvalue approach to the study of network synchronization .the problem of network design and the impact of the network structure on dynamics is considered in secs .[ sec4 ] and [ sec5 ] , respectively . concluding remarks are incorporated in the last section .the dynamical problems considered in this paper are related to the extreme eigenvalues of the laplacian matrix .this section concerns the bounds of these eigenvalues .most previous studies related to network spectra and dynamics have focused on unweighted networks of symmetrically coupled nodes . in order to account for some important recent models of weighted and directed networks ,here i consider a more general class of networks .the networks are defined by adjacency matrices satisfying the condition that is a symmetric matrix , where is the degree of node , factor is the total strength of the input connections at node , and is the number of nodes in the network . according to this condition , the in- and out - degrees are equal at each node of the network , although the strengths of in- and out - connections are not necessarily the same .matrix is possibly weighted : if there is a connection between nodes and and otherwise , where because of the normalization factor . the class of networks defined by eq .( [ e3 ] ) includes as particular cases all undirected networks ( both unweighted and weighted ) and all directed networks derived from undirected networks by a node - dependent rescaling of the input strengths .the dominant directions of the couplings are determined by and the weights by both and , where defines the mean and the relative strength of the individual input connections at node .the usual unweighted undirected networks correspond to the case where is binary and for all the nodes .the study of this class of networks is motivated by both physical and mathematical considerations . from the mathematical viewpoint ,i show in the appendix that the conditions imposed to matrix guarantee that the corresponding coupling matrices are diagonalizable and have real spectra .physically , this coupling scheme is general enough to reproduce the weight distribution of numerous realistic networks and to show how the combination of topology , weights , and directions affect the dynamics .indeed , the weighted and directed networks comprised by the adjacency matrix in ( [ e3 ] ) include important models previously considered in the literature , such as the models where , used to study coupled maps and to address the effects of asymmetry and saturation of connection strengths .it also includes the models introduced in refs . , where the connection strengths depend on the degrees of the neighboring nodes , and other models reviewed in ref . . inwhat follows , i consider the general class of networks defined by eq .( [ e3 ] ) with the additional assumption that each network has a single connected component .the coupling matrix relevant to this study is the laplacian matrix , where the laplacian matrix can be written as , where is the matrix of input strengths , is a normalized laplacian matrix , is the matrix of degrees , and .as shown in the appendix , matrices and are diagonalizable and all the eigenvalues of and are real . for connected networks where all the input strengths are positive , as assumed here , the eigenvalues of matrices , , and can be ordered as respectively . 
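the sketch below builds one concrete member of this class and checks the stated properties numerically . the specific recipe a_ij = ( s_i / k_i ) w_ij , with w the symmetric binary adjacency matrix of an undirected substrate , is an assumption : it is chosen because it gives equal in- and out - degrees at every node , row sums equal to the input strengths s_i , and a laplacian that factorizes into the input - strength matrix times a normalized laplacian , which matches the description above ; the erdős - rényi substrate and the numerical parameters are arbitrary .

```python
import numpy as np

rng = np.random.default_rng(3)

# undirected, unweighted substrate: an erdos-renyi graph (assumed connected for these parameters)
n, p = 200, 0.10
W = (rng.random((n, n)) < p).astype(float)
W = np.triu(W, 1); W = W + W.T
k = W.sum(axis=1)                       # degrees (in-degree = out-degree at every node)

# node-dependent input strengths; here drawn at random just for illustration
S = rng.uniform(1.0, 3.0, size=n)

# one realization of the coupling class: A_ij = (S_i / k_i) W_ij, so row i of A sums to S_i
A = (S / k)[:, None] * W
L = np.diag(A.sum(axis=1)) - A          # laplacian of the weighted, directed network

# L factorizes as (input-strength matrix) x (normalized laplacian I - D^{-1} W)
Lnorm = np.eye(n) - W / k[:, None]
print(np.max(np.abs(L - np.diag(S) @ Lnorm)))      # ~ 1e-15: the factorization holds

# although L is not symmetric, it is similar to a symmetric matrix, so its spectrum is real
ev = np.linalg.eigvals(L)
print(np.max(np.abs(ev.imag)))                     # ~ 1e-13: numerically real
lam = np.sort(ev.real)
print(lam[0], lam[1], lam[-1])                     # lambda_1 = 0 < lambda_2 <= ... <= lambda_N
```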
the strict inequalities and follow from eq .( [ eeq9 ] ) , which expresses ( and also if one takes ) as a sum of nonnegative terms with at least one of them being nonzero when the network is connected .the identities are a simple consequence of the zero row sum property of matrices and .i now turn to the analysis of the eigenvalues of the laplacian matrix .i use , and to denote the minimum , maximum and mean degree in the network .the minimum and maximum input strengths are denoted by and , respectively , while is used to denote the minimum degree among the nodes with input strength .i first state the following general theorem .* theorem : * the largest and smallest nonzero eigenvalues of matrices , , and are related as where , , and , for any network with adjacency matrix satisfying ( [ e3 ] ) .this theorem is important because it relates the desired and usually unknown eigenvalues of laplacian matrix with the input strengths and the often approximately known eigenvalues of the normalized laplacian matrix . in general, one has , which follows as a simple generalization of the results in ref . to the weighted and directed networks defined by eq .( [ e3 ] ) .physically , the eigenvalues and are related to relaxation rates , while and are just the input strengths and , respectively .a special case of the theorem was announced in ref .the theorem is proved in the appendix . in the remaining part of the paperi explore applications of the theorem .in this section , networks of identical oscillatory systems are used to discuss how the coupling cost and stability of synchronous states are expressed in terms of the eigenvalues considered in the previous section .consider a network of diffusively coupled dynamical units modeled by where the first term on the r.h.s. describes the dynamics of each unit , while the second equals ] , where correspond to perturbations transverse to the synchronization manifold .the synchronous state is linearly stable if and only if the largest lyapunov exponent for this equation is negative for each transverse mode , where are the nonzero eigenvalues of in eq .( [ eq31 ] ) . in a broad class of oscillatory dynamical systems ,function is negative in a single interval .the synchronous state is then stable for some if the eigenvalues of the laplacian matrix satisfy the condition \equiv \frac{\lambda_n}{\lambda_2 } < \frac{\alpha_2}{\alpha_1}[{\bf f},{\bf h},\bf{s } ] .\label{e7}\ ] ] the r.h.s . of this inequality depends only on the dynamics while the l.h.s .depends only on the structure of the network , as indicated in the brackets .the smaller the ratio of eigenvalues the larger the number of dynamical states for which condition ( [ e7 ] ) is satisfied . moreover , when this condition is satisfied and is finite , the smaller the ratio the larger the relative interval of the coupling parameter for which the corresponding synchronous state is stable .when condition ( [ e7 ] ) is satisfied , the eigenvalues and are related to the synchronization thresholds as where and are the minimum and maximum coupling strengths for stable synchronization , respectively .these relations will be explored in the design of networks with predefined thresholds in sec .[ sec4 ] .this characterization is not complete without taking into account the cost involved in the coupling .the coupling cost required for stable synchronization was defined in refs . as the sum of the coupling strengths at the lower synchronization threshold , . 
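as a numerical companion to the stability condition and the thresholds just described , the sketch below evaluates the eigenvalue ratio of an unweighted network and converts it into a coupling - strength window and a coupling cost . the interval endpoints alpha_1 and alpha_2 are placeholders for the ( system - dependent ) boundaries of the interval where the master stability function is negative , and the relations sigma_min = alpha_1 / lambda_2 , sigma_max = alpha_2 / lambda_n and cost = sigma_min times the total input strength are the standard master - stability readings of the stripped displays , not formulas taken verbatim from this paper .

```python
import numpy as np

rng = np.random.default_rng(4)

# unweighted, undirected substrate (illustrative erdős - rényi network, assumed connected)
n, p = 200, 0.10
W = (rng.random((n, n)) < p).astype(float)
W = np.triu(W, 1); W = W + W.T
L0 = np.diag(W.sum(axis=1)) - W

lam = np.sort(np.linalg.eigvalsh(L0))
lam2, lamN = lam[1], lam[-1]

# placeholder stability interval of the master stability function (depends on f, h)
alpha1, alpha2 = 1.0, 30.0

if lamN / lam2 < alpha2 / alpha1:            # condition (e7): some coupling strength synchronizes
    sigma_min, sigma_max = alpha1 / lam2, alpha2 / lamN
    cost = sigma_min * W.sum()               # sum of the coupling strengths at the lower threshold
    print(f"ratio={lamN / lam2:.2f}  sigma_min={sigma_min:.4f}  sigma_max={sigma_max:.4f}  cost={cost:.2f}")
else:
    print("eigenvalue ratio too large: no coupling strength gives a stable synchronous state")
```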
this cost function can be expressed in terms of eigenvalues of the laplacian matrix , and depends separately on the dynamics ( ) and structure ( ) of the network . this can be used to derive an upper bound for expressed in terms of the ratio : therefore , the ratio is a measure of the synchronizability _ and _ cost of the network , with the interpretation that the network is more synchronizable and the cost is more tightly upper bounded when is smaller . the synchronization problem is then reduced to the study of eigenvalues of the laplacian matrix . in this section , i show how the theorem of sec . [ sec2 ] can be used to design large networks with predetermined eigenvalues and . in the synchronization problem of sec . [ sec3 ] , this corresponds to the design of networks with predetermined lower ( ) and upper ( ) synchronization thresholds . given an arbitrary _ substrate _ network of nodes and known eigenvalues and , the bounds in eqs . ( [ eq41 ] ) and ( [ eq42 ] ) can be used to generate networks of eigenvalues and , where the uncertainties and depend on and , respectively . here , and denote the desired values and and denote the resulting eigenvalues , which have some uncertainty . this procedure is illustrated in fig . [ fig1 ] and can be used to systematically design robust networks with tunable extreme eigenvalues . the rationale here is that the substrate network is chosen to incorporate topological constraints relevant to the problem , such as the nonexistence of links between certain nodes or a limit in the number of links , and that the extreme eigenvalues of the normalized laplacian of this network are calculated beforehand . then , by adjusting the minimum and maximum input strengths and , one can define new networks with the same topology but with the desired extreme eigenvalues for the laplacian matrix . ( fig . [ fig1 ] caption ) for a given substrate network with normalized eigenvalues and . in ( a ) , for given , the blue area ( bottom ) represents the uncertainty in due to the factor and the triangular line ( top ) accounts for any inaccuracy in the determination of . in ( b ) , for given , the black , blue , gray and black areas ( top to bottom ) represent the uncertainty in due to the factors , , , and any inaccuracy in the determination of , respectively . here , and are used to indicate the maximum and minimum of and , respectively , in the given interval of . more specifically , if is known , one can adjust the largest input strength using eq . ( [ eq41 ] ) to obtain a new network with in the interval by taking [ see fig . [ fig1](b ) ] . naturally , the usefulness of this construction will depend on how close to are the eigenvalue and , and how close to are kept and as the weights are changed . the former condition can be justified for most networks in the usual ensembles of densely connected random networks and also in ensembles of sparse networks with large mean degree . note that this approach can be effective even when and are only approximately known , as represented by the upper and lower black diagonal lines in fig . [ fig1](a ) and ( b ) , respectively . the last observation is relevant precisely when and are estimated from an ensemble distribution or through any other probabilistic procedure . importantly , because is mainly controlled by and by , both eigenvalues can be adjusted simultaneously . in the synchronization problem , this can be used to define networks with predetermined synchronizability and a predetermined upper bound for the coupling cost . moreover , this construction is not unique , that is , there are multiple choices of the substrate network and of the assignment of weights versus degrees that will lead to the same pair of predefined eigenvalues and . this freedom can be explored to increase robustness against structural perturbations and to control the uncertainty by keeping large and small . ( fig . [ fig2 ] caption ) and ( b ) for erdős - rényi networks . the histograms correspond to realizations of the networks for and . consider unweighted erdős - rényi networks , generated by adding with probability a link between each pair of given nodes . as shown in the histograms of fig . [ fig2 ] , the eigenvalues and are narrowly distributed close to even for relatively small and sparse networks . such networks can thus be used as substrate networks to generate , with good accuracy , new networks of predefined eigenvalues and by reassigning the input strengths and , respectively . while a single realization of the substrate network and a deterministic assignment of input strengths and would suffice to generate the desired networks , the robustness of the proposed procedure becomes more visible if one considers various independent random constructions . for this purpose , i consider random realizations of the substrate network and assume that , for each such realization , the input strength of each node is assigned with equal probability to be either or .
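a minimal numerical version of this construction is sketched below . it reuses the assumed coupling recipe a_ij = ( s_i / k_i ) w_ij from the earlier sketch , draws each input strength as s_min or s_max with equal probability , and compares the resulting extreme eigenvalues of the laplacian with the normalized - laplacian eigenvalues of the substrate ; the printed products s_min mu_2 and s_max mu_n serve only as crude anchors , not as the sharper bound factors of eqs . ( [ eq41])-([eq42 ] ) , which are not reproduced here .

```python
import numpy as np

rng = np.random.default_rng(5)

n, p = 200, 0.10
S_min, S_max = 1.0, 5.0

for _ in range(5):                                   # a few independent substrate realizations
    W = (rng.random((n, n)) < p).astype(float)
    W = np.triu(W, 1); W = W + W.T                   # erdős - rényi substrate (assumed connected)
    k = W.sum(axis=1)

    # normalized laplacian of the substrate; mu_2 and mu_N concentrate near 1 for dense random graphs
    mu = np.sort(np.linalg.eigvalsh(np.eye(n) - W / np.sqrt(np.outer(k, k))))
    mu2, muN = mu[1], mu[-1]

    # input strength S_min or S_max with equal probability; A_ij = (S_i / k_i) W_ij as before
    S = rng.choice([S_min, S_max], size=n)
    D = np.diag(np.sqrt(S / k))
    lam = np.sort(np.linalg.eigvalsh(D @ (np.diag(k) - W) @ D))   # spectrum of L via a similar symmetric matrix
    lam2, lamN = lam[1], lam[-1]

    print(f"mu2={mu2:.3f} muN={muN:.3f}   lambda2={lam2:.3f} (S_min*mu2={S_min * mu2:.3f})"
          f"   lambdaN={lamN:.3f} (S_max*muN={S_max * muN:.3f})   ratio={lamN / lam2:.3f}")
```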
( fig . [ fig3 ] caption ) ( a ) and ( b ) : the black lines indicate the upper and lower bounds given by eqs . ( [ eq41])-([eq42 ] ) and the red lines indicate the numerically determined eigenvalues as functions of ( for ) and of ( for ) , respectively . each choice of in ( a ) and of in ( b ) corresponds to realizations of the substrate networks for the same parameters used in fig . . insets : distributions of ( a ) and ( b ) for the networks used in the main panels ( a ) and ( b ) , respectively . figure [ fig3 ] shows the numerically computed eigenvalues and , and the respective bounds , as functions of and . this figure is a scatter plot with independent realizations of the substrate networks ( and assignments of input strengths ) for each choice of and . as shown in the figure , except for the lower bound of , which exhibits observable dependence on the specific network realization , the distributions of the eigenvalues and bounds are narrower than the width of the lines in the figure . in addition , the numerically computed values of and are tightly bounded by the lower and upper limits in eqs . ( [ eq41])-([eq42 ] ) . the difference between the bounds of in fig . [ fig3](b ) is thinner than the width of the line . moreover , as ( ) is varied for fixed ( ) in fig . [ fig3](a ) ( fig . [ fig3](b ) ) , the value of ( ) remains nearly constant , as shown in the insets . thus , by varying both and , one can design networks where both and are predetermined . figure [ fig4 ] shows the result of such a construction for the ratio of eigenvalues . note that if all the input strengths are re - scaled by a common factor , the terms , , and in eq . ( [ eq41 ] ) as well as the terms , , and in eq . ( [ eq42 ] ) will change by the same factor . therefore , the ratio and corresponding bounds do not change if , in our simulations , both and are re - scaled by a common factor .
in this sectioni focus on large random networks , which forms one such class of networks . for concreteness ,consider random networks for which the normalized matrix is unweighted .that is , random networks which are either unweighted or whose weights are factored out completely in eq .( [ e3 ] ) . for these networks, one can invoke the known result from graph spectral theory that the expected values of the extreme eigenvalues of approach as and for large mean degree .this behavior has been shown to remain valid for networks with quite general expected degree sequence and to be consistent with numerical simulations on various models of growing and scale - free networks , even when the networks are relatively small and only approximately random insofar as .in addition , the distribution of the eigenvalues across the ensemble of random networks becomes increasingly peaked around the expected values as the size of the networks increases .furthermore , for most realistic networks , is bounded away from zero and approaches for large [ it can be replaced by if the conditions in _ remark 1 _ ( appendix ) apply ] . for unweighted networks , in particular , and , where is the average of in the network .therefore , for a wide class of complex networks , the eigenvalues and are mainly determined by , through , and by , through , respectively . in the case of unweighted ( and undirected ) networks ,the input strengths are determined by the degrees of the nodes and .thus , the bounds in eq .( [ e11 ] ) can be used to assess the effect of the degree distribution . as a specific example , consider random scale - free networks with degree distribution for and , where is a normalization factor . from the condition , one has , which leads to for large and .this simple scaling for the expected value of explains the counter - intuitive results about the suppression of synchronizability in networks with heterogeneous distribution of degrees reported in ref .random scale - free networks were found to become less synchronizable as the scaling exponent is reduced , despite the concomitant reduction of the average distance between nodes that could facilitate the communication between the synchronizing units .equation ( [ e11 ] ) shows that this effect of the degree distribution is a direct consequence of the increase in the heterogeneity of the input strengths , characterized by .equation ( [ e13 ] ) predicts this effect as a function of both the scaling exponent and the size of the network .in particular , this equation shows that scale - free networks become more difficult to synchronize as increases and this is again because increases . on the other hand , synchronizability increases as increased and becomes independent of the system size for , indicating that networks with the same degree for all the nodes are the most synchronizable random unweighted networks ( see also ref . ) . in the more general case of weighted networks ,the input strengths are not necessarily related to the degrees of the nodes .an important implication of eq .( [ e11 ] ) is that , given a heterogeneous distribution of input strengths in eq .( [ e3 ] ) , the synchronizability of the network is to some extent independent of the way the input strengths are assigned to the nodes of the network , rendering essentially the same result whether this distribution is correlated or not with the degree distribution . 
in both cases ,synchronizability is mainly determined by the heterogeneity of the input strengths and the mean degree .in particular , synchronizability tends to be enhanced ( suppressed ) when the mean degree is increased ( reduced ) and when the ratio is reduced ( increased ) .this raises the interesting possibility of controlling the synchronizability of the network by adjusting these two parameters , which was partially explored in sec .[ sec4 ] . as a specific example of control , consider a given random network with arbitrary input strengths , where the topology of the network is kept fixed and the input strengths are redefined as with regarded as a tunable ( control ) parameter .for large , synchronizability is now mainly determined by . within this approximation, synchronizability is expected to reach its maximum around , quite independently of the initial distribution of input strengths and the details of the degree distribution .this generalizes a result first announced in ref . , namely that networks with good synchronization properties tend to be at least approximately uniform with respect to the strength of the input signal received by each node ( but see remark below ) .these optimal networks have interesting properties . for ,all the nodes of the network have exactly the same input strength .thus , if nodes and are connected , the strength of the connection from to scales as , while the strength of the connection from to scales as .this indicates that , unless all the nodes have exactly the same degree , the networks that optimize synchronizability for that degree distribution are necessarily weighted _ and _ directed .moreover , if , the strength of connection from node to node is larger than the strength of the connection from node to node .therefore , in the most synchronizable networks , the dynamical units are asymmetrically coupled and the stronger direction of the connections is from the nodes with higher degrees to the nodes with lower degrees .the asymmetry and the predominance of connections from higher to lower degree nodes is a consequence of the condition that nodes with different degrees have the same input strength , a condition that introduces correlations between the weights of individual connections and the topology of the network and that has been observed to have similar consequences in other coupling models .these results combined with the interesting recent work of giuraniuc _ et al . _ on critical behavior suggest that , in realistic systems , the properties of individual connections are at least partially shaped by the topology of the network . _remark : _ the above analysis shows that for the networks satisfying the condition in eq .( [ e3 ] ) , is more tightly bounded close to the optimal value when the distribution of input strengths is more homogeneous . indeed , the bounds in eq .( [ e11 ] ) leave little room for the improvement of synchronizability by changing the weights of individual links or the way the nodes are connected if is not reduced . for classes of more general directed networks , however , one can have highly synchronizable networks with a heterogeneous distribution of . to see this ,consider the set of most synchronizable networks among all possible networks , which is precisely the set of networks with and eigenvalues .as shown in refs . 
, if the laplacian matrix is diagonalizable , then the networks with are those where each node either has output connections with the same strength to all the other nodes ( and at least one node does so ) or has no output connections at all . from this and the zero row sum property of the laplacian matrix, it follows that , where is the strength of each output connection from node .accordingly , the input strength is upper bounded by , but not necessarily the same for all the nodes . in particular, since the strengths of the output connections can have any values ( as long as at least one is non - zero ) , in this case there is no lower limit for and the ratio can be arbitrarily large despite the fact that .therefore , even when the spectra is real , strictly directed networks can be fundamentally different from the directed networks considered here .i have presented rigorous results showing that the extreme eigenvalues of the laplacian matrix of many complex networks are bounded by the node degrees and input strengths , where the latter can be interpreted as the _ weighted _ in - degrees in the networks .these results can be used to predict and control the coupling cost and a number of implications of the network structure on the dynamical properties of the system , such as its tendency to sustain synchronized behavior .i have shown here that these results can also be used to design networks with predefined dynamical properties . while i have focused on complete synchronization of identical units , the leading role of and revealed in this workalso provides insights into other forms of synchronization .in particular , it seems to help explain : the suppressive effects of heterogeneity in the synchronization of pulse - coupled and non - identical oscillators ; the dominant effect of the mean degree in the synchronization of time - delay systems with normalized input signal ; and the dominant effect of the degree in the synchronization of homogeneous networks of bursting neurons . the scale - free model of neuronal networks considered in ref . , which was shown to generate large synchronous firing peaks , is also consistent with ( an extrapolation of ) the results above .indeed , the networks in that model are scale free only with respect to the out - degree distribution and are homogeneous with respect to the in - degree distribution .therefore , the results presented here may serve as a reference in the study of more general systems , including those with heterogeneous dynamical units . 
in general, the impact of the network structure will change both with the specific synchronization model and with the specific question under consideration , and an important open problem is to understand _ how _ it changes .finally , since the laplacian eigenvalues also govern a variety of other processes , including the relaxation time in diffusion dynamics , community formation , consensus phenomena , and first - passage time in random walk processes , the results reported here are also expected to meet other applications in the broad area of dynamics on complex networks , particularly in connection with network design in communication and transport problems .the author thanks dong - hee kim for valuable discussions and for reviewing the manuscript .in what follows i use the notation that , if is a matrix with eigenvalues , then denotes a normalized eigenvector of eigenvalue .the proof of the theorem is divided in 6 steps .+ _ step 1 _ : the eigenvalues of matrices and satisfy where and .equations ( [ eq61 ] ) and ( [ eq62 ] ) follow from the identities det det and det det , respectively , where is an arbitrary number and is the identity matrix . because matrices and are symmetric , their eigenvalues are real , as assumed in eqs .( [ eq31 ] ) and ( [ eq32 ] ) , and the corresponding eigenvectors can be chosen to form orthonormal bases .+ _ step 2 _ : the diagonalizability of matrices and , a condition invoked in the rest of this appendix , can be demonstrated as follows .matrix is symmetric and hence has a set of orthonormal eigenvectors .then , from the identity , and the fact that is nonsingular , it follows that forms a set of linearly independent eigenvectors of .this implies that is diagonalizable . from the special case , it follows that the same holds true for . + _ step 3 _ : the upper bound of in eq .( [ eq41 ] ) follows immediately from where is the usual euclidean norm .+ _ step 4 _ : the lower bound of in eq .( [ eq41 ] ) is derived from where is a unit vector chosen such that and is the index of a node with the largest input strength .equation ( [ eeq8 ] ) leads to and this leads to the lower bound in eq .( [ eq41 ] ) with a strict inequality for finite size networks . in the particular case of unweighted networks , eq .( [ eqn1 ] ) implies ( see also ref .+ _ step 5 _ : now i turn to the upper bound of in eq .( [ eq42 ] ) . from the identity eig eig, one has where denotes the usual scalar product .this equation can be rewritten as where i have used that to obtain the identities where is the component of orthogonal to .the minimum in eq .( [ eeq9 ] ) can be upper - bounded by taking , where is the index of a node with the smallest input strength , and this leads to the upper bound in eq .( [ eq42 ] ) .+ _ remark 1 : _ a different bound , , is obtained for any by using to upper - bound in eq .( [ step5 ] ) .this leads to if there are two nodes with minimum input strength that are not connected to each other .+ _ remark 2 : _ alternatively , one can show that eig eig and use this to upper - bound with for in the span of .if there are two or more nodes with minimum input strength , then it follows from this bound that .+ _ step 6 _ : the lower bound of in eq .( [ eq42 ] ) is derived as follows . from the identity eig eig, one has the identity and the observation that the minimum of the product is lower - bounded by the product of the minimums lead to where in the r.h.s . 
of eq . ( [ eq22 ] ) one has the maximum of the function under the constraints and , which can be determined using the lagrange multipliers method with two multipliers . the resulting set of equations is where and are the lagrange multipliers . this system of equations can be solved for under the corresponding constraints by taking of eq . ( [ eq23 ] ) multiplied by , , and , respectively . the result is the lower bound in eq . ( [ eq42 ] ) follows from eqs . ( [ eq18 ] ) , ( [ eq22 ] ) , and ( [ eq24 ] ) , and this concludes the proof of the theorem . notes : a different scaling is possible if there is a cut - off in the degree distribution [ burda z and krzywicki a 2003 _ phys . rev . e _ * 67 * 046118 ] or if the networks are constructed following a different ( e.g. , non - random ) procedure [ pecora l m and barahona m 2005 _ chaos complexity lett . _ * 1 * 61 ] . dynamically , these differences are expected to be more evident for networks of non - identical oscillators or excitable systems [ paula d r , araujo a d , andrade j s , herrmann h j and gallas j a c 2006 _ phys . rev . e _ * 74 * 017102 ] . an important byproduct of identities ( [ eq61 ] ) and ( [ eq62 ] ) is that , numerically , the computation of the eigenvalues and is significantly less time demanding when evaluated from the symmetric matrices and , respectively .
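the numerical shortcut mentioned in the last sentence can be illustrated with the standard normalized laplacian as a stand-in for the matrices of the paper (which are not reproduced here): a similarity transformation maps the non-symmetric form to a symmetric one with the same real spectrum, which can be diagonalized with the cheaper and more robust routines for symmetric matrices.
....
# Sketch of the numerical shortcut: D^{-1} L is similar to the symmetric matrix
# D^{-1/2} L D^{-1/2}, so both have the same (real) spectrum, but the symmetric
# form can be handled by eigvalsh. The normalized Laplacian below is only a
# concrete stand-in for the matrices discussed in the text.
import numpy as np

rng = np.random.default_rng(2)
n = 300
a = (rng.random((n, n)) < 0.04).astype(float)
a = np.triu(a, 1); a = a + a.T                     # undirected adjacency matrix
d = a.sum(axis=1)                                  # degrees (assumed nonzero here)
lap = np.diag(d) - a

d_inv = np.diag(1.0 / d)
d_inv_sqrt = np.diag(1.0 / np.sqrt(d))

ev_general = np.sort(np.linalg.eigvals(d_inv @ lap).real)         # general solver
ev_symmetric = np.sort(np.linalg.eigvalsh(d_inv_sqrt @ lap @ d_inv_sqrt))

print("max |difference| between the two spectra:",
      np.max(np.abs(ev_general - ev_symmetric)))
....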
|
the identification of the limiting factors in the dynamical behavior of complex systems is an important interdisciplinary problem which often can be traced to the spectral properties of an underlying network . by deriving a general relation between the eigenvalues of weighted and unweighted networks , here i show that for a wide class of networks the dynamical behavior is tightly bounded by few network parameters . this result provides rigorous conditions for the design of networks with predefined dynamical properties and for the structural control of physical processes in complex systems . the results are illustrated using synchronization phenomena as a model process . _ keywords _ : laplacian eigenvalues , weighted networks , network dynamics
|
the open archives initiative ( oai ) announced the oai metadata harvesting protocol ( oaimh ) v1.0 on 21 january 2001 after a period of pre - release testing .it is intended that the protocol will not be changed until 12 - 18 months have elapsed from the initial release .this period of stability is designed to allow time for thorough evaluation without the cost of multiple rewrites for early implementers .the oaimh protocol was designed as a simple , low - barrier way to achieve interoperability through metadata harvesting .it is still an open question as to exactly how useful metadata sharing will be .however , there is certainly considerable interest in oai and experience with early oaimh implementations is encouraging .this tutorial is organized in four main sections . in section [ sec : not ] , i hope to clear up some common misconceptions about what oaimh is . in section [ sec: concepts ] , i review some of the concepts and assumptions that underly the oaimh protocol . then , in the remaining two sections , sections [ sec : dp ] and [ sec : sp ] , i consider implementation of the _ data - provider _ and _ service - provider _ sides of the oaimh protocol .perl code examples are given to implement bare - bones versions of these two interfaces .it is not my intention to offer a complete description of the oaimh protocol but instead to describe its use in very practical terms , and to highlight common practice among implementers .a copy of the oaimh protocol specification should be at hand while reading this tutorial .i will refer to sections within the protocol specification as 2.1 ( for section 2.1 ) .the most common misconception of oaimh , as it currently stands , is that it provides mechanisms to expose and harvest full - content ( documents , images , ) .this is not true , oaimh is a protocol for the exchange of _ metadata _ only .however , it may be that a future oai protocol will provide facilities for the exchange of full - content .oaimh is not about direct interoperability between archives .it is based on a model which puts a very clean divide between data - providers ( entities which expose metadata ) and service - providers ( entities which harvest metadata , presumably with the intention of providing some service ) .while the model has a clear divide between data - providers and service - providers , there is nothing to say that one entity can not be both ; cite base is one example .the model has an obvious scalability problem if every service - provider is expected to harvest data from every data - provider .it may be that is is not an issue if service - providers are specific to a particular community and thus harvest only from a subset of data - providers .we may also see the creation of aggregators which harvest from a number of data - providers and the re - export this data .oaimh is not limited to dublin core ( dc ) metadata . however , since oai aims to promote interoperability , dc metadata has been adopted as a lowest common - denominator metadata format which all data - providers should support .it is not intended that the requirement to export dc metadata should preclude the use of other metadata sets that may be more appropriate within particular communities .the oai encourages the development of community specific standards that provide the functionalities required by specific communities .service - providers make requests to data - providers ; there is no support for data - provider driven interaction .all requests and replies occur using the http protocol . 
requests may be made using either the http get or post methods .all successful replies are encoded in xml , and all exception and flow - control replies are indicated by http status codes .oaimh protocol requests are made using one of six verbs : identify , getrecord , listidentifiers , listrecords , listsets , and listmetadataformats .some of these verbs accept or require additional parameters to completely specify the request . the verb and any parametersare specified as key = value pairs 3.1.1 either in the url ( if the get method is used ) , or in the body of the message ( if the post method is used ) .the oaimh protocol is based on a model of repositories that hold metadata about _ items _ 2 .the nature of the items is outside the scope of the protocol ; they might be electronic documents , artifacts in a museum , people , or almost anything else .the requirement for oai compliance is that the repository be able to disseminate metadata for these items in one or more formats including dublin core ( dc ) .metadata is disseminated via the getrecord and listrecords verbs .these requests result in zero or more _ records _ being returned .a record consists of 2 or 3 parts : a container , a container , and possibly an container 2.2 .the metadata for each item has a unique identifier which , when combined with the metadataprefix , acts as a key to extract a metadata record .note that although all metadata types for an item share the same identifier , the identifier is explicitly _ not _ an identifier for the item 2.3 .identifiers may be any valid uri but an optional oai identifier syntax a2 has been adopted widely .the oai identifier syntax divides the identifier into three parts separated by colons ( :) , e.g. oai : arxiv : hep - lat/0008015 where ` oai ' is the scheme , ` arxiv ' identifies the repository , and ` hep - lat/0008015 ' is the identifier within the particular repository .the metadata for an item ( considered as a whole , not as individual formats ) has a datestamp which is the date of last modification .the purpose of the datestamp is to support date - selective harvesting and incremental harvesting in particular .datestamps are returned in the records returned by a data - provider and may be used as optional arguments to the listidentifiers and listrecords requests .the datestamps have the granularity of a day , they are in yyyy - mm - dd format and no time is specified .this simple date format actually creates some additional complexity because the service - provider and data - provider may not be in the same time - zones .this is considered further in section [ sec : incharvest ] .typically , a service - provider would initially harvest all metadata records from a repository by issuing a listrecords request without from or until restrictions .subsequently , the service - provider would issue listrecords requests with a from parameter equal to the date of the last harvest .sets are provided as an _ optional _ construct for grouping items to support selective harvesting 2.5 .it is not intended that they should provide a mechanism by which a search is implemented , and there is no controlled vocabulary for set names so automated interpretation of set structure is not supported .it should be noted that sets are optional both from the point of view of the data - provider which may or may not implement sets ; and the service - provider which may ignore any set structure that is exposed .it is not clear whether sets will be widely used and i shall not consider them further in this 
tutorial .the oaimh protocol supports _ multiple parallel metadata formats_. dublin core ( dc ) is mandated for lowest common denominator interoperability .the use of other formats within particular communities or for special purposes is encouraged . within a particular repository ,metadata formats are identified by a metadataprefix .each metadataprefix is associated with the url of the schema which may be used to validate the metadata records ; the url has cross repository scope .the only globally meaningful metadataprefix is oai_dc ( for dc ) , which is associated with the schema at http://www.openarchives.org/oai/dc.xsd .the listmetadataformats request will return the metadataprefix , schema , and optionally a metadatanamespace , for either a particular record or for the whole repository ( if no identifier is specified ) . in the case of the whole repository ,all metadata formats supported by the repository are returned .it is not implied that all records are available in all formats .the oaimh protocol has very simple exception handling : syntax errors result in http status code 400 replies , and parameters that are invalid or have values that do not match records in the repository result in empty replies .for example , a listrecords request for a date range when there were no changes , or for a metadata format not supported , will result in a reply with header information but no elements .flow control is supported with the http retry - after status code 503 .this allows a server ( data - provider ) to tell the harvesting agent ( service - provider ) to try the request again after some interval .it is left entirely up to the server implementer to determine the conditions under which such a response will be given .the server could base the response on current machine load or limit the frequency requests will be serviced from any given ip address .the retry - after response may also be used to handle temporary outages without simple taking the server off - line . in an environment where one of a set of servers may handle a request ,the server may dynamically redirect a request using the http 302 response .to date this has been implemented only by the naca repository .to expose metadata within the oai , one must implement the data - provider site of the oaimh protocol .this provides a small set of functions which can be used to extract information about and metadata from the underlying repository .the perl files oai1.pl , oaiserver.pm and database.pm implement a bare - bones data - provider interface .the file oai1.pl handles http requests and must be associated with a url in the web server configuration file ; for the apache web server , the configuration line is scriptalias /oai1 /some/ directory / oai1.pl if the code is in /some / directory .it is also possible to run oai1.pl from the command line , the request is specified with the -r flag , e.g. ./oai1.pl-r verb = identify. the algorithm for oai1.pl is simply : .... read get , post or command line request check syntax of request if syntax correct return xml reply to request else return http 400 error code and message .... an example of an invalid request is : .... simeon>./oai1.pl -r ' bad - request ' status : 400 malformed request content - type : text / plain no verb specified ! 
....exports two subroutines , one ( oaicheckrequest ) to check the request against a grammar stored in a data structure , and another ( oaisatisfyrequest ) which calls the appropriate routine to implement the required oai verb .i will consider each verb in turn .database.pm is a dummy database interface with a ` database ' of three records : record1 , record2 and record3 .metadata for record1 and record2 is available in dc format ; metadata for record1 is also available in another format with the metadataprefix ` wibble ' ; and record3 is a ` deleted ' record so no metadata is available .this verb takes no arguments and returns information about a repository 4.2 .the example code implements identify by simply writing out information from configuration variables .the protocol allows for additional blocks which may contain community - specific information .examples include - identifier which specifies a particular identifier syntax , and which includes additional information appropriate for the e - print community a2 .this verb takes no arguments and returns the set structure of the repository 4.6 .the example code does not implement any sets so the response is an empty list .this verb may be used either with a identifier argument or without any arguments 4.4 .if an identifier is specified then the verb returns the metadata formats available for that record . in many casesa repository may be able to disseminate metadata for all records in the same format or formats . in this casethe response will be the same if there is no identifier argument or if the identifier argument specifies any record that exists .the example code implements the general case by calling a routine in the database.pm module to ask what formats are available , and then formats the reply appropriately . for each metadata format, the reply must include a ( used to specify that format in other requests ) , and a url .a element may optionally be returned but is not implemented in the example code .an example request and response is : .... simeon>./oai1.pl -r ' verb = listmetadataformats&identifier = record1 ' content - type : text / xml < ?xml version="1.0 " encoding="utf-8 " ?> < listmetadataformats xmlns="http://www.openarchives.org / oai / oai_listmetadataformats " xsi : schemalocation="http://www.openarchives.org / oai/1.0/oai_listmetadataformats http://www.openarchives.org/oai/1.0/oai_listmetadataformats.xsd " > < responsedate>2001 - 05 - 05t12:27:36 - 06:00</responsedate > < requesturl > http://localhost / oai1?verb = listmetadataformats& ; identifier = record1&verb = listmetadataformats</requesturl > < metadataformat > < metadataprefix > wibble</metadataprefix >< schema > http://wibble.org / wibble.xsd</schema > < /metadataformat> < metadataformat >< metadataprefix > oai_dc</metadataprefix > <schema > http://www.openarchives.org / oai / dc.xsd</schema > < /metadataformat< /listmetadataformats > .... 
the response indicates that the record record1 may be disseminated in either oai_dc or wibble formats .this verb requests metadata for a particular record in a particular format 4.1 .the example code implements this as a call to a subroutine disseminate ( shared with listrecords ) after checking that the record exists .the record returned consists of two parts if the record is not deleted ; a block which contains the identifier and the datestamp ( the information required for harvesting ) and a block which contains the xml metadata record in the requested format .the block will be missing if the record is deleted or if the requested metadata format is not available .for example , a request for oai_dc for record2 would be : .... simeon>./oai1.pl -r ' verb = getrecord&identifier = record2&metadataprefix = oai_dc ' content - type : text / xml < ?xml version="1.0 " encoding="utf-8 " ?> < getrecord xmlns="http://www.openarchives.org / oai / oai_getrecord " xsi : schemalocation="http://www.openarchives.org / oai/1.0/oai_getrecord http://www.openarchives.org/oai/1.0/oai_getrecord.xsd " > < responsedate>2001 - 05 - 05t12:50:23 - 06:00</responsedate > < requesturl > http://localhost / oai1?verb = getrecord&identifier = record2& ; metadataprefix = oai_dc&verb = getrecord</requesturl > <record > < header > <identifier > record2</identifier > < datestamp>1999 - 02 - 12</datestamp >< /header > < metadata >< oai_dc xsi : schemalocation="http://purl.org / dc / elements/1.1/ http://www.openarchives.org/oai/dc.xsd " xmlns : xsi="http://www.w3.org/2000/10/xmlschema - instance " xmlns="http://purl.org / dc / elements/1.1/ " > < title > item 2</title >< creator > a n other</creator > < /oai_dc> < /metadata> < /record > < /getrecord > .... but a request for the unavailable format wibble would be : .... simeon>./oai1.pl -r ' verb = getrecord&identifier = record2&metadataprefix = wibble ' content - type : text / xml < ?xml version="1.0 " encoding="utf-8 " ?> < getrecord xmlns="http://www.openarchives.org / oai / oai_getrecord " xsi : schemalocation="http://www.openarchives.org / oai/1.0/oai_getrecord http://www.openarchives.org/oai/1.0/oai_getrecord.xsd " > < responsedate>2001 - 05 - 05t12:52:13 - 06:00</responsedate > < requesturl > http://localhost / oai1?verb = getrecord& ; identifier = record2&metadataprefix = wibble&verb = getrecord</requesturl > < record > < header > < identifier > record2</identifier > <datestamp>1999 - 02 - 12</datestamp > < /header >< /record > < /getrecord > .... which includes a block but no block .the protocol also permits the addition of an container 2.2 for each record this is provided as a hook for additional information such as rights or terms information .it is not currently used by any of the registered oai data - providers and is not implemented in the example code .listidentifiers 4.3 and listrecords 4.5 both implement a search by date , the difference is whether they return a list of identifiers or complete metadata records in the specified format .the example code implements both of these verbs using the subroutine listeither which calls a search by date ( getidsbydate ) in database.pm . in the case of listidentifiersthe output consists of elements which may include the attribute status=``deleted '' if the record is deleted .an example request without date restriction is : .... 
simeon>./oai1.pl -r ' verb = listidentifiers ' content - type : text / xml < ?xml version="1.0 " encoding="utf-8 " ?> < listidentifiers xmlns="http://www.openarchives.org / oai / oai_listidentifiers " xsi : schemalocation="http://www.openarchives.org / oai/1.0/oai_listidentifiers http://www.openarchives.org/oai/1.0/oai_listidentifiers.xsd " > < responsedate>2001 - 05 - 05t12:59:30 - 06:00</responsedate > < requesturl > http://localhost / oai1?verb = listidentifiers&verb = listidentifiers</requesturl > < identifier > record1</identifier >< identifier > record2</identifier > < identifier status="deleted" > record3</identifier > < /listidentifiers > .... the response lists the identifiers of the three records in the repository and indicates that record3 is deleted .if the parameter until=2000 - 01 - 01 were added then only the first two identifiers would be returned since the datestamp of record3 is 2000 - 03 - 13 . in the case of listrecordsthe output consists of blocks similar to those obtained from getrecord requests .listrecords requests must include a metadataprefix parameter . the oaimh protocol allows for partial responses 3.4 for all of the list verbs ( listidentifiers , listsets , listmetadataformats and listrecords ) .this feature has been implemented by most of the larger registered oai repositories for the listidentifiers and listrecords verbs .the example code does not implement this feature .to harvest metadata within the oai , one must implement the service - provider site of the oaimh protocol .i will consider the implementation of a harvester that performs two functions : firstly , harvest all metadata in a particular format , and secondly , harvest all metadata in a particular format that has changed since a given date .these functions are the basis of a system that can create and maintain an up to date copy of the metadata from an oai compliant repository . as one of the maintainers of a heavily used archive i am painfully aware of the importance of avoiding inadvertent denial - of - service attacks created by badly written harvesting software .automated agents should always include a useful user - agent string and a valid e - mail contact address in their http requests .the flow - control elements of the protocol must be respected and careful testing is essential .i will assume that the goal is to create software which will run on some schedule so that the local copy of metadata from some set of repositories is kept updated without manual intervention .however , it would be reckless to assume that the details of repositories will not change over time . in order to avoid the need for manual polling to detect such changes, we should ask how they can be detected automatically . to detect changes other than the addition and deletion of records which are part of normal repository operation, one can compare the response to oai requests that describe the repository between successive harvests .these requests are identify and probably listsets and listmetadataformats ( for the whole repository as opposed to any single record ) .for all of the requests we expect the to change with each request but for these requests we expect the rest of the response to be unchanged .note that to do the test correctly one should compare the xml data in such a way that valid transformations , say re - ordering elements , are ignored . however , in practice it is likely to be sufficient ( if over sensitive ) to do a string comparison of the responses so long as changes in the are ignored . 
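as a rough python analogue of the change-detection strategy just described (the tutorial's own code is perl, and the base url below is a placeholder), one can fetch the identify response, blank out the responsedate element, and compare the remainder with the copy stored from the previous harvest:
....
# Illustrative Python analogue of the change-detection idea described above.
# Fetch the Identify response, blank out the responseDate element, and compare
# the rest with the copy saved from the previous harvest.
import re
import urllib.request

BASE_URL = "http://localhost/oai1"          # hypothetical data provider

def fetch_identify(base_url):
    with urllib.request.urlopen(base_url + "?verb=Identify") as resp:
        return resp.read().decode("utf-8")

def strip_response_date(xml_text):
    # crude string-level comparison, as suggested in the text; a proper XML
    # comparison would also ignore element reordering and whitespace
    return re.sub(r"<responseDate>.*?</responseDate>", "", xml_text, flags=re.S)

current = fetch_identify(BASE_URL)
with open("harvest1/identify", encoding="utf-8") as f:
    previous = f.read()

if strip_response_date(current) == strip_response_date(previous):
    print("Identify response unchanged (except date)")
else:
    print("warning: repository description has changed since the last harvest")
....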
in the example harvester i have included the facility to specify a file containing the identify response from the previous harvest . this is used both to extract the date of the last harvest and to check for changes in that response . i have not implemented a test for changes in the listsets and listmetadataformats responses . the oaimh protocol was designed to facilitate incremental harvesting . the idea is that a service - provider will maintain an up to date copy of the metadata from a repository by periodically harvesting ` changed ' records . this is why all records have a datestamp , the date of last modification , associated with them . the 1 day granularity of the datestamp and the possibility of data - providers and service - providers being in different time zones means that there must be some overlap between the date ranges of successive requests . if the service - provider and data - provider share the same time - zone then a 1 day overlap is sufficient to ensure that updates are not missed ; records might be updated after the harvest on the day of the last harvest , but provided records that have changed on that day are reharvested then no changes will be missed . to cope with different time zones it is necessary to extend this to a 2 day overlap if the harvester works with dates local to itself . an alternative strategy , which i prefer , is to use only the dates returned by the repository and thus , by working in the local time zone of the repository , reduce the required overlap to 1 day . in the example harvester i implement this last strategy by taking the date of the last harvest from the of the stored identify response ( the must be specified in the local time zone of the repository 3.1.2.1 ) . this date may then be used as the from date ( inclusive ) for the next listrecords or listidentifiers request . the module oaiget.pm examines the http reply for status codes 302 ( redirect ) and 503 ( retry - after ) . both replies are handled automatically ; a default retry period is assumed if the 503 response does not specify a time ( though this is an error on the part of the data - provider ) . messages are printed if the verbose option is selected . oaimh protocol replies are designed to be self contained , in part to allow off - line processing thereby separating the harvesting and database - update processes . however , in order to deal with partial responses the harvesting software must be able to parse the responses to all the list requests ( listidentifiers , listsets , listmetadataformats and listrecords ) sufficiently to extract any resumptiontoken 3.4 . to date , none of the registered oai compliant repositories give partial responses for listsets and listmetadataformats requests , but several do for listidentifiers and listrecords requests . perhaps the neatest way to implement a harvester would be to have it recombine partial responses into a complete reply . the example code does not do this but does parse all list requests to look for a so that further requests can be used to complete the original request . the files oaiharvest.pl , oaiget.pm and oaiparser.pm implement a simple harvester that illustrates the points mentioned above . oaiharvest.pl is the executable and accepts a variety of flags ; these can be displayed by executing oaiharvest.pl -h . the algorithm is : ....
read command line arguments check options and parameters issue identify request compare response with previous identify response if given extract ` from ' date from command line , previous identify response or do complete harvest loop : issue listrecords or listidentifiers request check for resumptiontoken , loop if present .... the subroutine oaiget in oaiget.pm is used to issue the oaimh requests and this handles any retry - after or redirect replies .xml parsing is handled by the oaiparser.pm module which extends xml - parser , which itself is based on the expat parser .let us take as an example , harvesting metadata from the example data - provider code which has be set up at the url http://localhost / oai1 .first we would issue a harvest command without any time restriction ( to harvest all records ) . in the examples ,i harvest just the identifiers using listidentifiers requests , the flags -r and -m metadataprefix can be used to instruct oaiharvest.pl to issue listrecords requests and to specify a metadataprefix other than oai_dc . ....simeon > mkdir harvest1 simeon>./oaiharvest.pl -d harvest1 http://localhost / oai1 oaiharvest.pl : harvest from http://localhost / oai1 using post oaiget : doing post to http://localhost / oai1 args : verb = identify oaiget : got 200 ok ( 479bytes ) oaiharvest.pl : doing complete harvest .oaiget : doing post to http://localhost / oai1 args : verb = listidentifiers oaiget : got 200 ok ( 537bytes ) oaiharvest.pl : got 3 identifiers ( running total : 3 ) oaiharvest.pl : no resumptiontoken , request complete . oaiharvest.pl : done .simeon > ls harvest1 identify listidentifiers.1 .... if we then do an incremental harvest specifying the file name of the last identify response , harvest1/identify , the harvester checks against this response for changes ( none except date ) and extracts the date of the last harvest ( 2001 - 06 - 05 ) to be used as the from date for the new harvest . ....simeon > mkdir harvest2 simeon>./oaiharvest.pl -d harvest2 -i harvest1/identify http://localhost / oai1 oaiharvest.pl : harvest from http://localhost / oai1 using post oaiget : doing post to http://localhost / oai1 args : verb = identify oaiget : got 200 ok ( 479bytes ) oaiharvest.pl : identify response unchanged from reference ( except date ) oaiharvest.pl : reading harvest1/identify to get from date oaiharvest.pl : incremental harvest from 2001 - 06 - 05 ( from harvest1/identify ) oaiget : doing post to http://localhost / oai1 args : from=2001 - 06 - 05&verb = listidentifiers oaiget : got 200 ok ( 444bytes ) oaiharvest.pl : got 0 identifiers ( running total : 0 ) oaiharvest.pl : no resumptiontoken , request complete .oaiharvest.pl : done ..... since there have been no changes in the database this harvest results in no identifiers being returned . 
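for comparison with the perl harvester, a stripped-down python sketch of the same harvesting loop is given below (illustrative only: the base url is a placeholder, the resumptiontoken is extracted with a deliberately crude regular expression, and only the 503 retry-after case is handled explicitly; 302 redirects are followed automatically by urllib):
....
# A rough Python analogue of the loop implemented by oaiharvest.pl: issue a
# ListIdentifiers request, honour 503 Retry-After replies, and follow
# resumptionTokens until the request is complete.
import re
import time
import urllib.error
import urllib.parse
import urllib.request

BASE_URL = "http://localhost/oai1"                     # hypothetical data provider

def oai_get(params, retry_default=60):
    url = BASE_URL + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(
        url, headers={"User-Agent": "demo-harvester/0.1 (mailto:you@example.org)"})
    while True:
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode("utf-8")
        except urllib.error.HTTPError as err:
            if err.code == 503:                        # flow control: retry later
                wait = int(err.headers.get("Retry-After", retry_default))
                time.sleep(wait)
            else:
                raise

params = {"verb": "ListIdentifiers", "from": "2001-06-05"}
pages = []
while True:
    xml_text = oai_get(params)
    pages.append(xml_text)
    match = re.search(r"<resumptionToken>(.*?)</resumptionToken>", xml_text, re.S)
    if not match or not match.group(1).strip():
        break                                          # no token: request complete
    params = {"verb": "ListIdentifiers",
              "resumptionToken": match.group(1).strip()}

print("harvested", len(pages), "partial response(s)")
....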
to extend this example, i then edited the database ( database.pm ) to add a new record ( record4 ) with datestamp 2001 - 06 - 05 which simulates the addition of a record after the last harvest but on the same day .i then ran another harvest command .simeon > diff database.pm~ database.pm 24c24,26 < ' record3 ' = > [ ' 2000 - 03 - 13 ' , undef ] # deleted --- > ' record3 ' = > [ ' 2000 - 03 - 13 ' , undef ] , # deleted > ' record4 ' = > [ ' 2001 - 06 - 05 ' , { > ' oai_dc ' = > [ ' title','item 4 ' , ' creator','someone else ' ] } ] simeon > mkdir harvest3 simeon>./oaiharvest.pl -d harvest3 -iharvest2/identify http://localhost / oai1 oaiharvest.pl : harvest from http://localhost / oai1 using post oaiget : doing post to http://localhost / oai1 args : verb = identify oaiget : got 200 ok ( 479bytes ) oaiharvest.pl : identify response unchanged from reference ( except date ) oaiharvest.pl : reading harvest2/identify to get from date oaiharvest.pl : incremental harvest from 2001 - 06 - 05 ( from harvest2/identify ) oaiget : doing post to http://localhost / oai1 args : from=2001 - 06 - 05&verb = listidentifiers oaiget : got 200 ok ( 478bytes ) oaiharvest.pl : got 1 identifiers ( running total : 1 ) oaiharvest.pl : no resumptiontoken , request complete .oaiharvest.pl : done ..... this harvest results in one additional identifier , record4 , being returned as expected .below are two excerpts from harvests from real repositories which illustrate the flow - control features of the protocol .the first is from arxiv which uses 503 retry - after replies to enforce a delay between requests .the second if from naca which uses 302 redirect replies to demonstrate a load - sharing scheme . .... ... oaiget : doing post to http://arxiv.org/oai1 args : verb = listidentifiers oaiget : got 503 , sleeping for 60 seconds ... oaiget : woken again , retrying ... oaiget : got 200 ok ( 27398bytes ) oaiharvest.pl : got 502 identifiers ( running total : 502 ) oaiharvest.pl : got resumptiontoken : ` 1997 - 02 - 10 _ _ _ ' oaiget : doing post to http://arxiv.org/oai1 args : resumptiontoken=1997 - 02 - 10___&verb = listidentifiers oaiget : got 503 , sleeping for 60 seconds ... oaiget : woken again , retrying ... oaiget : got 200 ok ( 28330bytes ) oaiharvest.pl : got 520 identifiers ( running total : 1022 ) oaiharvest.pl : got resumptiontoken : ` 1997 - 03 - 06 _ _ _ ' ... ... oaiget : doing post to http://naca.larc.nasa.gov/oai/ args : verb = listidentifiers oaiget : got 302 , redirecting to http://buckets.dsi.internet2.edu/naca/oai/ ? ... oaiget : doing post to http://buckets.dsi.internet2.edu/naca/oai/ args : verb = listidentifiers oaiget : got 200 ok ( 336705bytes ) oaiharvest.pl : got 6352 identifiers ( running total : 6352 ) ... .... 
i hope the examples above provide a useful demonstration of some of the features of the oaimh metadata harvesting .be sure to exercise caution and restraint when running tests against registered repositories .there is some cost in associated with answering oaimh requests , and recklessly downloading large amounts of data for no good reason is not helpful .the oaimh protocol has been public for 5 months now and experience shows that it is adequate for its intended purpose .there are now 30 registered repositories which together expose over 600,000 metadata records .while there are currently just two registered service providers , ` arc ' and the repository explorer , there is an increasing number of tools and libraries available to assist in the development of harvesting applications .publicly available tools and libraries are listed on the oai web site .this includes tim brody s perl library which is considerably more extensive than the examples presented here .the uptake of oai is very encouraging and it is feedback from the current implementers which will shape the next version of the oaimh protocol .anyone implementing , or interested in implementing , either side of the oaimh protocol should subscribe to the oai - implementers mailing list .it is a helpful and friendly forum .the example programs are : * oai1.pl and oaiserver.pm for the server ; and * oaiharvest.pl , oaiget.pm and oaiparser.pm for the harvester .these files are included with this paper , please download the source . in order to run the example programs, you will require perl 5.004 or later and the following modules ( the precise version i used is given in parenthesis ) .for the the server : * xml - writer ( xml - writer-0.4 ) and for the harvester : * mime - base64 ( mime - base64 - 2.11 ) * uri ( uri-1.09 ) * html - tagset ( html - tagset-3.02 ) * html - parser ( html - parser-3.11 ) * libnet ( libnet-1.0703 ) * digest::md5 ( digest - md5 - 2.11 ) * lwp ( libwww - perl-5.48 ) * expat library ( expat-1.95.1 ) * xml - parser ( xml - parser-2.30 ) all of the above except for expat are available from cpan ( http://www.cpan.org/ ) and can be installed with the standard perl makefile.pl ; make ; make test ; make install sequence .there should not be any dependency problems if the modules are installed in the order listed .the expat xml parsing library upon which xml - parser relies , is available from source forge ( http://sourceforge.net/projects/expat/ ) .before running oaiharvest.pl you should first edit the line that defines the variable $ contact and insert your e - mail address .this will then be specified as the contact address for all http requests and will enable the server maintainer to contact you if there are problems .the example code has been tested only on a linux system and with the apache server . while i hope that it will work on other systems this has not been verified .simeon warner is one of the maintainers and developers of the arxiv e - print archive .he has been actively involved with the development and implementation of the oai since its inception .cite base at the university of southampton , a prototype open archives federating service which extracts and re - exports citation information in addition to providing a search facility , http://cite-base.ecs.soton.ac.uk/ the oai repository explorer , an interface to interactively test archives for compliance with the oaimh protocol , hussein suleman ( digital libraries research laboratory , virginia tech . 
) , http://rocky.dlib.vt.edu/~oai/cgi-bin/explorer/oai1.0/testoai perl class library that allow the rapid deployment of an oai compatible interface to an existing web server / database for oai server and harvester implementation , http://www.ecs.soton.ac.uk/~tdb198/oai/frontend.html
|
in this article i outline the ideas behind the open archives initiative metadata harvesting protocol ( oaimh ) , and attempt to clarify some common misconceptions . i then consider how the oaimh protocol can be used to expose and harvest metadata . perl code examples are given as practical illustration . _ high energy physics libraries webzine _ , issue 4 , june 2001 + http://library.cern.ch/heplw/4/papers/3/
|
accurate knowledge of the distance from the sun to the center of the galaxy , r , is important in many fields of astronomy and space science . in particular , the primary motivation for this study was a wish to improve the accuracy of modeling the galactic aberration ( malkin 2011 ) . over the past decades ,many tens of r determinations have been made , making use of different principles and observing methods , and thus characterized by different random and systematic errors .therefore , deriving a best r estimate from these data is not only an astronomical but also a metrological task , similar to deriving the best estimates of the fundamental constants in physics . for the latter, several statistical methods have been developed to obtain best estimates as well as their realistic uncertainties from heterogeneous measurements . in this study , we applied those methods to the r data .another goal of this study was to investigate a possible trend in the multi - year series of r estimates , as discussed by many authors .however , estimates of any such trend differ significantly among papers . therefore , we tried to clarify this issue using the latest results .more details are provided by malkin ( 2012 ) .for the present study , we used all r measurements published in the period 19922011 , with the exception of a number of results that were revised in subsequent papers .we did not use the results of glushkova et al .( 1998 ; revised by glushkova et al . 1999 ) , paczynski & stanek ( 1998 ; revised by stanek et al .2000 ) , eisenhauer et al .( 2003 ; revised by eisenhauer et al . 2005 , which was in turn revised by gillessen et al . 2009 ) . in total , 53 estimates ( listed in table [ tab : allr0 ] ) were used . where both random ( statistical ) and systematic uncertainties were given , they were summed in quadrature . if two different values were given for the lower and upper boundaries of the confidence interval , the mean value of these boundaries was used as the uncertainty in the result ( the lower and upper boundaries were close to each other in all cases , so that this approximation procedure does not significantly affect the final result ) . 
where authors gave several estimates of r without a final preference , the unweighted average of these estimates was computed . [ table [ tab : allr0 ] : r estimates used in this study ; table content not reproduced here . ]
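as a baseline for the statistical techniques referred to above (this is not the paper's full analysis, and the three measurements below are invented placeholders rather than values from the table), a simple inverse-variance weighted mean with quadrature-combined random and systematic uncertainties can be computed as follows:
....
# Baseline estimate of the kind discussed above: combine each measurement's
# random and systematic uncertainties in quadrature and form the inverse-variance
# weighted mean. The three measurements are invented placeholders.
import numpy as np

r0 = np.array([8.0, 8.4, 7.9])          # kpc, hypothetical estimates
sig_random = np.array([0.3, 0.4, 0.2])  # kpc
sig_system = np.array([0.2, 0.3, 0.3])  # kpc

sigma = np.hypot(sig_random, sig_system)     # quadrature sum
w = 1.0 / sigma**2

mean = np.sum(w * r0) / np.sum(w)
err_formal = 1.0 / np.sqrt(np.sum(w))        # formal error of the weighted mean

# a simple scatter-based (external) error, often compared with the formal one
chi2 = np.sum(w * (r0 - mean) ** 2)
err_external = err_formal * np.sqrt(chi2 / (len(r0) - 1))

print(f"weighted mean R0 = {mean:.2f} kpc")
print(f"formal error     = {err_formal:.2f} kpc")
print(f"external error   = {err_external:.2f} kpc")
....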
|
based on several tens of r measurements made during the past decades , several studies have been performed to derive the best estimate of r . some used just simple averaging to derive a result , whereas others provided comprehensive analyses of possible errors in published results . in either case , detailed statistical analyses of data used were not performed . however , a computation of the best estimates of the galactic rotation constants is not only an astronomical but also a metrological task . here we perform an analysis of 53 r measurements ( published in the past 20 years ) to assess the consistency of the data . our analysis shows that they are internally consistent . it is also shown that any trend in the r estimates from the last 20 years is statistically negligible , which renders the presence of a bandwagon effect doubtful . on the other hand , the formal errors in the published r estimates improve significantly with time .
|
neural networks have been popular in the machine learning community since the 1980s , with repeated rises and falls in popularity . their main benefit is their ability to learn complex , non - linear hypotheses from data without the need of modeling complex features . this makes them of particular interest for computer vision , in which feature description is a long - standing and still poorly understood topic . neural networks are difficult to train , and for the last ten years they have come to enormous fame under the topic `` deep learning '' . new advances in training methods and the movement of training from cpus to gpus allow training more reliable models much faster . deep neural networks are not a silver bullet , as training is still heavily based on model selection and experimentation . overall , significant progress in machine learning and pattern recognition has been made in natural language processing , computer vision and audio processing . leading it companies , such as baidu , google , facebook and microsoft , have made significant investments in deep learning for these reasons . concretely , previous work of the author on deep learning for facial expression recognition in resulted in a deep neural network model that significantly outperformed the best contribution to the 2013 kaggle facial expression competition . therefore , a further investigation of the recognition of action units , and in particular smiles , using deep neural networks and convolutional neural networks seems desirable . only very few works on this topic have been reported so far , such as in . it would also be interesting to compare the input of the entire face versus the mouth to study differences in the performance of deep convolutional models . this chapter provides an overview of different types of neural networks , their capabilities and training challenges , based on . this chapter does not provide an introduction to neural networks ; the reader is therefore referred to and for a comprehensive introduction to neural networks . neural networks are inspired by the brain and composed of multiple layers of logistic regression units , called neurons . they experienced different periods of hype in the 1960s and 1980s/90s . neural networks are known to be able to learn complex hypotheses for regression and classification . conversely , training neural networks is difficult , as their cost functions have many local minima . hence , training tends to converge to a local minimum , resulting in poor generalization of the network . for the last ten years , neural networks have been celebrating a comeback under the term deep learning , taking advantage of many hidden layers in order to build more powerful machine learning algorithms . feed - forward neural networks are the simplest type of neural networks . they are composed of an input layer , one or more hidden layers and an output layer , as visualized in figure [ fig : nn ] . using learned weights or , they propagate an input through the network to the output to make predictions . the activation of unit of layer can be calculated as follows : is an activation function , for which the sigmoid activation function is often used in the hidden layers . the sigmoid function , or its generalization , the softmax function , is used for classification problems in the output layer units . for regression problems , the sum in equation [ eq : activation ] is used directly in the output layer without any activation function .
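to make the forward propagation described above concrete, the following numpy sketch (random weights and arbitrary layer sizes, not the network used later in this work) pushes a small batch of flattened images through one sigmoid hidden layer and a softmax output layer:
....
# Minimal sketch of forward propagation for a network with one sigmoid hidden
# layer and a softmax output layer (random weights, arbitrary sizes).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 48 * 48, 100, 7            # e.g. a 48x48 face image, 7 classes

w1 = rng.normal(0, 0.01, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.01, (n_hidden, n_out)); b2 = np.zeros(n_out)

x = rng.random((5, n_in))                          # a batch of 5 flattened images
hidden = sigmoid(x @ w1 + b1)                      # hidden-layer activations
probs = softmax(hidden @ w2 + b2)                  # class probabilities

print(probs.shape, probs.sum(axis=1))              # (5, 7), each row sums to 1
....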
in order to learn the weights , a cost function is minimized . there are different cost functions , such as the least squares or cross - entropy cost function , described in . the latter one has been reported to generalize better and speed up learning as discussed in . in order to learn the weights , algorithm [ alg : backpropagation ] , named backpropagation , is used to efficiently compute the partial derivatives , which are then fed into an optimization algorithm , such as gradient descent ( algorithm [ alg : gradientdescent ] ) or stochastic gradient descent ( algorithm [ alg : stochasticgradientdescent ] ) , as described in . those three algorithms are based on . [ algorithms [ alg : backpropagation ] , [ alg : gradientdescent ] and [ alg : stochasticgradientdescent ] : pseudocode for backpropagation ( perform forward propagation , then compute the errors and partial derivatives ) , gradient descent ( update all weights simultaneously ) and stochastic gradient descent ( randomly shuffle the data set and update per example ) ; not reproduced here . ] generally , the more units in a neural network , the higher its expressive power . in contrast , the more units , the more it tends to overfit . to prevent overfitting , various approaches have been described in the literature , including / regularization , early stopping , tangent propagation and dropout . deep neural networks use many hidden layers . this allows increasingly complex feature hierarchies to be learned , as visualized in figure [ fig : nn_example ] for the google brain . such architectures are of enormous benefit , as the long - standing problem of feature description in signal processing disappears to a large extent . conversely , training of deep neural networks gets more difficult because of the increased number of parameters . as described in and , backpropagation does not scale to deep neural networks : starting with small random initial weights , the backpropagated partial derivatives go towards zero . as a result , training becomes infeasible ; this is called the vanishing gradient problem . for deep neural networks , training has therefore been split into two parts : pre - training and fine - tuning . pre - training initializes the weights to a location in the cost function from which they can be optimized quickly using regular backpropagation . various pre - training methods have been described in the literature . most prominently , unsupervised methods such as restricted boltzmann machines ( rbm ) in and or autoencoders in and are used . both methods learn exactly one hidden layer . this hidden layer is then used as input to the next rbm or autoencoder to learn the next hidden layer . this process can be repeated many times in order to pre - train a so - called deep belief network ( dbn ) or stacked autoencoder , composed of rbms or autoencoders respectively . in addition , there are denoising autoencoders defined in , which are autoencoders that are trained to denoise corrupted inputs . furthermore , other methods such as discriminative pre - training or reduction of internal covariate shift have been reported as effective training methods for deep neural networks . in the past , mostly sigmoid units have been used in the hidden layers , with sigmoid or linear units in the output layer for classification or regression , respectively .
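continuing the sketch above, one gradient-descent run with the standard backpropagation formulas for a sigmoid hidden layer and a softmax/cross-entropy output looks as follows (this mirrors, but does not reproduce, the algorithms referenced in the text; all sizes and data are placeholders):
....
# Sketch of batch gradient descent with backpropagation for a one-hidden-layer
# network (sigmoid hidden layer, softmax output, cross-entropy cost).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n_in, n_hidden, n_out, batch = 64, 32, 7, 16
w1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

x = rng.random((batch, n_in))
y = np.eye(n_out)[rng.integers(0, n_out, batch)]   # one-hot labels

alpha = 0.1                                        # learning rate
for step in range(100):
    h = sigmoid(x @ w1 + b1)                       # forward pass
    p = softmax(h @ w2 + b2)
    loss = -np.mean(np.sum(y * np.log(p + 1e-12), axis=1))

    d_out = (p - y) / batch                        # backpropagated output errors
    d_hid = (d_out @ w2.T) * h * (1.0 - h)         # errors at the hidden layer

    w2 -= alpha * h.T @ d_out; b2 -= alpha * d_out.sum(axis=0)
    w1 -= alpha * x.T @ d_hid; b1 -= alpha * d_hid.sum(axis=0)

print("final cross-entropy loss:", round(loss, 4))
....
stochastic gradient descent would apply the same update per shuffled example or mini-batch instead of per full batch.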
for classification , the softmax activation is preferred in the output layer . as described by norvig in , the output of a set unit is much stronger than the others . another benefit of softmax is that it is always differentiable for a weight . recently , the so - called rectified linear unit ( relu ) has been proposed in , which has been used successfully in many deep learning applications . figure [ fig : activation_functions ] visualizes the sigmoid and relu functions . relu has a number of advantages over sigmoid , reported in and . first , it is much easier to compute , as its output is either or the input value . also , for non - activated input values ( less than or equal to ) , the sigmoid still produces an activation value greater than . in contrast , relu models the biological behavior of neurons more accurately , as it outputs for those cases . with many units set to , a sparse activation of the network follows , which is another form of regularization . furthermore , the vanishing gradient problem becomes less of an issue , as relu units result in a simpler cost function . last , for some experiments , relu reduces the importance of pre - training , or pre - training may not be necessary at all . in the context of this project , deep neural networks have been successfully applied to facial expression recognition in . in that study , rbms , autoencoders and denoising autoencoders were compared on a noisy dataset from a 2013 kaggle challenge named `` emotion and identity detection from face images '' . this challenge was won by a neural network presented in , which achieved an error rate of 52.977% . in , a stacked autoencoder was trained with an error of 39.75% . in a subsequent project , this error could be reduced further to 28% with a stacked denoising autoencoder . this study also showed that deep neural networks are a promising machine learning method for this context , but not a silver bullet , as data pre - processing and intensive model selection are still required . recurrent neural networks ( rnns ) are cyclic graphs of neurons , as displayed in figure [ figure : recurrentneuralnetwork ] . [ figure [ figure : recurrentneuralnetwork ] : a small network with three inputs , three hidden units and one output , in which one hidden unit has a recurrent connection back to an input ; diagram not reproduced here . ] they have increased representational power as they create an internal state of the network which allows them to exhibit dynamic temporal behavior . training rnns is more complex , as this depends on their structure . in practice , recurrent networks are more difficult to train than feedforward networks and do not generalize as reliably . a long short - term memory ( lstm ) , defined in , is a modular recurrent neural network composed of lstm cells . an lstm cell is visualized in figure [ fig : lstm_cell ] . inputs are fed in , for which a value is computed using the sigmoid function of the dot product of the input and weights . the second sigmoid unit is the input gate . if its output value is near zero , the product is near zero , too , thus zeroing out the input value .
as a consequence ,this blocks the input value , preventing it from going further into the cell .the third sigmoid unit is the output gate .its function is to determine when to output the internal state of the cell .this is the case when the output of this sigmoid unit is close to one .lstm cells can be put together in a modular structure , as visualized in figure [ fig : lstm_example ] to build complex recurrent neural networks .training lstms takes advantage of backpropagation through time , a variant of backpropagation .its goal is to minimize the lstm s total cost on a training set .lstms have been reported to outperform regular rnns and hidden markov models in classification and time series prediction tasks .lstms have also been reported in to perform well on prediction of image sequences .invariance to transformations is a desired property of learning algorithms .typical variances of images and videos include translation , rotation and scaling .tangent propagation is one method in neural networks to handle transformations by penalizing the amount of distortion in the cost function .convolutional neural networks ( cnns ) are a different approach to implementing invariance in neural networks , which are inspired by biological processes .cnns were initially proposed by lecun in .they have been successfully applied to computer vision problems , such as hand - written digit recognition . in images ,nearby pixels are strongly correlated , a property of which local features take advantage of . in a hierarchical approach ,local features are used in the first stage of pattern recognition , allowing recognition of more complex features .the concept of cnns is illustrated in figure [ fig : cnn_example ] for a layer of convolutional units , followed by a sub - sampling layer , as described in .the convolutional layer is composed of so - called feature maps .units in a feature map take inputs from a small subregion of the input .all units in a feature map share the same weights , which is called weight sharing . replicating units in this way allows for features to be detected independently of their position in the visual field .the subsampling layer takes small regions of convolutional layer as input and computes the average ( or maximum or other functions ) of those inputs , multiplied by a weight and finally applies the sigmoid function to the value .the result of a unit in the subsampling layer is relatively insensitive to small shifts or rotations of the image in the corresponding regions of the input space .this concept can be repeated for more times to subsequently be more invariant and to detect more complex features . because of the constraints of weights , the number of independent parameters in the network is smaller than in a fully - connected network . this allows to train the network faster and to be less prone to overfitting .training of cnns requires minimization of a cost function .the idea of backpropagation can be applied to cnn with a small modification taking into account the weight sharing .recently , cnns have been reported to work well on processing of image sequences , for example in for multiple convolutions , as visualized in figure [ fig : multiple_convolutions ] .a related approach is reported in .cnns are expanded to work on image sequences instead of single images .the extra weights need to be initialized in a way so that training can easily optimize them .an extensive study and comparison of different initialization methods is provided in . 
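before turning to complete architectures, the convolution-plus-subsampling idea described above can be illustrated with a few lines of numpy; the kernel size, pooling factor and sigmoid squashing below are illustrative choices and not the configuration of any particular network from the literature cited here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_map(image, kernel, bias=0.0):
    """One feature map: a single shared kernel slid over the image (weight sharing)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return sigmoid(out)

def subsample(fmap, factor=2, weight=1.0, bias=0.0):
    """Average non-overlapping factor x factor regions, then apply the sigmoid."""
    H, W = fmap.shape
    H, W = H - H % factor, W - W % factor                     # drop any ragged border
    blocks = fmap[:H, :W].reshape(H // factor, factor, W // factor, factor)
    return sigmoid(weight * blocks.mean(axis=(1, 3)) + bias)

# usage: a random 28x28 "image" and a 5x5 shared kernel
rng = np.random.default_rng(0)
fmap = feature_map(rng.random((28, 28)), rng.normal(scale=0.1, size=(5, 5)))
print(fmap.shape, subsample(fmap).shape)   # (24, 24) (12, 12)
```

because the same 25 kernel weights are reused at every position, the number of free parameters is far smaller than in a fully connected layer over the same input, which is exactly the property exploited above.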
describes a deep architecture composed of convolutions , lstms and regular layers for a nlp problem .it begins with multiple convolutional layers .next , a linear layers follows with fewer units in order to reduce the dimensionality of the features recognized by the convolutional layers .next , the reduced features are fed into a lstm . the output of the lstmis then used in regular layers for classification .the entire architecture is visualized in figure [ fig : full_architecture ] .similar architectures exist for processing of image sequences and are elaborated further .very successful results using fusion of different video inputs have been reported , too .for example , a reported architecture in fuses a low - resolution version of the input with a higher - resolution input of the center of the video .this is visualized in figure [ fig : fusion_center ] .conversely , fuses a low - resolution version of the input with the optical flow , as visualized in figure [ fig : fusion_flow ] .the final stage of video classification can alternatively be done by a different classification , such as a support vector machine ( svm ) .this is described in and visualized in figure [ fig : final_svm ] .furthermore , a spatio - temporal convolutional sparse autoencoder for sequence classification is described in .in this chapter , various popular databases relevant to action unit recognition are presented .each database includes annotations per frame of the respective action units , among other features .furthermore , statistics of the distribution of action units were generated for each database in order to select databases rich of smiles .the facial action coding system ( facs ) is a system to taxonomize any facial expression of a human being by their appearance on the face .it was published by paul ekman and wallace v. friesen in 1978 .relevant to this thesis are so - called action units ( aus ) , which are the basic actions of individual facial muscles or groups of muscles .action units are either set or unset . if set , different levels of intensity are possible .popular databases in the field of action unit recognition and studies of facial expressions include the following , which are presented briefly in this section .the reader is referred to the relevant literature for details .the affectiva - mit facial expression dataset ( amfed ) contains 242 facial videos ( 168,359 frames ) , which were recorded in the wild ( real world conditions ) .the chinese academy of sciences micro - expression ( casme ) database was filmed at 60fps and contains 195 micro - expressions of 22 male and 13 female participants .the denver intensity of spontaneous facial action ( disfa ) database contains videos of 15 male and 12 female subjects of different ethnicities .action unit annotations are on different levels of intensity. the geneva multimodal emotion portrayals ( gemep ) contains audio and video recordings of 10 actors which portray 18 affective states . the mahnob laughter database contains 22 subjects recorded using a video camera , a thermal camera and two microphones . recorded were laughter , posed smiles , posed laughter and speech .it includes 180 sessions with a total duration of 3h and 49min .the unbc - mcmaster shoulder pain expression archive database contains 200 video sequences of participants that were suffering from shoulder pain and their corresponding spontaneous facial expressions . 
in total, it includes 48,398 facs-coded frames. for the databases presented in the previous section, statistics of the action unit annotations were generated. this task proved to be complex, as the structure of each database is different and each needs to be parsed accordingly. comprehensive plots and statistics of the individual action units were generated. for example, figure [ fig : casme_stat ] shows the binary distribution of au12, which represents a smile in facs coding, for the casme database. table [ table : some_stats ] contains a selection of action units from the different databases. due to different terminology, the amfed database does not use au12 but a feature called `` smile '', as explained in . [ table : some_stats ] selected statistics of action units in the databases: an integer denotes the number of frames in which an action unit is set (with its intensity); a hyphen indicates that the action unit is not annotated in that database.
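as an illustration of how such per-frame counts can be gathered, the sketch below assumes (purely hypothetically) that a database has already been exported to a csv file with one row per frame and one intensity column per action unit; the file name and column label are placeholders, since each of the databases above ships with its own annotation format and needs its own parser.

```python
import pandas as pd

def au_statistics(csv_path, au_column="AU12"):
    """Count frames in which a given action unit is set, broken down by intensity.

    Assumes a hypothetical per-frame CSV with one intensity column per AU
    (0 meaning not set); the real databases require database-specific parsing.
    """
    frames = pd.read_csv(csv_path)
    set_frames = frames[frames[au_column] > 0]
    counts = set_frames[au_column].value_counts().sort_index()
    print(f"{au_column}: set in {len(set_frames)} of {len(frames)} frames")
    for intensity, n in counts.items():
        print(f"  intensity {intensity}: {n} frames")
    return counts

# e.g. au_statistics("disfa_frames.csv", "AU12")   # file name is an assumption
```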
this thesis describes the design and implementation of a smile detector based on deep convolutional neural networks. it starts with a summary of neural networks, the difficulties of training them, and newer training methods such as restricted boltzmann machines and autoencoders. it then provides a literature review of convolutional neural networks and recurrent neural networks. in order to select databases for smile recognition, comprehensive statistics of databases popular in the field of facial expression recognition were generated and are summarized in this thesis. a model for smile detection is then proposed, the main part of which is implemented. the experimental results are discussed and justified on the basis of a comprehensive model selection. all experiments were run on a tesla k40c gpu, benefiting from a speedup of up to a factor of 10 over the corresponding cpu computations. a smile detection test accuracy of 99.45% is achieved for the denver intensity of spontaneous facial action (disfa) database, significantly outperforming existing approaches whose reported accuracies range from 65.55% to 79.67%. the experiment is re-run under several variations, such as retaining fewer neutral images or only the low or high intensities, and the results are compared extensively.
computational methods play an increasingly important role in the professional life of many working physicists , whether in experiment or theory , and very explicitly indeed for those doing simulational work , a ` category ' that might not even have been listed separately when some senior physics faculty were students themselves .that same reality is reflected in the curriculum requirements and course offerings at any number of undergraduate institutions , ranging from specific programming classes required in the major to entire computational physics programs - .at my institution , three of the five options in the physics major require at least one programming course ( from a list including c++ , visual basic , java , and even fortran ) offered by departments outside of physics , so the majority of our majors typically have a reasonable amount of programming experience no later than the end of their sophomore year , in time to start serious undergraduate research here ( or elsewhere , in reu programs ) during their second summer , often earlier . students in our major ,however , have historically expressed an interest in a course devoted to one of the popular integrated multi - purpose ( including symbolic manipulation ) programming languages such as , maple , or matlab , taught in the context of its application to physics problems , both in the undergraduate curriculum , and beyond , especially including applications to research level problems . in a more global context ,studies from physics education research have suggested that computer - based visualization methods can help address student misconceptions with challenging subjects , such as quantum mechanics , so the hope was that such a course would also provide students with increased experience with visualization tools , in a wide variety of areas , thereby giving them the ability to generate their own examples . with these motivations in mind , we developed a one - credit computational physics course along somewhat novel lines , first offered in the spring 2007 semester . inwhat follows , i review ( in sec .[ sec : description_course ] ) the structure of the course , then describe some of the homework - to - research related activities developed for the class ( in sec . [sec : description_activities ] ) , and finally briefly outline some of the lessons learned and conclusions drawn from this experimental computational physics course .an appendix contains a brief lecture - by - lecture description of the course as well as some data on student satisfaction with each lecture topic .based on a variety of inputs ( student responses to an early survey of interest , faculty expertise in particular programming languages and experience in their use in both pedagogical and research level applications , as well as practical considerations such as the ready accessibility of hardware and software in a convenient computer lab setting ) the course was conceptualized as a one - credit `` _ introduction to mathematica in physics _ '' course .the strategies outlined in the course syllabus to help achieve the goals suggested by the students are best described as follows : ( ) first use familiar problems from introductory physics and/or math courses to learn basic commands and programming methods .then use those techniques to probe harder physics problems at the junior - senior level , motivating the need for new skills and more extensive program writing to address junior - senior level physics problems not typically covered in standard courses . 
finally , extend and expand the programming experience in order to obtain results comparable to some appearing in the research literature .this ` vertical ' structure was intentionally woven with cross - cutting themes involving comparisons of similar computational methods across topics , including numerical solutions of differential equations , matrix methods , special functions , connections between classical and quantum mechanical results , etc .. in that context , the emphasis was almost always on breadth over depth , reviewing a large number of both physics topics and programming commands / methods , rather than focusing on more detailed and extensive code writing .the visualization of both analytic and numerical results in a variety of ways was also consistently emphasized .ideas for some lecture topics came from the wide array of ` physics and ' books available , - , but others were generated from past experience with teaching junior - senior level courses on ` core ' topics , pedagogical papers involving the use of computational methods and projects ( from the pages of ajp and elsewhere ) , and especially from the research literature . given my own interests in quantum mechanics and semi - classical methods , there was an emphasis on topics related to those areas .on the other hand , despite many excellent simulations in the areas of thermodynamics and statistical mechanics , because of my lack of experience in teaching advanced undergraduate courses on such topics , we covered only random walk processes in this general area . finally , the desire to make strong connections between research results and standardly seen topics in the undergraduate curriculum had a very strong affect on the choice of many components .weekly lectures ( generated with latex and printed into .pdf format ) were uploaded to a course web site , along with a number of ( uncompiled ) notebooks for each weeks presentation .links were provided to a variety of accompanying materials , including on - line resources , such as very useful mathworld ( ` http://mathworld.wolfram.com ` ) articles and carefully vetted _ wikipedia _( ` http://wikipedia.org/ ` ) entries , as well as .pdf copies of research papers , organized by lecture topic .the lecture notes were not designed to be exhaustive , as we often made use of original published papers as more detailed resources , motivating the common practice of working scientists to learn directly from the research literature . while there was no required text ( or one we even consulted regularly ) a variety of books ( including refs . - and others ) were put on reserve in the library . while the lecture notes and notebooks were ( and still are ) publicly available , because of copyright issues related to the published research papers , the links to those components were necessarily password protected .( however , complete publication information is given for each link so other users can find copies from their own local college or university subscriptions . )the web pages for the course have been revised slightly since the end of the spring 2007 semester , but otherwise represent fairly well the state of the course at the end of the first offering .the site will be hereafter kept ` as - is ' to reflect its state at this stage of development and the url is ` www.phys.psu.edu/~rick/math/phys497.html ` .we have included at the site an extended version of this paper , providing more details about the course as well as personal observations about its development and outcomes . 
a short list of topics covered ( by lecture ) is included in the appendix , and we will periodically refer to lectures below with the notation * l1 * , * l2 * , etc . in sec .[ sec : description_activities ] , but we will assume that readers with experience or interest in will download the notebooks and run them for more details .as an example of the philosophy behind the course structure , the first lecture at which serious commands were introduced and some simple code designed ( * l2 * ) , began with an extremely brief review ( via the on - line lecture notes ) of the standard e&m problem of the on - axis magnetic field of a helmholtz coil arrangement . this problem is discussed ( or at least assigned as a problem ) in many textbooks and requires only straightforward , if tedious , calculus ( evaluating up through a 4th derivative ) and algebra to find the optimal separation to ensure a highly uniform magnetic field at the center of two coils .a heavily commented sample program was used to ` solve ' this problem , which introduced students to many of the simplest constructs , such as defining and plotting functions , and some of the most obvious calculus and algebra commands , such as ` series [ ] ` , ` normal [ ] ` , ` coefficient [ ] ` , ` expand [ ] ` , and ` solve [ ] ` .( it helped to have a real pair of helmholtz coils where one could measure the separation with a ruler and compare to the radius ; lecture demonstrations , even for a computational physics course , are useful ! ) this simple exercise was then compared ( at a very cursory level ) to a much longer , more detailed notebook written by a former psu physics major ( now in graduate school ) as part of his senior thesis project dealing with designing an atom trap .links were provided to simple variations on this problem , namely the case of an anti - helmholtz coil , consisting of two parallel coils , with currents in opposite directions , designed to produce an extremely uniform magnetic field * gradient*. we were thus able to note that the initial investment involved in mastering the original program , could , by a very simple ` tweak ' of the notebook under discussion ( requiring only changes in a few lines of code ) solve a different , equally mathematically intensive problem almost for free .one of the very few examples of an explicit time - dependent solution of a quantum mechanical problem in the junior - senior level curriculum ( or standard textbooks at that level ) , in fact often the * only * such example , is the gaussian wavepacket solution of the 1d free - particle schrdinger equation .it is straightforward in to program readily available textbook solutions for this system and to visualize the resulting spreading wavepackets , allowing students to change initial conditions ( central position and momentum , initial spatial spread , etc . 
) in order to study the dependence on such parameters .plotting the real and imaginary parts of the wavefunction , not just the modulus , also reminds students of the connection between the ` wiggliness ' of the solution and the position - momentum correlations that develop as the wave packet evolves in time .this exercise was done early in the course ( * l4 * ) when introducing visualizations and animations , but relied only on ` modern physics ' level quantum mechanics , though most students were already familiar with this example from their junior - level quantum mechanics course .students can easily imagine that such gaussian examples are only treated so extensively because they can be manipulated to obtain closed - form solutions , and often ignore the connection between that special form and its role as the ground - state solution of the harmonic oscillator .recent advances in atom trapping have shown that bose - einstein condensates can be formed where the time - development of the wavefunction of the particles , initially localized in the ground - state of a harmonic trap , can be modeled by the free - expansion of such gaussian solutions after the trapping potential is suddenly removed .students can then take ` textbook - level ' programs showing the spreading of gaussian solutions and profitably compare them with more rigorous theoretical calculations ( using the gross - pitaevskii model ) showing the expected coherent behavior of the real and imaginary parts of the time - dependent phase of the wave function of the condensate after the trapping potential is turned off . while this comparison is itself visually interesting ,the experimental demonstration that the ` wiggles ' in the wavefunction are truly there comes most dramatically from the _ observation of interference between two bose condensates _ and one can easily extend simple existing programs to include two expanding gaussians , and ` observe ' the resulting interference phenomena in a simulation , including the fact that the resulting fringe contrast in the overlap region is described by a time - dependent spatial period given by where is the initial spatial separation of the two condensates ; some resulting frames of the animation are shown in fig [ fig : bec_pix ] . since the ( justly famous ) observations in ref . are destructive in nature , a simulation showing the entire development in time of the interference pattern is especially useful .the topic of wave phenomena in 1d and 2d systems , with and without boundary conditions , is one of general interest in the undergraduate curriculum , in both classical and quantum mechanical examples , and was the focus of * l5 * and * l6 * respectively .the numerical study of the convergence of fourier series solutions of a ` plucked string ' , for example , can extend more formal discussions in students math and physics coursework .more importantly , the time - dependence of solutions obtained in a formal way via fourier series can then also be easily visualized using the ability to ` animate [ ] ` in . 
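as a concrete example of this kind of visualization, the following numpy sketch (a stand-in for the course's mathematica notebooks, written in units where hbar = m = 1) superposes two free-particle gaussian packets initially separated by a distance d and evaluates the probability density at a later time; the fringe period in the overlap region should come out close to h t / (m d), i.e. 2 pi t / d in these units, and the packet width, separation and time below are illustrative values only.

```python
import numpy as np

def gaussian_packet(x, t, x0=0.0, a=1.0):
    """Free-particle Gaussian packet at rest, centred at x0 (units hbar = m = 1)."""
    s = 1.0 + 1j * t / a**2                                    # complex spreading factor
    return (np.pi * a**2) ** -0.25 / np.sqrt(s) * np.exp(-(x - x0) ** 2 / (2 * a**2 * s))

d, a, t = 20.0, 1.0, 30.0                                      # separation, width, time
x = np.linspace(-60.0, 60.0, 4000)
psi = gaussian_packet(x, t, -d / 2, a) + gaussian_packet(x, t, +d / 2, a)
density = np.abs(psi) ** 2

# locate the interference maxima near the centre and compare their spacing
# with the expected fringe period 2*pi*t/d
peaks = x[1:-1][(density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])]
central = peaks[np.abs(peaks) < 15.0]
print("measured fringe spacing:", np.diff(central).mean(),
      "  expected:", 2.0 * np.pi * t / d)
```

looping the same evaluation over a range of times (or wrapping it in an animation) reproduces the growth of the fringe spacing with time that makes the destructive condensate measurements so striking.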
bridging the gap between classical and quantum mechanical wave propagation in 1d systems with boundaries ( plucked classical strings versus the 1d quantum well ) , time - dependent gaussian - like wave packet solutions for the 1d infinite square well can be generated by a simple generalization of the fourier expansion , with numerically accurate approximations available for the expansion coefficients to allow for rapid evaluation and plotting of the time - dependent waveform ( in either position- or momentum - space . ) animations over the shorter - term classical periodicity as well as the longer term quantum wave packet revival time scales , allow students to use this simplest of all quantum models to nicely illustrate many of the revival ( and fractional revival ) structures possible in bound state systems , a subject which is not frequently discussed in undergraduate textbooks at this level .examples of the early observations of these behaviors in rydberg atoms ( see , _ e.g. _ ref . ) are then easily appreciated in the context of a more realistic system with which students are well - acquainted , and are provided as links .students at the advanced undergraduate level will have studied the behavior of many differential equations in their math coursework ( sometimes poorly motivated ) , along with some standard , more physically relevant , examples from their core physics curriculum .less familiar mathematical systems , such the lotka - volterra ( predator - prey ) equations , which can be used to model the time - dependent variations in population models , are easily solved in mathematica using ` ndsolve [ ] ` , and these were one topic covered in * l9*. the resulting solutions can be compared against linearized ( small deviations from fixed population ) approximations for comparison with analytic methods , but are also nicely utilized to illustrate ` time - development flow ' methods for coupled first - order equations .for example , the lotka - volterra equations can be written in the form and one can use mathematica functions such as ` plotvectorfield [ ] ` to plot in the plane to illustrate the ` flow ' of the time - dependent solutions , themselves graphed using ` parametricplot [ ] ` . note that the lotka - volterra equations can also be integrated exactly to obtain implicit solutions , for which ` implicitplot [ ] ` can be used to visualize the results .these methods of analysis , while seen here in the context of two coupled first - order differential equations , are just as useful for more familiar single second - order equations of the form by writing to form a pair of coupled first - order equations , a common trick used when implementing tools such as the runge - kutta method . with this approach ,familiar problems such as the damped and undamped harmonic oscillator can also be solved and visualized by the same methods , very naturally generating phase space plots .more generally , such examples can be used to emphasize the importance of the mathematical description of nature in such life - science related areas as biophysics , population biology , and ecology .in fact , ` phase - space ' plots of the data from one of the early experimental tests of the lotka - volterra description of a simplified _ in vitro _biological system are a nice example of the general utility of such methods of mathematical physics .examples of coupled non - linear equations in a wide variety of physical systems can be studied in this way , _e.g. _ ref . 
, to emphasize the usefulness of mathematical models , and computer solutions thereof , across scientific disciplines .other non - linear problems were studied in * l8 * and * l9 * using the ` ndsolve [ ] ` utility , including a non - linear pendulum .the motion of a charged particle in spatially- or temporally - dependent magnetic fields was also solved numerically , to be compared with closed - form solutions ( obtained using ` dsolve [ ] ` ) for the more familiar case of a uniform magnetic field , treated earlier in * l3*. the study of the motion of a particle moving under the influence of an inverse square law is one of the staples of classical mechanics , and every undergraduate textbook on the subject treats some aspect of this problem , usually in the context of planetary motion and kepler s problem . in the context of popular textbooks , , the strategy is almost always to reduce the two - body problem to a single central - force problem , use the effective potential approach to solve for using standard integrals , and to then identify the resulting orbits with the familiar conic sections .solving such problems directly , using the numerical differential equation solving ability in , especially ` ndsolve [ ] ` , was the single topic of * l10*. for example , one can first easily check standard ` pencil - and - paper ' problems , such as the time to collide for two equal masses released from rest , as perhaps the simplest 1d example . given a program solving this problem , one can easily extend it to two - dimensions to solve for the orbits of two unequal mass objects for arbitrary initial conditions .given the resulting numerically obtained and , one can then also plot the corresponding relative and center - of - mass coordinates to make contact with textbook discussions .effective one - particle problems can also be solved numerically to compare most directly with familiar derivations , but with monitoring of energy and angular momentum conservation made to test the numerical accuracy of the ` ndsolve [ ] ` utility ; one can then also confirm numerically that the components of the lenz - runge vector are conserved . it is also straightforward to include the power - law exponent of the force law ( with for the coulomb / newton potential ) as a tunable parameter , and note that closed orbits are no longer seen when is changed from its inverse - square - law value , but are then recovered as one moves ( far away ) to the limit of the harmonic oscillator potential , and ( or ) , as discussed in many pedagogical papers pointing out the interesting connections between these two soluble problems . with such programs in hand ,it is relatively easy to generalize 2-body problems to 3-body examples , allowing students to make contact with both simple analytic special cases and more modern research results on special classes of orbits , as in ref .the two most famous special cases of three equal mass particles with periodic orbits are shown in fig .[ fig : three_body ] ( a ) and ( b ) ( and were discovered by euler and lagrange respectively ) .they are easily analyzed using standard freshman level mechanics methods , and just as easily visualized using simulations .an explicit example of one of the more surprising ` figure - eight ' type trajectories ( as shown in fig . [ fig : three_body](c ) ) posited in ref . 
was discovered and discussed in detail in ref .it has been cited by christian , belloni , and brown as a nice example of an easily programmable result in classical mechanics , but arising from the very modern research literature of mathematical physics . in all three cases , it s straightforward to arrange the appropriate initial conditions to reproduce these special orbits , but also just as easy to drive them away from those values to generate more general complex trajectories , including chaotic ones .for example , the necessary initial conditions for the ` figure - eight ' orbit are given by the study of such so - called _ choreographed _ n - body periodic orbits has flourished in the literature of mathematical physics and a number of web sites illustrate some very beautiful , if esoteric , results .students expressed a keen interest in having more material about probability and statistical methods , so there was one lecture on the subject ( * l11 * ) which was commented upon very favorably in the end - of - semester reviews ( but not obviously any more popular in the numerical rankings ) dealing with simple 1d and 2d random walk simulations .this included such programming issues as being able to reproduce specific configurations using constructs such as the ` randomseed [ ] ` utility .such topics are then very close indeed to more research related methods such as the diffusion monte carlo approach to solving for the ground state of quantum systems , but also for more diverse applications of brownian motion problems in areas such as biophysics .the only topic relating to probability was a very short discussion of the ` birthday problem ' , motivated in part by the fact that the number of students in the course was always very close to the ` break even ' ( 50 - 50 probability ) number for having two birthdays in common ! the problem of the quantum bouncer , a particle of mass confined to a potential of the form is a staple of pedagogical articles where a variety of approximation techniques can be brought to bear to estimate the ground state energy ( variational methods ) , the large energy eigenvalues ( using wkb methods ) , and even quantum wave packet revivals .the problem can also be solved exactly , in terms of airy functions , for direct comparison to both approximation and numerical results . while this problem might well have been historically considered of only academic interest , experiments at the ill ( institute laue langevin ) , have provided evidence for the _ quantum states of neutrons in the earth s gravitational field _ where the bound state potential for the neutrons ( in the vertical direction at least ) is modeled by eqn .( [ bouncer_potential ] ) , using . 
in the context of our course, students studied this system first in * l9 * in the context of the shooting method of finding well - behaved solutions of the 1d schrdinger equation , which then correspond to the corresponding quantized energy eigenvalues .the analogous ` half - oscillator ' problem , namely the standard harmonic oscillator , but with an infinite wall at the origin , can be used as a simple starting example for this method , motivating the boundary conditions ( and arbitrary ) imposed by the quantum bouncer problem .it can then be used as a testbed for the shooting method , seeing how well the exact energy eigenvalues , namely the values with odd , are reproduced .the change to dimensionless variables for the neutron - bouncer problem already provides insight into the natural length and energy scales of the system , allowing for an early comparison to the experimental values obtained in refs . , .in fact , the necessary dimensionful combinations of fundamental parameters ( , , ) can be reduced ( in a sledge - hammer sort of way ) using the built - in numerical values of the physical constants available in ( loading ` < < miscellaneous\`physicalconstants\ `` ) which the students found amusing , although did not automatically recognize that ` joule = kilogram meter^2/second^2 ` . the numerically obtained energy eigenvalues ( obtained by bracketing solutions which diverge to ) can be readily obtained and compared to the ` exact ' values , but estimates of the accuracy and precision of the shooting method results are already available from earlier experience with the ` half - oscillator ' example .then , in the lecture on special functions ( * l12 * ) this problem is revisited using the exact airy function solutions , where one can then easily obtain the properly normalized wavefunctions for comparison with the results shown in fig . 1 of ref. , along with quantities such as the expectation values and spreads in position , all obtained using the ` nintegrate [ ] ` command . once experience is gained with using the ` findroot [ ] ` option to acquire the airy zeros ( and corresponding energies ) , one can automate the entire process to evaluate all of the parameters for a large number of low - lying states using a ` do [ ] ` structure . 
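the same bookkeeping is easy to reproduce outside of mathematica; the short scipy sketch below (an illustration, not the course notebook) takes the zeros of ai directly, converts them to bouncer energies through the characteristic scale (m g^2 hbar^2 / 2)^(1/3), and evaluates the numbers for a neutron in the earth's gravitational field, which land in the pev and tens-of-micrometre range reported in the experiments.

```python
import numpy as np
from scipy.special import ai_zeros
from scipy.constants import hbar, g, eV, neutron_mass as m

# characteristic energy and length scales of the linear well V(z) = m g z, z > 0
E0 = (m * g**2 * hbar**2 / 2.0) ** (1.0 / 3.0)
z0 = (hbar**2 / (2.0 * m**2 * g)) ** (1.0 / 3.0)

a_n = ai_zeros(5)[0]          # first five (negative) zeros of Ai
E_n = -a_n * E0               # bouncer eigenvalues: E_n = |a_n| * E0

for n, E in enumerate(E_n, start=1):
    # classical turning height z_n = E_n / (m g), of order tens of micrometres
    print(f"n={n}:  E = {E / eV * 1e12:6.2f} peV,"
          f"   turning point = {E / (m * g) * 1e6:5.1f} um")
```

the first energy comes out near 1.4 pev with a turning height of roughly 14 micrometres, the same order of magnitude as the heights resolved in the ill measurements.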
obtaining physical values for such quantities as for the low - lying states was useful as their macroscopic magnitudes ( s of ) play an important role in the experimental identification of the quantum bound states .more generally , the study of the and solutions of the airy differential equation provided an opportunity to review general properties of second - order differential equations in 1d of relevance to quantum mechanics .topics discussed in this context included the behavior of the airy solutions for ( two linearly independent oscillatory functions , with amplitudes and ` wiggliness ' related to the potential ) and for ( exponentially growing and decaying solutions ) with comparisons to the far more familiar case arising from the study of a step potential .following up on * l6 * covering 2d wave physics , a section of * l12 * on special functions was devoted to bessel function solutions of the 2d wave equation for classical circular drumheads and for quantum circular infinite wells .many features of the short- and long - distance behavior of bessel functions can be understood in terms of their quantum mechanical analogs as free - particle solutions of the 2d schrdinger equation , and these aspects are emphasized in the first discussion of their derivation and properties in the lecture notes .such solutions can then be compared to now - famous results analyzing the _ confinement of electrons to quantum corrals on a metal surface _ using just such a model of an infinite circular well .the vibrational modes of circular drumheads can , of course , also be analyzed in this context , and a rather focused discussion of the different classical oscillation frequencies obtained from the bessel function zeros was motivated , in part , by an obvious error in an otherwise very nice on - line simulation of such phenomena .the site ` http://www.kettering.edu/~drussell/demos/membranecircle/circle.html ` displays the nodal patterns for several of the lowest - lying vibrational modes , but the oscillations are synched up upon loading the web page , so that they all appear to have the same oscillation frequency ; hence an emphasis in this section on ` bug - checking ' against various limiting cases , the use of common sense in simulations , and the perils of visualization .the discussions of the energy eigenvalues ( normal mode frequencies ) for a variety of 2d infinite well geometries ( drumhead shapes ) generated earlier in the semester , allowed us to focus on using information encoded in the ` spectra ' arising from various shapes and its connection to classical and quantum results in * l13*. for example , the weyl area rule for the number of allowed -states in the range for a 2d shape of area and perimeter is given by \, dk \ , , \ ] ] which upon integration gives identical results in quantum mechanics are obtained by using the free - particle energy connection so that in the context of the schrdinger equation for free - particles bound inside 2d infinite well ` footprints ' , we have given a long list of ( or ) values for a given geometry , it is straightforward to order them and produce the experimental ` staircase ' function and so the weyl - like result of eqn .( [ weyl_prediction ] ) will be an approximation to a smoothed out version of the data. 
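for readers who want to reproduce that comparison outside of mathematica, here is a small numpy sketch for the simplest case of a square footprint of side l (a stand-in for the course notebooks, with l and the k range chosen arbitrarily); it builds the exact staircase from the dirichlet eigenvalues k_{mn} = (pi / l) sqrt(m^2 + n^2) and overlays the standard smooth weyl estimate with area and perimeter terms, n(k) ≈ a k^2 / (4 pi) - p k / (4 pi).

```python
import numpy as np
import matplotlib.pyplot as plt

L = 1.0                                    # side of the square well (illustrative)
A, P = L**2, 4.0 * L                       # area and perimeter

# exact spectrum of the 2d square well with Dirichlet boundary conditions
mmax = 80
m, n = np.meshgrid(np.arange(1, mmax + 1), np.arange(1, mmax + 1))
k_exact = np.sort((np.pi / L) * np.sqrt(m**2 + n**2).ravel())

k = np.linspace(0.0, 60.0, 600)
N_stair = np.searchsorted(k_exact, k, side="right")   # number of levels with k' <= k
N_weyl = A * k**2 / (4.0 * np.pi) - P * k / (4.0 * np.pi)

plt.step(k, N_stair, where="post", label="exact staircase")
plt.plot(k, N_weyl, "--", label="smooth Weyl estimate")
plt.xlabel("k"); plt.ylabel("N(k)"); plt.legend(); plt.show()
```

restricting the double sum to m > n, and changing the area and perimeter accordingly, turns the same few lines into the isosceles right triangle case discussed next.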
a relatively large number of ` exact ' solutions are possible for such 2d geometries , including the square , rectangle , triangle ( isosceles triangle obtained from a square cut along the diagonal , ) , equilateral ( ) triangle and variations thereof , as well as circular or half - circular wells , and many variations .( we note that current versions of give extensive lists of zeros of bessel functions , by loading `< < numericalmath\`besselzeros\ `` , which allows for much more automated manipulations of solutions related to the circular cases . ) as an example , we show in fig .[ fig : weyl_triangle ] a comparison between the ` theoretical ' result in eqn .( [ weyl_prediction ] ) and the ` experimental ' data in eqn .( [ staircase ] ) for the isosceles right triangle .in this case , the area and perimeter are and respectively , and the allowed values are where , namely those for the square but with a restriction on the allowed ` quantum numbers ' . while that type of analysis belongs to the canon of classical mathematical physics results , more modern work on periodic orbit theoryhas found a much deeper relationship between the quantum mechanical energy eigenvalue spectrum and the classical closed orbits of the same system , . given the spectra for the infinite well ` footprints ' mentioned above , it is easy to generate a minimal program to evaluate the necessary fourier transforms to visualize the contributions of the familiar ( and some not so familiar ) orbits in such geometries ; in fact , an efficient version of this type of analysis is used as an example of good programming techniques in ref .links to experimental results using periodic orbit theory methods in novel contexts are then possible . these types of heavily numerical analyses , which either generate or make use of energy spectra , can lead to interesting projects based on pedagogical articles which reflect important research connections , such as in refs . and . in the original plan ,the last two lectures were to be reserved for examples related to chaos .we did indeed retain * l14 * for a focused discussion of chaotic behavior in a simple deterministic system , namely the logistic equation , using this oft - discussed calculational example , which requires only repeated applications of a simple iterative map of the form , as one of the most familiar examples , citing its connections to many physical processes .the intent was to then continue in * l15 * earlier studies of the ` real ' pendulum ( to now include driving forces ) to explore the wide variety of possible states , including chaotic behavior .based on student comments early in the semester , however , there was a desire among many of the students ( especially seniors ) to see examples of programs being used for ` real - time ' research amongst the large graduate student population in the department .one senior grad student , cristiano nisoli , who had just defended his thesis , kindly volunteered to give the last lecture , demonstrating in detail some of his notebooks and explaining how the results they generated found their way into many of his published papers .examples included generating simple graphics ( since we made only occasional use of ` graphics [ ] ` elements and absolutely no use of palette symbols ) to much more sophisticated dynamical simulations ( some using genetic algorithm techniques ) requiring days of running time . 
while some of the physics results were obviouslyfar beyond the students experience , a large number of examples of command structures and code - writing methods were clearly recognizable from programs we d covered earlier in the semester , including such ` best - practice ' checks as monitoring ( numerically ) the total energy of a system , in this case , in various verlet algorithms .some of the notebooks and published papers he discussed are linked at the course web site .evaluation and assessment can be one of the most challenging aspects of any educational enterprise , and many scientists may not be well trained to generate truly meaningful appraisals of their own pedagogical experiments . in the case of this course , where the goals were less specific and fixed than in a standard junior - senior level course in a traditional subject area , that might be especially true .since the course was not designed to cover one specific set of topics , the use of well - known instruments for assessment such as the fci and others for concepts related to topics more often treated at the introductory level , or specialized ones covering more advanced topics , did not seem directly relevant .weekly graded homework assignments were used to evaluate the students , but during the entire development and delivery of the course , there were also attempts at repeatedly obtaining student feedback , at regular intervals . some of the results can be shared here , but we stress that they are only of the ` student satisfaction ' type .we note that in the spring 2007 semester , there were a total of 23 students enrolled in this trial offering , 12 juniors and 11 seniors , 4 female and 19 male , 21 physics majors and 2 majors in astronomy / astrophysics .students in almost every course at penn state are asked to provide anonymous student ratings of teaching effectiveness each semester .four questions are common to every form , including _ rate the overall quality of the course _ and _ rate the overall quality of the instructor _ , all on a scale from 1 - 7 . for the initial offering of this course ,the results for those two questions ( obtained after the semester was over and grades were finalized and posted ) were found to be 6.05/7.00 and 6.89/7.00 respectively .additional ` in - house ' departmental evaluation forms were used to solicit students comments , and were also only returned after the semester was completed .these forms are very open - ended and only include instructions such as _ in the spaces below , please comment separately about the and about the _ .all of the resulting comments were positive , and consistent with similar feedback obtained from the ` for - credit ' surveys .while such results are certainly encouraging , recall that the students registered for the course were highly self - selected and all rightly answered in the same surveys that this course was a true elective and not required in our major .one of the very few explicit goals was to try to encourage students to make use of in their other coursework , and a question related to just such outcomes was posed in a final survey .the vast majority of students replied that they had used it somewhat or even extensively in their other courses that semester .for juniors , the examples were quantum mechanics ( doing integrals , plotting functions ) , the complex analysis math course ( doing integrals to compare to results obtained by contour integration ) and to some extent in the statistical mechanics course . 
for seniors ,the typical uses were in their physics elective courses ( especially the math intensive special and general relativity elective ) , a senior electronics course ( where the professor has long made use of ` canned ' programs ) and senior level mathematics electives being taken to fulfill the requirements of a minor or second major . at least onestudent used techniques to complete an honors option in a course he was taking , but the majority seemed to use in either ` graphing calculator ' or ` math handbook ' modes , and not for further extensive programming .finally , while i used as the programming tool , set in a linux classroom , for the development and delivery of this course , these choices were only because of my personal experience with the software and the readily available access to the hardware , as i have no very strong sectarian feelings about either component .i think that many physics faculty with facility in languages such as maple or matlab , access to a computer lab / classroom facility , and personal interests in modern research in a wide variety of areas can rather straightforwardly generate a similar course .i only suggest that the approach , namely using introductory physics and math problems to motivate the use of an integrated programming language , which can then be used to bridge the gap between more advanced coursework and research results , can be a fruitful one . *acknowledgments * 0.1 cm i thank r. peet for asking an important question which eventually led to the development of this class and am very grateful to j. albert and j. sofo for their help in preparing various aspects of this project .i want to thank c. nisoli for his presentation in class , and for his permission to post his related materials , and w. christian for a careful reading of an early draft of the manuscript .finally , and perhaps most importantly , i wish to thank all of the students in phys 497c in spring 2007 for their contributions to the development of the course .we include a rough outline of the course material , organized by lecture , but remind readers that the entire set of materials is available on - line at the web site mentioned in sec .[ sec : description_course ] . the numbers ( with error bars ) after each lecture are the results of student evaluations of each lecture , asking for ratings of _ ` ... interest , understandability , and general usefulness ... ' _ on a scale from 1 ( low ) to 3 ( medium ) to 5 ( high ) , combining all aspects of each presentation .differences in the ratings between the junior and senior groups were typically not significant so the results for all students have been combined , except for * l12*. the last two lectures which covered material which students had nt ever seen in their undergraduate coursework , were somewhat less popular , although some seniors cited * l14 * as the most interesting of all . * l9 * - numerical solutions of differential equations ii ( ) * l10 * - classical gravitation ( ) * l11 * - probability and statistics ( ) * l12 * - special functions and orthogonal polynomials in classical and quantum mechanics ( for juniors , but for seniors ) 99 d. cook , `` computers in the lawrence physics curriculum - part i '' , comput .11 * , 240 - 245 ( 1997 ) ; `` computers in the lawrence physics curriculum - part ii '' , comput .phys . * 11 * , 331 - 335 ( 1997 ) ; _ computation in the lawrence physics curriculum : a report to the national science foundation , the w. m. 
keck foundation , and departments of physics on twenty years of curricular development at lawrence university_. r. h. landau , h. kowallik , and m. j. pez , `` web - enhanced undergraduate course and book for computational physics '' , comput . phys .* 12 * , 240 - 247 ( 1998 ) ; r. landau , `` computational physics : a better model for physics education ? '' comp .* 8 * , 22 - 30 ( 2006 ) ; see also ` http://www.physics.oregonstate.edu/~rubin/cpug ` .h. gould , `` computational physics and the undergraduate curriculum '' , comput .. comm . * 127 * , 6 - 10 ( 2000 ) .r. l. spencer , `` teaching computational physics as a laboratory sequence '' , am .* 73 * , 151 - 153 ( 2005 ) .see also the special issue of computing in science and engineering , * 8 * ( 2006 ) . c. singh , m. belloni , and w. christian , `` improving students understanding of quantum mechanics '' , phys . today * 59 * , 43 - 49 , august ( 2006 ). j. m. feagin , _ quantum methods with mathematica _( springer - verlag , new york , 1994 ) .p. tam , _ a physicist s guide to mathematica _ ( academic press , san diego , 1997 )zimmermann and f. i. olness , _mathematica for physics _ ,2nd edition ( addison - wesley , san francisco , 2002 ) .s. hassani , _ mathematical methods using mathematica for students of physics and related fields _ ( springer - verlag , new york , 2003 ) .this book is the accompanying volume to a purely ` pencil - and - paper ' text on math methods by the same author , _ mathematical methods for students of physics and related fields _( springer - verlag , new york , 2000 ) .d. h. e. dubin , _ numerical and analytical methods for scientists and engineers using mathematica _ ( wiley - interscience , hoboken , 2003 ) .m. trott , _ the mathematica guidebooks _ ( programming , graphics , numerics , symbolics ) ( springer - verlag , new york , 2004 ) .g. baumann , _mathematica for theoretical physics _, volumes i and ii , ( springer , new york , 2005 ) j. tobochnik , h. gould , and j. machta , `` understanding temperature and chemical potential using computer simulations '' , am . j. phys . * 73 * , 708 - 716 ( 2005 ) ; s .- h .tsai , h. k. lee , and d. p. landau , `` molecular and spin dynamics simulations using modern integration methods '' , am . j. phys .* 73 * , 615 - 624 ( 2005 ) .d. j. griffiths , _ introduction to electrodynamics _ , 3rd edition ( prentice - hall , upper saddle river , 1999 ) , p. 249; j. r. reitz , f. j. milford , and r. w. christy , _ foundations of electromagnetic theory _ , 4th edition ( addison - wesley , reading , 1993 ) , p. 201 - 203 .r. w. robinett and l. c. bassett , `` analytic results for gaussian wave packets in four model systems : i. visualization on the kinetic energy '' , found .* 17 * , 607 - 625 ( 2004 ) ; r. w. robinett , m. a. doncheski , and l. c. bassett , `` simple examples of position - momentum correlated gaussian free - particle wave packets in one - dimension with the general form of the time - dependent spread in position '' , found .. lett . * 18 * , 445 - 475 ( 2005 ) .h. wallis , a. rhrl , m. naraschewski and a. schenzle , `` phase - space dynamics of bose condensates : interference versus interaction '' , phys . rev . *a55 * , 2109 - 2119 ( 1997 ) .e. w. hagley _et al . _ ,`` measurement of the coherence of a bose - einstein condensate '' , phys . rev . lett . * 83 * , 3112 - 3115 ( 1999 ) .m. r. andrews , c. g. townsend , h .- j .miesner , d. s. durfee , d. m. kurn , and w. 
ketterle , `` observation of interference between two bose condensates '' , science * 275 * , 637 - 641 ( 1997 ) .m. a. doncheski , s. heppelmann , r. w. robinett , and d. c. tussey , `` wave packet construction in two - dimensional quantum billiards : blueprints for the square , equilateral triangle , and circular cases '' , am . j. phys . * 71 * , 541 - 557 ( 2003 ) .d. f. styer , `` quantum revivals versus classical periodicity in the infinite square well '' , am .* 69 * , 56 - 62 ( 2002 ) .r. bluhm , v. a. kostelecky , and j. porter , `` the evolution and revival structure of localized wave packets '' , am . j. phys . *64 * , 944 - 953 ( 1996 ) . for a review of quantum wave packet revivals ,see r. w. robinett , `` quantum wave packet revivals '' , phys .rep . * 392 * , 1 - 119 ( 2004 ) .j. a. yeazell , m. mallalieu , and c. r. stroud , jr .`` observation of the collapse and revival of a rydberg electronic wave packet '' , phys .rev . lett . * 64 * , 2007 - 2010 ( 1990 ) .a. j. lotka , `` analytical note on certain rhythmic relations in organic systems '' , proc .* 6 * 410 - 415 ( 1920 ) ; _ elements of physical biology _ ( williams and wilkins , new york , 1925 ) ; v. volterra , `` variazioni e fluttauazioni del numero di individui in specie animali conviventi '' , memorie dell - academia dei lincei * 2 * , 31 - 113 ( 1926 ) ; _ leon sur la theorie mathematique de la lutte pour le vie _ ( gauthier - villars , paris , 1931 ) .g. f. gause , `` experimental demonstration of volterra s periodic oscillations in the numbers of animals '' , br .* 12 * , 44 - 48 ( 1935 ) .i. boutle , r. h. s. taylor , and r. a. rmer , `` el nio and the delayed action oscillator '' , am . j. phys .* 75 * , 15 - 24 ( 2007 ) .j. b. marion and s. t. thornton , _ classical dynamics of particles and systems _, 5th edition ( brooks / cole , belmont ca , 2004 ) .g. r. fowles and g. l. cassiday , _ analytical mechanics _ , 7th edition ( brooks / cole , belmont ca , 2004 ) .see , for example , problem 8 - 5 of ref .m. mccall , `` gravitational orbits in one dimension '' , am .* 74 * , 1115 - 1119 ( 2006 ) .h. kaplan , `` the runge - lenz vector as an `` extra '' constant of the motion '' , am . j. phys .* 54 * , 157 - 161 ( 1986 ) . for background on the development of the lenz - runge vector , see h. goldstein , `` prehistory of the `` runge - lenz '' vector '' , am . j. phys .* 43 * , 737 - 738 ( 1975 ) ; `` more on the prehistory of the laplace or runge - lenz vector '' , am .* 44 * , 1123 - 1124 ( 1976 ) .e.g. _ , a. k. grant and j. l. rosner , `` classical orbits in power - law potentials '' , am .* 62 * , 310 - 315 ( 1994 ) and many references therein . c. moore , `` braids in classical dynamics '' , phys .* 70 * , 3675 - 3679 ( 1993 ) .a. chenciner and r. montgomery , `` a remarkable periodic solution of the three - body problem in the case of equal masses '' , ann .math . * 152 * , 881 - 901 ( 2000 ) . w. christian , m. belloni , and d. brown , `` an open - source xml framework for authoring curricular material '' , comp .* 8 * , 51 - 58 ( 2006 ) .e.g. _ r. montgomery , `` a new solution to the three - body problem '' , notices of the ams , * 48 * , no . 5 , 471 - 481 ( 2001 ) ; a. chenciner , joseph gerver , r. montgomery , and c. sim , `` simple choreographic motions of bodies : a preliminary study '' in _ geometry , mechanics , and dynamics _ , edited by p. newton , p. holmes , and a. 
weinstein ( springer - verlag , new york , 2002 ) , pp .288 - 309 see ` http://www.soe.ucsc.edu/~charlie/3body/ ` for animations of the three - body ` figure - eight ' as well as many more complex n - body periodic orbits .see also ` http://www.ams.org/featurecolumn/archive/orbits1.html ` . for pedagogical articles on the quantum monte carlo method ,see i. kosztin , b. faber , and k. schulten , `` introduction to the diffusion monte carlo method '' , am .* 64 * , 633 - 644 ( 1996 ) ; h. l. cuthbert and s. m. rothstein , `` quantum chemistry without wave functions : diffusion monte carlo applied to and '' , j. chem .educ . , * 76 * , 1378 - 1379 ( 1999 ) . both cite the very readable description by j. b. anderson , `` a random - walk simulation of the schrdinger equation : '' , j. chem .phys . * 63 * , 1499 - 1503 ( 1975 ) , but see also the reference book by the same author , _ quantum monte carlo : origins , development , applications _ ( oxford university press , new york , 2007 ) .j. f. beausang , c. zurla , l. finzi , l. sullivan , and p. c. nelson , `` elementary simulation of tethered brownian motion '' , am .* 75 * , 520 - 523 ( 2007 ) .p. w. langhoff , `` schrdinger particle in a gravitational well '' , am .* 39 * , 954 - 957 ( 1971 ) ; r. l. gibbs , `` the quantum bouncer '' , am .* 43 * , 25 - 28 ( 1975 ) ; r. d. desko and d. j. bord , `` the quantum bouncer revisited '' , am .* 51 * , 82 - 84 ( 1983 ) ; d. a. goodings and t. szeredi , `` the quantum bouncer by path integral method '' , am .* 59 * , 924 - 930 ( 1991 ) ; s. whineray , `` an energy representation approach to the quantum bouncer '' , am .* 60 * , 948 - 950 ( 1992 ) .j. gea - banacloche , `` a quantum bouncing ball '' , am . j. phys .* 67 * , 776 - 782 ( 1999 ) ; o. valle , `` comment on ` a quantum bouncing ball ' '' , am .* 68 * , 672 - 673 ( 2000 ) ; d. m. goodmanson , `` a recursion relation for matrix elements of the quantum bouncer '' , am .* 68 * , 866 - 868 ( 2000 ) ; m. a. doncheski and r. w. robinett , `` expectation value analysis of wave packet solutions for the quantum bouncer : short - term classical and long - term revival behaviors '' , am .* 69 * , 1084 - 1090 ( 2001 ) .v. v. nesvizhevsky _et al . _ ,`` quantum states of neutrons in the earth s gravitational field '' , nature * 415 * , 297 - 299 ( 2002 ) . v.v. nesvizhevsky __ , `` measurement of quantum states of neutrons in th earth s gravitational field '' , phys . rev .d * 67 * , 102002 ( 2003 ) ; v. v. nesvizhevsky _et al . _ , `` study of the neutron quantum states in the gravity field '' , eur .j. c * 40 * , 479 - 491 ( 2005 ) .m. f. crommie , c. p. lutz , and d. m. eigler , `` confinement of electrons to quantum corrals on a metal surface '' , science * 262 * , 218 - 220 ( 1993 ) .p. m. morse and h. feshbach , _ methods of theoretical physics : part i _ , ( mcgraw - hill , new york , 1953 ) , pp . 759 - 762 .li , `` a particle in an isosceles right triangle '' , j. chem* 61 * , 1034 ( 1984 ) . c. jung ,`` an exactly soluble three - body problem in one - dimension '' , can . j. phys . * 58 * , 719 - 728 ( 1980 ) ; p. j. richens and m. v. berry , `` pseudointegrable systems in classical and quantum mechanics '' , physica d * 2 * , 495 - 512 ( 1981 ) ; j. mathews and r. l. walker , _ mathematical methods of physics _ ( w. a. benjamin , menlo park , 1970 ) , 2nd ed . , pp .237 - 239 ; w. -k .li and s. m. blinder , `` particle in an equilateral triangle : exact solution of a nonseparable problem '' , j. chem .educ . * 64 * , 130 - 132 ( 1987 ) .r. 
w. robinett , `` energy eigenvalues and periodic orbits for the circular disk or annular infinite well '' , surf .lett . * 5 * , 519 - 526 ( 1998 ) ; `` periodic orbit theory of a continuous family of quasi - circular billiards '' , j. math . phys . * 39 * , 278 - 298 ( 1998 ) ; `` quantum mechanics of the two - dimensional circular billiard plus baffle system and half - integral angular momentum '' , eur .* 24 * , 231 - 243 ( 2003 ) . m. c. gutzwiller , _ chaos in classical and quantum mechanics _( springer - verlag , berlin , 1990 ) . m. brack and r. k. bhaduri , _ semiclassical physics _( addison - wesley , reading , 1997 ) .r. w. robinett , `` visualizing classical periodic orbits from the quantum energy spectrum via the fourier transform : simple infinite well examples '' , am . j. phys . * 65 * , 1167 - 1175 ( 1997 ) .m. a. doncheski and r. w. robinett , `` quantum mechanical analysis of the equilateral triangle billiard : periodic orbit theory and wave packet revivals '' , ann . phys . *299 * , 208 - 277 ( 2002 ) .see ref . ( programming volume ) , pp .stckmann and j. stein , `` ` quantum ' chaos in billiards studied by microwave absorption '' , phys . rev* 64 * , 2215 - 2218 ( 1990 ) .d. l. kaufman , i. kosztin , and k. schulten , `` expansion method for stationary states of quantum billiard '' , am .* 67 * 133 - 141 ( 1999 ) .t. timberlake , `` random numbers and random matrices : quantum chaos meets number theory '' , am .* 74 * , 547 - 553 . , 2nd edition , edited by p. cvitanovi , ( adam hilger , bristol , 1989 ) . for examples at the pedagogical level , and many references to the original research literaturesee , for example , b. duchesne , c. w. fischer , c. g. gray , and k. r. jeffrey , `` chaos in the motion of an inverted pendulum : an undergraduate laboratory experiment '' , am. j. phys . *59 * , 987 - 992 ( 1991 ) ; h. j. t. smith and j. a. blackburn , `` experimental study of an inverted pendulum '' , am. j. phys . *60 * , 909 - 911 ( 1992 ) ; j. a. blackburn , h. j. t. smith , and n. grnbech - jensen , `` stability and hopf bifurcations in an inverted pendulum '' , am. j. phys . * 60 * 903 - 908 ( 1992 ) ; r. deserio , `` chaotic pendulum : the complete attractor '' , am . j. phys . *71 * , 250 - 257 ; g. l. baker , `` probability , pendulums , and pedagogy '' , am .* 74 * , 482 - 489 ( 2006 ) .for example , dr .nisoli discussed his contributions to the following publications : c. nisoli _ et al ._ , `` rotons and solitons in dynamical phyllotaxis '' , preprint ( 2007 ) .r. wang _ et al ._ , `` artificial spin ice in a geometrically frustrated lattice of nanoscale ferromagnetic islands '' , nature * 446 * , 102 - 104 ( 2007 ) ; c. nisoli _ et al ._ , `` ground state lost but degeneracy found : the effective thermodynamics of artificial spin ice '' , phys .* 98 * , 217203 ( 2007 ) .d. hestenes , m. wells , and g. swackhammer , `` force concept inventory '' , phys .teach . * 30 * 141 - 158 ( 1992 ) ; d. hestenes and m. wells , `` a mechanics baseline test '' , phys .30 * , 159 - 166 ( 1992 ) ; r. k. thornton and d. sokoloff , `` assessing student learning of newton s laws : the force and motion conceptual evaluation and the evaluation of active learning laboratory and lecture curricula '' , am .* 66 * , 338 - 352 ( 1998 ) ; t. okuma , c. hieggelke , d. maloney , and a. 
van heuvelen , `` developing conceptual surveys in electricity and magnetism '' , announcer * 28 * , 81 ( 1998 ) ; `` preliminary interpretation of the cse / csm / csem student results '' , _ ibid ._ , * 29 * , 82 ( 1999 ) ; `` some results from the conceptual survey of electricity and magnetism '' , _ ibid . _ , * 30 * 77 ( 2000 ) .c. singh , `` student understanding of quantum mechanics '' , am .* 69 * , 885 - 895 ( 2001 ) ; e. cataloglu and r. w. robinett , `` testing the development of student conceptual and visualization understanding in quantum mechanics through the undergraduate career '' , am .* 70 * , 238 - 251 ( 2002 ) .
|
we describe the development of a junior - senior level course for physics majors designed not only to teach skills in support of their undergraduate coursework , but also to introduce students to modern research - level results . standard introductory and intermediate level physics homework - style problems are used to teach commands and programming methods , which are then applied , in turn , to more sophisticated problems in some of the core undergraduate subjects , along with making contact with recent research papers in a variety of fields .
|
the rapid proliferation of wireless devices and related services has resulted in an exploding demand for additional frequency bands . because available frequency bands are limited , spectral efficiency becomes a key design criterion of wireless communication systems . full - duplex ( fd ) operation has received great attention since , in theory , it can double spectral efficiency compared to half - duplex ( hd ) operation . in fd operation , provided the self - interference ( si ) from the transmitted signal is properly suppressed , simultaneous transmission and reception are allowed in the same frequency band . to make fd communication viable , si cancellation techniques are essential , and fortunately the recent advancement of si cancellation techniques sheds light on the practical feasibility of fd communication . some experimental results based on advanced si suppression techniques have demonstrated the possibility of fd communication in real environments . in , an adaptive cancellation scheme was proposed to overcome some practical limitations of previous si cancellation schemes . a combination of passive si suppression and active si cancellation was shown to achieve 74 db suppression of si on average . fd communication can be leveraged by beamforming with multiple antennas . there exists residual si due to imperfect si suppression , and beamforming can be exploited to address the residual si . in particular , to maximize the sum rate of a fd system , beamforming has to balance residual si suppression against information transfer . in bi - directional communications , an iterative precoding technique based on sequential convex programming ( scp ) was developed to balance sum rate maximization and si suppression , using appropriate weighting factors . in a fd multiuser network where a fd base station ( bs ) concurrently serves uplink hd users and downlink hd users in the same frequency band , the base station suffers from si and the downlink users are interfered by the transmitted signals of the uplink users , i.e. , co - channel interference ( cci ) . if the downlink transmit power increases to combat cci perceived at the downlink users , si also increases at the base station and thus the uplink sum rate decreases . on the other hand , if the uplink users increase transmit power against si at the base station , cci increases at the downlink users . thus , in the fd multiuser network , transmission strategies at the base station and the uplink users are coupled and have to be designed to address both cci and si simultaneously , which poses a jointly coupled optimization problem . in , as a simplified problem for the fd multiuser system , single - antenna users were considered when the base station performs linear beamforming for the downlink users and non - linear multiuser detection , i.e. , minimum mean - square - error successive interference cancellation ( mmse - sic ) , for the uplink users . then , the downlink beamformer design problem was formulated as a rank-1 constrained optimization problem and suboptimal solutions were presented based on rank relaxation and approximations . multiple antennas in full - duplex multiuser systems ( fd mu - mimo ) were studied in . uplink beamformer design and downlink power allocation addressing si at the base station ( bs ) were studied in . however , cci was discarded and zero - forcing ( zf ) downlink beamforming at the bs was assumed for simplicity , despite its suboptimality .
in , both cci and si were considered but , with the assumption of large - scale mimo , zf downlink beamforming was simply used for si suppression while treating cci as background noise . the authors of addressed si and cci simultaneously based on scp algorithms in linear beamformer design . in this paper , when the uplink users and the downlink users have multiple antennas in the fd multiuser system , we explore novel transmission strategies at the base station and the uplink users with the aim of maximizing the sum rate of the uplink and downlink users . to this end , we formulate a joint beamformer design problem to maximize the sum rate , i.e. , a joint transmit covariance matrix design problem , modeling the coupled effects between si and cci . however , the optimization problem is non - convex and it is not easy to find the optimal transmit covariance matrices due to the coupled si and cci . to circumvent this difficulty , we exploit the duality between the broadcast channel ( bc ) and the multiple access channel ( mac ) and reformulate the sum rate maximization problem as an equivalent optimization problem for the mac . although the reformulated problem is still non - convex , the objective is represented as a difference of two concave functions and then , using the minorization maximization ( mm ) algorithm based on an affine approximation , the objective can be approximated as a concave function . accordingly , we solve the problem with disciplined convex programming ( dcp ) using the cvx program . in addition , without any approximation of the objective function , and thus without performance degradation , we develop an alternating iterative water - filling ( iwf ) algorithm to solve the non - convex problem . the proposed algorithm is based on the iterative water - filling algorithm which is known to provide the optimal transmit covariance matrices for the mac . the proposed algorithms ensure fast convergence and low computational complexity . compared to and , which address si and cci simultaneously as in our paper , the design approach differs ; in and , uplink and downlink linear beamformers were developed with the scamp algorithm and the cvx solver , both of which are based on the scp approach . on the contrary , our non - linear beamformer design is based on dirty paper coding in the downlink and mmse - sic in the uplink , which are known as capacity - achieving schemes in downlink mu - mimo and uplink mu - mimo , respectively , and the transmit covariance matrices are found with the proposed algorithms based on the mac - bc duality . the proposed mm algorithm differs from the algorithms in and in the respect that it is used for non - linear beamforming , although it is also based on the scp approach . moreover , since dc - based algorithms using affine approximations can suffer from information loss , we propose the alternating iterative water - filling algorithm enabled by the mac - bc duality , which does not rely on the cvx solver and its long computation time . the remainder of this paper is organized as follows . in section [ sec : sys_model ] , we describe the system and channel model and then formulate the design problem . the proposed iterative beamforming algorithms are presented in section [ sec : bf_design ] . numerical results are presented in section [ sec : results ] . finally , conclusions are drawn in section [ sec : conclusion ] . we consider a single cell fd mu - mimo system as shown in fig . [ fd_model ] , where a fd base station ( bs ) with antennas concurrently serves uplink users and downlink users with antennas each .
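before developing the system model in detail , it may help to make the minorization step mentioned above concrete . the toy sketch below ( our own scalar construction , not the matrix - valued covariance problem treated in this paper ) maximizes a two - user rate written as a difference of two concave functions : the subtracted concave term is replaced by its first - order expansion at the current iterate , which yields a concave minorant that is maximized with scipy's slsqp solver . the interference gain a , the power budget P and all function names are illustrative assumptions .

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-user power allocation: user 1's transmission interferes with user 2.
# Objective to maximize: f(p) = log(1+p1) + log(1 + p2/(1 + a*p1)) = g(p) - h(p),
# with g(p) = log(1+p1) + log(1 + a*p1 + p2)  (concave)
#      h(p) = log(1 + a*p1)                    (concave)
a, P = 0.5, 10.0          # illustrative interference gain and power budget

def f(p):
    return np.log(1 + p[0]) + np.log(1 + p[1] / (1 + a * p[0]))

def g(p):
    return np.log(1 + p[0]) + np.log(1 + a * p[0] + p[1])

def grad_h(p):
    return np.array([a / (1 + a * p[0]), 0.0])

def mm_step(pk):
    """Maximize the concave minorant g(p) - h(pk) - grad_h(pk).(p - pk)."""
    gk = grad_h(pk)
    surrogate = lambda p: -(g(p) - gk @ p)          # constant terms dropped
    cons = [{'type': 'ineq', 'fun': lambda p: P - p.sum()}]
    res = minimize(surrogate, pk, method='SLSQP',
                   bounds=[(0, P), (0, P)], constraints=cons)
    return res.x

p = np.array([P / 2, P / 2])
for it in range(10):
    p = mm_step(p)
    print(f"iter {it:2d}  p = {np.round(p, 3)}  objective = {f(p):.4f}")
```

because each surrogate touches the true objective at the current point , the printed objective values are non - decreasing , which is the same monotonicity argument that underlies the algorithms developed in the following sections .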
since the bs transmits and receives simultaneously in the same frequency band , the transmitted signal unavoidably interferes with the received signals from the uplink users , which is called self - interference ( si ) .even with recent advanced si cancellation techniques , there exists residual si due to imperfect si cancellation .moreover , the downlink users suffer from co - channel interference ( cci ) caused by the uplink users .let and be the channel matrices from uplink user to the bs and from the bs to downlink user , respectively , each element of which is an independent and identically distributed ( i.i.d . )complex gaussian random variable with zero mean and unit variance . represents the si channel matrix which typically models the residual si after si cancellation .the channel matrix of cci is given by ^{h} ] and represents the cci from to .the entries of si and cci channel matrices are assumed to be i.i.d .complex gaussian random variables with zero mean and variance and , respectively .the received signal at the bs is represented as where and represent the index of the -th user in the uplink channel and the index of the -th user in the downlink channel , respectively , is the transmitted signal vector of uplink user , is the transmitted signal vector from the bs to the -th downlink user , and is the additive white gaussian noise ( awgn ) vector with zero mean and covariance . on the other hand , the received signal at is represented as where is the awgn vector with zero mean and covariance . defining ^{t} ] where the operation ^{+} ] and is chosen to satisfy the total power constraint . to guarantee convergence of the iterative water - filling algorithm with a total sum power constraint , as in , the covariance matrix at the -th iterationis updated as which ensures the non - decreasing property .finally , using the dual uplink - downlink transformation , the covariance matrices for the dual uplink channel is transformed to the covariance matrices of the original downlink channel . with the covariance matrices obtained at the -th iteration , subproblems in ( [ subprob1 ] ) and ( [ subprob2 ] )are sequentially solved to obtain the covariance matrices at the -th iteration .this procedure is repeated until the sum rate objective converges .the proposed alternating algorithm is summarized in algorithm 2 and its convergence is proved in the following theorem .transform the downlink channel to the dual uplink channel .initialize for and for ; and .transform to , .set .solve ( [ subprob1 ] ) using the water - filling algorithm to find while keeping all other variables fixed .calculate svd of for water - filling while keeping all other variables fixed .solve ( [ subprob2 ] ) using the water - filling algorithm to find .transform to , . 
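the water - filling update invoked at each step of algorithm 2 can be sketched in a few lines . the helper below is a generic single - link routine , with our own variable names and a bisection search for the water level ; the paper's inner updates additionally use the whitened effective channels and the averaging step for the sum power constraint , which are not reproduced here .

```python
import numpy as np

def waterfill(gains, p_total, tol=1e-9):
    """Classic water-filling: maximize sum log(1 + g_i*p_i) s.t. sum p_i <= p_total.
    Returns p_i = max(mu - 1/g_i, 0) with the water level mu found by bisection."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / gains, 0.0).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

def covariance_from_channel(h_eff, p_total):
    """Water-fill over the eigenmodes of the effective channel gram matrix."""
    w, v = np.linalg.eigh(h_eff.conj().T @ h_eff)
    p = waterfill(np.maximum(w.real, 1e-12), p_total)
    return v @ np.diag(p) @ v.conj().T      # transmit covariance, trace <= p_total

rng = np.random.default_rng(0)
h = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
q = covariance_from_channel(h, p_total=10.0)
rate = np.log2(np.linalg.det(np.eye(4) + h @ q @ h.conj().T)).real
print("trace(Q) =", np.trace(q).real, " rate =", rate)
```

the eigenvectors of the channel gram matrix give the transmit directions and the water - filled eigenvalues give the per - mode powers , so the returned covariance satisfies the trace constraint by construction .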
.algorithm 2 converges for any and .algorithm 2 is based on alternating iterations between two subproblems for the uplink channel and the dual uplink channel .thus , convergence is proved by showing that each step of iteration is non - decreasing in sequence : first , ( [ eq:1 ] ) holds since for fixed , the update from to is made by solving subproblem ( [ subprob1 ] ) with the iterative water - filling algorithm , which corresponds to conventional mac and thus the iterative water - filling algorithm ensures a non - decreasing update .second , is due to the sum power iterative water - filling algorithm with an additional averaging step , for given .specifically , define an expanded function as {k_{d}}\right)_{j}\mathbf{\bar{h}}_{d_{j } } \right|,\end{aligned}\]]where for any , {k_{d}})_{j} ] .note that due to the concavity of , we always have as a result , is satisfied for the obtained by finally , since the cyclic coordinated algorithm to find the optimal set for the expanded function maximization problem is equivalent to the iterative water - filling algorithm for the dual uplink , follows from where is due to the concavity of the function and is the set of updated covariance matrices from .in this section , we show the average sum rates achieved by the proposed algorithms and verify their convergence . in addition , we evaluate complexity of the proposed algorithms . for numerical performance evaluations ,we consider a single cell environment constituted by a fd bs with antennas , hd uplink users , and hd downlink users .the uplink and downlink users have antennas each .we assume and unless otherwise stated .the bs and uplink user transmit with power and , respectively , when each uplink user has the same transmit power as for .each element of the uplink and downlink channel matrices is realized as i.i.d complex gaussian random variables with zero mean and variance for downlink and for uplink . also , the elements of the interference channel matrices are i.i.d .complex gaussian random variables with zero mean and variance of for the residual si and for the cci . according to , sibefore cancellation is almost the same as the transmit power at bs but with cancellation si can be suppressed up to approximately 110 db .so we set the si cancellation capability to be db .then , given bs transmit power of 27 dbm , the residual si is assumed to be . according to the line - of - sight ( los ) path - loss model in given by , the path - loss between the bs and a user is assumed to be 91 db which corresponds to the distance of about km .thus , for and , we assume dbm and dbm .the path - loss from an uplink user to a downlink user follows the non - line - of - sight ( nlos ) path - loss model in given by . for andthe distance of km , the cci channel path - loss is assumed to be 97 db and so dbm . to evaluate the effect of interference , the ratio of the received interference power to the desired signal power is defined as and for si at the bs and for cci at the downlink users , respectively . and ,width=453 ] for given assumptions , we evaluate convergence rate of the proposed algorithms in fig .[ fig : converge ] .both algorithms converge within 3 or 4 iterations , although each algorithm has different computational complexity for each iteration .[ rate_interf1 ] shows the average sum rate versus the ratio of the received interference power to the desired signal power . 
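the average sum rates reported in these figures are log - det quantities ; for instance , once the residual si is folded into the noise covariance , the uplink sum rate achieved with mmse - sic for a fixed set of covariances reduces to a single expression of the form log2 det ( i + r^{-1} sum_k h_k q_k h_k^h ) . the snippet below evaluates one random realization under our own illustrative dimensions , power levels and si scaling , which are not the simulation parameters of this section .

```python
import numpy as np

def uplink_sum_rate(h_ul, q_ul, h_si, q_dl, noise_var=1.0):
    """Uplink sum rate log2 det(I + R^{-1} sum_k H_k Q_k H_k^H), where the
    interference-plus-noise covariance R contains the residual self-interference
    H_si Q_dl H_si^H.  Generic MMSE-SIC sum rate with colored noise."""
    m = h_ul[0].shape[0]
    r = noise_var * np.eye(m) + h_si @ q_dl @ h_si.conj().T
    s = sum(h @ q @ h.conj().T for h, q in zip(h_ul, q_ul))
    return np.log2(np.linalg.det(np.eye(m) + np.linalg.solve(r, s))).real

rng = np.random.default_rng(1)
cplx = lambda *sh: (rng.standard_normal(sh) + 1j * rng.standard_normal(sh)) / np.sqrt(2)

m, n, k_u = 4, 2, 2                        # bs antennas, user antennas, uplink users
h_ul = [cplx(m, n) for _ in range(k_u)]    # uplink channels
q_ul = [np.eye(n) * 5.0 for _ in range(k_u)]   # per-user covariances, trace 10 each
h_si = 1e-3 * cplx(m, m)                   # residual self-interference channel
q_dl = np.eye(m) * 2.5                     # aggregate downlink covariance, trace 10

print("uplink sum rate [bit/s/Hz]:", uplink_sum_rate(h_ul, q_ul, h_si, q_dl))
```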
for fixed si cancellation capability of ,residual si is calculated according to the transmit power of bs , , when dbm and dbm in fig .[ fig : rate_p_d ] . also , when the path - loss between an uplink user and a downlink user is fixed as , cci varies according to the transmit power of the uplink user when and dbm in fig .[ fig : rate_p_u ] .as the transmit power grows , the sum rates of the proposed algorithms increase and the alternating iwf outperforms the mm approach . to clearly capture the effect of each interference , fig .[ fig : rate_si ] exhibits the sum rate versus according to si cancellation capability ( ) when cci is fixed to be dbm , dbm , and dbm .[ fig : rate_cci ] shows the sum rate versus defined by path - loss between the uplink and downlink users ( ) when si is fixed to be dbm , dbm , and dbm . " for a reference , we also plot the achievable total sum rate of bc and mac in a hd system with the same numbers of antennas and users as the fd system . in the hd system ,the iterative water - filling based algorithm is used for determining the optimal transmit covariance matrices , under individual power constraints for mac and a sum power constraint for bc .the transmit times for bc and mac are assumed to be the same and the total sum rate of the mac and bc accounts for the rate loss due to the duplex duty cycle . in fig .[ fig : rate_si ] , as si grows , the sum rates of both the algorithms decrease since the increased si degrades the uplink sum rate .both the algorithms outperform the hd system , owing to the well balanced beamforming , in the presence of cci ( i.e. , ) . in fig .[ fig : rate_cci ] , as cci increases when , the sum rate reduces since cci directly decreases the downlink sum rate .[ fig : rate_si ] and [ fig : rate_cci ] show that the alternating iwf algorithm outperforms the mm algorithm which has a loss from the affine approximation . to analyze computational complexity, we first evaluate the number of floating point operations ( flop count ) required per iteration where either a complex multiplication or a complex addition is counted as one flop . in the mm algorithm , the convex optimization problem in ( 19 ) is solved by using cvx solver based on semi - definite programming ( sdp ) .the computational complexity of sdp is obtained by counting the operations of an interior - point method .thus , computational complexity of solving the problem in ( 19 ) is for uplink and for downlink . as a result ,the overall computational complexity of the mm algorithm is per iteration where is the accuracy target .on the other hand , in the alternating iwf algorithm , we can count the exact number of flops at each step . in the uplink ,computational complexity is dominated by calculation of an effective channel matrix , eigenvalue decomposition of , and calculation of the covariance matrix for . in the downlink , besides the dominant operations in the uplink , the operation to update the covariance matrix from to has to be additionally taken into account .consequently , computational complexity for solving the problem in ( 21 ) and ( 23 ) is and , respectively .therefore , the total computational complexity of the alternating iwf algorithm is approximately at each iteration .we developed two iterative algorithms to solve the non - convex problem of sum rate maximization in full - duplex multiuser mimo systems . 
using the mac - bc duality , we first reformulated the sum rate maximization problem into an equivalent sum rate maximization for the mac to properly address the coupled effects of the mac and bc due to self - interference and co - channel interference . although the equivalent problem was still non - convex , the transformed objective function allowed us to apply the mm algorithm , which makes an affine approximation to the difference of concave functions . to avoid the performance degradation resulting from the affine approximation , we also devised an alternating algorithm based on iterative water - filling without any approximation to the objective function . the two proposed algorithms were shown to address the coupled design issue well and to properly balance the individual sum rates of the mac and bc to maximize the total sum rate . it was also shown that the two proposed algorithms ensure fast convergence and low computational complexity . m. duarte and a. sabharwal , `` full - duplex wireless communications using off - the - shelf radios : feasibility and first results , '' in _ proc . asilomar conf . signals , syst . comput . _ , nov . 2010 , pp . 1558 - 1562 . m. jain , j. choi , t. m. kim , d. bharadia , s. seth , k. srinivasan , p. levis , s. katti , and p. sinha , `` practical , real - time full duplex wireless , '' in _ proc . int . conf . mobile comput . netw . _ , 2011 , pp . 301 - 312 . a. sabharwal , p. schniter , d. guo , d. w. bliss , s. rangarajan , and r. wichman , `` in - band full - duplex wireless : challenges and opportunities , '' _ ieee journal on selected areas in commun . _ , vol . 32 , no . 9 , pp . 1637 - 1652 , sep . 2014 . s. vishwanath , n. jindal , and a. goldsmith , `` duality , achievable rates , and sum - rate capacity of gaussian mimo broadcast channels , '' _ ieee trans . inf . theory _ , vol . 49 , no . 10 , pp . 2658 - 2668 , oct . 2003 . n. jindal , w. rhee , s. vishwanath , s. a. jafar , and a. goldsmith , `` sum power iterative water - filling for multi - antenna gaussian broadcast channels , '' _ ieee trans . inf . theory _ , vol . 51 , no . 4 , pp . 1570 - 1580 , apr . 2005 .
|
we solve a sum rate maximization problem for full - duplex ( fd ) multiuser multiple - input multiple - output ( mu - mimo ) systems . since additional self - interference ( si ) in the uplink channel and co - channel interference ( cci ) in the downlink channel are coupled in fd communication , the downlink and uplink multiuser beamforming vectors are required to be jointly designed . however , the joint optimization problem is non - convex and hard to solve due to the coupled effect . to properly address the coupled design issue , we reformulate the problem into an equivalent uplink channel problem , using the uplink and downlink channel duality known as the mac - bc duality . then , using a minorization maximization ( mm ) algorithm based on an affine approximation , we obtain a solution for the reformulated problem . in addition , without any approximation , and thus without performance degradation , we develop an alternating algorithm based on iterative water - filling ( iwf ) to solve the non - convex problem . the proposed algorithms guarantee fast convergence and low computational complexity . full - duplex , multiuser , mimo , beamforming , duality , difference of concave functions , iterative water - filling .
|
linear system stability can be formulated semialgebraically in the space of coefficients of the characteristic polynomial .the region of stability is generally _ nonconvex _ in this space , and this is a major obstacle when solving fixed - order and/or robust controller design problems . using the hermite stability criterion , these problems can be formulated as parametrized polynomial matrix inequalities ( pmis ) where parameters account for uncertainties and the decision variables are controller coefficients .recent results on real algebraic geometry and generalized problems of moments can be used to build up a hierarchy of convex linear matrix inequality ( lmi ) _outer _ approximations of the region of stability , with asymptotic convergence to its convex hull , see e.g. for a software implementation and examples , and see for an application to pmi problems arising from static output feedback design . if outer approximations of nonconvex semialgebraic sets can be readily constructed with these lmi relaxations , _ inner _ approximations are much harder to obtain. however , for controller design purposes , inner approximations are essential since they correspond to sufficient conditions and hence guarantees of stability or robust stability . in the robust systems control literature ,convex inner approximations of the stability region have been proposed in the form of polytopes , ellipsoids or more general lmi regions derived from polynomial positivity conditions .interval analysis can also be used in this context , see e.g. . in this paperwe provide a numerical scheme for approximating from inside the feasible set of a parametrized pmi ( for some matrix polynomial ) , that is , the set of points such that for _ all _ values of the parameter in some specified domain ( assumed to be a basic compact semialgebraic set ) .this includes as a special case the approximation of the stability region ( and the robust stability region ) of linear systems .the particular case where is affine in covers parametrized lmis with many applications in robust control , as surveyed e.g. in . given a compact set containing , this numerical scheme consists of building up a sequence of inner approximations , , which fulfils two essential conditions : 1 .the approximation converges in a _ well - defined analytic _sense ; 2 .each set is defined in a _simple _ manner , as a superlevel set of a single polynomial . in our mind , this feature is essential for a successful implementation in practical applications . more precisely , we provide a hierarchy of inner approximations of , where each is a basic semi - algebraic set for some polynomial of degree .the vector of coefficients of the polynomial is an optimal solution of an lmi problem .when increases , the convergence of to is very strong .indeed , the lebesgue volume of converges to the lebesgue volume of .in fact , on any ( a priori fixed ) compact set , the sequence converges for the -norm on to the function where is the minimum eigenvalue of the matrix - polynomial associated with the pmi .consequently , in ( lebesgue ) measure on , and almost everywhere and almost uniformly on , for a subsequence .in addition , if one defines the piecewise polynomial , then almost everywhere , almost uniformly and in ( lebesgue ) measure on .in addition , we can easily enforce that the inner approximations are nested and/or convex .of course , for the latter convex approximations , convergence to is lost if is not convex . 
however , on the other hand , having a convex inner approximation of may reveal to be very useful , e.g. , for optimization purposes . on the practical and computational sides ,the quality of the approximation of depends heavily on the chosen set on which to make the approximation of the function .the smaller , the better the approximation . in particular, it is worth emphasizing that when the set to approximate is the stability or robust stability region of a linear system , then its particular geometry can be exploited to construct a tight bounding set .therefore , a good approximation of is obtained significantly faster than with an arbitrary set containing .finally , let us insist that the main goal of the paper is to show that it is possible to provide a tight and explicit inner approximation with no quantifier , of nonconvex feasible sets described with quantifiers .then this new feasible set can be used for optimization purposes and we are facing two cases : * the convex case : if and are convex polynomials , and then the optimization problem is polynomially solvable .indeed , functions , , are polynomially computable , of polynomial growth , and the feasible set is polynomially bounded .then polynomial solvability of the problem follows from ( * ? ? ?* theorem 5.3.1 ) . *the nonconvex case : if is not convex then notice that firstly we still have an optimization problem with no quantifier , a nontrivial improvement . secondly we are now faced with an polynomial optimization problem with a single polynomial constraint and possibly bound constraints .one may then apply the hierarchy of convex lmi relaxations described in ( * ? ? ?* chapter 5 ) .of course , in general , polynomial optimization is np - hard .however , if the size of the problem is relatively small and the degree of is small , practice seems to reveal that the problem is solved exactly with few relaxations in many cases , see .in addition , if some structured sparsity in the data is present then one may even solve problems of potentially large size by using an appropriate sparse version of these lmi relaxations as described in , see also .the outline of the paper is as follows . in section [ problem ] we formally state the problem to be solved . in section [ lmi ]we describe our hierarchy of inner approximations . in section [ control ] , we show that the specific geometry of the stability region can be exploited , as illustrated on several standard problems of robust control .the final section collects technical results and the proofs .let ] be the vector space of real polynomials of degree at most .similarly , let \subset\r[x] ] its subcone of sos polynomials of degree at most .denote by the space of real symmetric matrices . for a given matrix , the notation means that is positive semidefinite , i.e. , all its eigenvalues are real and nonnegative .let \to\s^m ] , and is a given symmetric polynomial matrix of size .as is compact , without loss of generality we assume that for some , , where is sufficiently large .we also assume that is bounded and that we are given a compact set with explicitly known moments , , of the lebesgue measure on , i.e. where .typical choices for are a box or a ball . to fix ideas , let for some polynomials ] , such that in addition , we may want the sequence of inner approximations to satisfy additional nesting or convexity conditions . 
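before stating the nested and convex variants formally , note that the only input data required from the bounding set b are the moments of the lebesgue measure , which are available in closed form when b is a box ( or a ball ) . a minimal sketch for the box case is given below , with our own function and variable names .

```python
import numpy as np
from itertools import product

def box_moments(a, b, d):
    """Moments y_beta = int_B x^beta dx of the Lebesgue measure on the box
    B = [a_1,b_1] x ... x [a_n,b_n], for all multi-indices |beta| <= d.
    Each coordinate factorizes:  int_{a_i}^{b_i} x^k dx = (b_i^(k+1) - a_i^(k+1))/(k+1)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    moments = {}
    for beta in product(range(d + 1), repeat=n):
        if sum(beta) <= d:
            moments[beta] = np.prod([(b[i]**(k + 1) - a[i]**(k + 1)) / (k + 1)
                                     for i, k in enumerate(beta)])
    return moments

# example: the box [-1,1]^2, moments up to total degree 4
y = box_moments([-1, -1], [1, 1], 4)
print(y[(0, 0)], y[(2, 0)], y[(2, 2)])   # volume 4, then 4/3 and 4/9
```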
*( nested inner approximations)*[nest ] solve problem [ inner ] with the additional constraint * ( convex inner approximations)*[convex ] given set , build up a sequence of nested basic closed convex semialgebraic sets , for some ] which define the uncertain set in ( [ setu ] ) , let denote the euclidean unit sphere of and let be the function : as the robust minimum eigenvalue function of .function is continuous but not necessarily differentiable .it allows to define set alternatively as the superlevel set let ] , {2d_r} ] , , and {d_{t_j}} ] be the linear functional .\ ] ] with , the moment matrix of order associated with is the real symmetric matrix with rows and columns indexed in , and defined by a sequence has a representing measure if there exists a finite borel measure on , such that for every . with as above and ] and a detailed proof of lemma [ lemma1 ] can be found in [ proof - lemma1 ] . in particularwe prove that there is no duality gap between sos sdp problem ( [ sdp ] ) and moment sdp problem ( [ sdp * ] ) , i.e. . for every , let be the piecewise polynomial we are now in position to prove our main result .[ thmain ] let {2d} ] be an optimal solution of sdp problem ( [ sdp ] ) , let be the piecewise polynomial defined in ( [ piecewise ] ) , and let then that is , sequence solves problem [ inner ] and sequence solves problem [ nest ] if piecewise polynomials are allowed .a proof can be found in [ proof - coro1 ] .we now consider problem [ nest ] where is constrained to be a polynomial instead of a piecewise polynomial . we need to slightly modify sdp problem ( [ sdp ] ) .suppose that at step in the hierarchy we have already obtained an optimal solution {2d-2} ] and {d - d_{b_j}} ] be an optimal solution of sdp problem ( [ sdp ] ) with the additional constraint ( [ add1 ] ) and let be as in ( [ coro1 - 0 ] ) for . then the sequence solves problem [ nest ] . for a proof see [ proof - coro2 ] . finally , for {2d} ] , {d - d_{b_j}} ] .let {2d} ] be the ideal generated by the polynomial so that the real variety associated with is just the unit sphere .it turns out that the real radical is zariski dense in so that . but being irreducible , is a prime ideal and so . ] of is itself , that is , ( where for , denotes the vanishing ideal ) . andafter embedding in ] .moreover , for every , {d-2}}\,(1-v^tv),\ ] ] for some real coefficients , and some {d-2} ] .so because of the constraints , the semidefinite program ( [ sdp * ] ) is equivalent to the semidefinite program : where the smaller moment matrix is the submatrix of obtained by looking only at rows and columns indexed in the monomial basis , , instead of .similarly , the smaller localizing matrix is the submatrix of obtained by looking only at rows and columns indexed in the monomial basis , , instead of ; and similarly for .indeed , in view of ( [ substitute ] ) and using , every column of associated with is a linear combination of columns associated with . andsimilary for and .hence , , and for all , . next , let be the sequence of moments of the ( product ) measure uniformly distributed on , and scaled so that for all ( with the rotation invariant measure on ) . therefore , for every , moreover , for every and importantly , , and see why , suppose for instance that for some vector .this means that for some non trivial polynomial /j ] .therefore is a strictly feasible solution of ( [ sdp2 * ] ) and so slater s condition holds for ( [ sdp2 * ] ) .denote by ] . 
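the moment matrix defined above has a simple explicit construction : its rows and columns are indexed by the monomials of degree at most d , and its ( alpha , beta ) entry is y_{alpha+beta} . the snippet below ( our own illustration , not part of the paper's implementation ) builds it for the lebesgue moments of the box [ -1 , 1 ]^2 and checks that it is positive semidefinite , as it must be for any sequence admitting a representing measure .

```python
import numpy as np
from itertools import product

def monomials(n, d):
    """Multi-indices alpha in N^n with |alpha| <= d, in a fixed order."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

def moment_matrix(y, n, d):
    """Moment matrix M_d(y): entry (alpha, beta) = y_{alpha+beta}."""
    basis = monomials(n, d)
    return np.array([[y[tuple(ai + bi for ai, bi in zip(alpha, beta))]
                      for beta in basis] for alpha in basis])

# moments of the Lebesgue measure on the box [-1,1]^2, up to degree 2*d
n, d = 2, 2
y = {beta: np.prod([(1.0**(k + 1) - (-1.0)**(k + 1)) / (k + 1) for k in beta])
     for beta in monomials(n, 2 * d)}

M = moment_matrix(y, n, d)
print(M.shape)                                          # (6, 6) for n=2, d=2
print("min eigenvalue:", np.linalg.eigvalsh(M).min())   # nonnegative: y comes from a measure
```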
as is a strictly feasible solution of the semidefinite program ( [ sdp2 * ] ) , by a standard result of convex optimization, there is no duality gap between ( [ sdp2 * ] ) and its dual \mathrm{s.t . } & v^t p(x , u ) v - g(x ) \,=\ , r(x , u , v ) ( 1-v^t v ) \\ & + \displaystyle\sum_{i=0}^{n_a } s_i(x , u , v ) a_i(u)+\displaystyle\sum_{j=1}^{n_b } t_j(x , u , v ) b_j(x ) \quad\forall ( x , u , v)\\ \end{array}\ ] ] where now the decision variables are coefficients of polynomials {2d} ] , and coefficients of sos polynomials {d_{a_i}} ] , .that is , and so because .if then ( [ sdp2 ] ) is guaranteed to have an optimal solution . but observe that such an optimal solution is also feasible in ( [ sdp ] ) , and so having value , is also an optimal solution of ( [ sdp ] ) .it remains to prove that is bounded . for any feasible solution of ( [ sdp ] ) , , and for all , , follows from , and , where and ; see the comments after ( [ setu ] ) and ( [ momb ] ) .then by ( * ? ? ? * lemma 4.3 ) , one obtains , for all , which shows that the feasible set of ( [ sdp * ] ) is compact .hence ( [ sdp * ] ) has an optimal solution and is finite ; therefore its dual ( [ sdp ] ) also has an optimal solution , the desired result .\(a ) let and consider the infinite - dimensional optimization problem where is the space of finite borel measures on .problem ( [ momp ] ) has an optimal solution .indeed , because for every , ; and so for every feasible solution , because for all and hence the marginal of on is the lebesgue measure on . on the other hand , observe that for every , for some . therefore , let be the borel measure concentrated on for all , i.e. where denotes the indicator function of set and denotes the borel -algebra of subsets of .then is feasible for problem ( [ momp ] ) with value which proves that .next , being continuous on compact set , by the stone - weierstrass theorem , for every there exists a polynomial ] , and ] such that .hence for all and all , and so the polynomial satisfies and for all . again , by putinar s positivstellensatz , see e.g ( * ? ? ?* section 2.5 ) , is feasible for ( [ sdp ] ) with the additional constraint ( [ add1 ] ) , provided that is sufficiently large , and with associated value remember that ] ( with same name of simplicity ) we also have .write for some polynomials ], it vanishes on and as , , that is , for some polynomial {d} ] and /j ] .but since ( [ reduction3 ] ) holds for every , we obtain and as has nonempty interior , i.e. , , which proves the desired result that .
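as a closing illustration , the robust minimum eigenvalue function used throughout the above arguments can also be probed by brute force : grid the uncertainty set , take the smallest eigenvalue of the pmi data at each sample , and keep the minimum . the two - by - two example below is entirely of our own making and is of course not a substitute for the sos / moment certificates developed in this paper ; it only helps visualize the set that the hierarchy approximates from inside .

```python
import numpy as np

def robust_min_eig(A, x, u_samples):
    """lambda_bar(x) = min over sampled u of the smallest eigenvalue of A(x,u).
    A brute-force grid proxy for the robust minimum eigenvalue function."""
    return min(np.linalg.eigvalsh(A(x, u)).min() for u in u_samples)

# toy parametrized symmetric matrix: two decision variables, one parameter
def A(x, u):
    return np.array([[1.0 + x[0] + u * x[1], x[0] * x[1]],
                     [x[0] * x[1],           1.0 - x[0] + u]])

u_grid = np.linspace(-0.5, 0.5, 51)          # uncertainty set U = [-0.5, 0.5]

# scan a grid of decision variables and mark the robustly feasible points
xs = np.linspace(-1, 1, 81)
feasible = [(x1, x2) for x1 in xs for x2 in xs
            if robust_min_eig(A, (x1, x2), u_grid) >= 0.0]
print(f"{len(feasible)} of {len(xs)**2} grid points are robustly feasible")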
|
following a polynomial approach , many robust fixed - order controller design problems can be formulated as optimization problems whose set of feasible solutions is modelled by parametrized polynomial matrix inequalities ( pmi ) . these feasibility sets are typically nonconvex . given a parametrized pmi set , we provide a hierarchy of linear matrix inequality ( lmi ) problems whose optimal solutions generate inner approximations modelled by a single polynomial superlevel set . those inner approximations converge in a well - defined analytic sense to the nonconvex original feasible set , with asymptotically vanishing conservatism . one may also require the inner approximations in the hierarchy to be nested or convex . in the latter case they no longer converge to the feasible set , but they can be used in a convex optimization framework at the price of some conservatism . finally , we show that the specific geometry of nonconvex polynomial stability regions can be exploited to improve convergence of the hierarchy of inner approximations . keywords : polynomial matrix inequality , linear matrix inequality , robust optimization , robust fixed - order controller design , moments , positive polynomials .
|
one hundred years ago , albert einstein published his general theory of relativity ( general relativity or gr ) , which describes gravity as the warping of space and time due to the presence of matter and energy , and forever changed our conceptual understanding of one of nature s fundamental forces . prior to gr ,isaac newton s theory of gravitation , which describes gravity as the force of one massive object on another , reigned supreme and unchallenged for nearly a quarter of a millennium . with the advent of gr , newtonian gravitation was reduced to that of an approximation to a more fundamental underlying theory of nature and must be abandoned altogether when probing phenomena near highly relativistic objects such as black holes and neutron stars .nevertheless , newtonian gravitation is highly successful in accounting for the approximate motion of the planets in our solar system and allows for a theoretical derivation of kepler s three empirical laws . according to newtonian gravitation ( andkepler s first law of planetary motion ) the planets move in _ closed _ elliptical orbits with the sun residing at one focus. for these orbits , the translating body finds itself at precisely the same radial distance after the polar angle has advanced by and will therefore retrace its preceding elliptical motion indefinitely .this newtonian prediction of closed elliptical orbits holds true only in the absence of other massive objects as additional orbiting bodies generate perturbations to the motion . additionally , when the theoretical framework of gr is employed , the closed or _stationary _ elliptical orbits of newtonian gravitation are replaced with elliptical - like orbits whose perihelia and aphelia _ precess _ , even in the absence of additional massive objects. before einstein s gr , the small but observable precession of mercury s perihelion was partially unaccounted for and offered the first real test of general relativity .in fact , when the precession of earth s equinoxes and the gravitational effects of the other planets tugging on mercury are accounted for , gr claims exactly the additional measured precession .the predicted value of mercury s perihelion precession due to general relativistic effects is incredibly tiny , equating to an angular shift per orbit of less than one part in ten million . according to general relativity, a massive object causes the spacetime in its vicinity to warp or curve simply due to its presence .hence , the planets move in their respective orbits not due to the gravitational force of the sun acting on them , but rather due to the fact that the planets are moving in the warped four - dimensional spacetime about the sun .in a typical introductory undergraduate gr course , the spacetime external to a non - rotating , spherically symmetric massive object is often extensively studied as it offers a simple , exact solution to the field equations of general relativity . to gain conceptual insight into this non - euclidean spacetime , an embedding diagram is often constructed . 
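as an aside , the size of the relativistic shift quoted above for mercury is easy to verify from the standard first - order formula for the perihelion advance per orbit , delta_phi = 6 pi G M / ( c^2 a ( 1 - e^2 ) ) . the short script below evaluates it with standard published values for the solar mass parameter and mercury's orbital elements ( constants supplied by us , not taken from this paper ) and converts the result to arcseconds per century .

```python
# GR perihelion advance per orbit: delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)).
# Standard constants and Mercury's orbital elements (approximate published values).
import math

GM_sun = 1.32712e20        # m^3 s^-2
c      = 2.99792458e8      # m / s
a      = 5.791e10          # semi-major axis of Mercury, m
e      = 0.2056            # eccentricity of Mercury
T_days = 87.969            # orbital period, days

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))      # radians per orbit
orbits_per_century = 100 * 365.25 / T_days
arcsec_per_orbit = dphi * 180 / math.pi * 3600

print(f"shift per orbit: {dphi:.3e} rad  (= {dphi/(2*math.pi):.2e} of a full revolution)")
print(f"about {arcsec_per_orbit * orbits_per_century:.1f} arcsec per century")
```

the shift per orbit is about 8 x 10^-8 of a revolution , i.e. , less than one part in ten million , and accumulates to roughly 43 arcseconds per century . with this order of magnitude in mind , we return to the construction of the embedding diagram .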
by examining the spacetime geometry external to the massive object at one moment in time , with the spherical polar angle set to , a two dimensional ( 2d ) equatorial slice " of the three dimensional space is extracted .a curved two dimensional , cylindrically symmetric surface embedded in a flat three dimensional space is then constructed with the same spatial geometry as this two dimensional spacetime slice ; this is the embedding diagram .these embedding diagrams prove incredibly useful in visualizing spacetime curvature and serve as a conceptual tool .when free particle orbits , or geodesics , about non - rotating , spherically symmetric massive object are then later studied , a conceptual analogy of a marble rolling on a warped two - dimensional surface is often drawn to represent the orbital motion .although this analogy may be useful for the beginning student of gr in visualizing particle orbits about massive objects in curved spacetime , it is nt precise and fundamentally differs from the analogy of embedding diagrams discussed above .although the function that describes the shape of the cylindrically symmetric surface of the embedding diagram is of a relatively simple form , there exists no 2d cylindrically symmetric surface residing in a uniform gravitational field that can generate the exact newtonian orbits of planetary motion , save the special case of circular orbits. this null result holds when considering a fully general relativistic treatment of orbital motion about a non - rotating spherically symmetric massive object. for an object rolling or sliding on the interior of an inverted cone with a slope of , it has been shown that the stationary elliptical orbits of planetary motion can occur when the orbits are nearly circular. later , elliptical - like orbits with small eccentricities were considered on 2d cylindrically symmetric surfaces where a perturbative solution was constructed. there it was found that when the surface s shape takes the form of a power law , precessing elliptical - like orbits with small eccentricities can only occur for certain powers and stationary elliptical orbits with small eccentricities can only occur for certain powers and for certain radii on the surface .although we show that no such surface exists that can _ exactly _ reproduce the arbitrary bound orbits of a spherically - symmetric potential , cylindrically symmetric surfaces _ do _ exist that yield the stationary and precessing elliptical - like orbits with small eccentricities of newtonian gravitation and general relativity , respectively .first , we arrive at a general expression describing the slope of a 2d cylindrically symmetric surface that yields the elliptical orbits of newtonian gravitation for small eccentricities .the slope of this surface reduces to the special case of white s when the arbitrary constant of integration is set equal to zero. we then extend our study to general relativity and arrive at an expression for the slope of a 2d surface that generates the precessing elliptical orbits of gr for small eccentricities . for the sake of brevitywe symbolically represent _ stationary elliptical - like orbits of newtonian gravitation with small eccentricities _ as n orbits and _ precessing elliptical - like orbits of general relativity with small eccentricities _ as gr orbits ; these phrases appear frequently throughout the paper and warrant this use of shorthand .this paper is outlined as follows . 
in sec .[ ng ] , we derive the equations of motion for an object orbiting in a generic spherically symmetric potential .we examine the case of newtonian gravitation and present the solution for the respective particle orbits while introducing the notation contained in the sections that follow . in sec .[ elliptical - like ] , we consider the rolling or sliding motion of an object constrained to reside on a 2d cylindrically symmetric surface and obtain the corresponding orbital equation of motion . we show that this equation is fundamentally different than the equation of motion for an object subjected to a generic 3d spherically symmetric potential and that there exists no 2d cylindrically symmetric surface that can make these equations coincide for any spherically symmetric potential , except for the special case of circular orbits .for elliptical - like orbits with small eccentricities on the cylindrically symmetric surface , we solve the orbital equation of motion perturbatively .a differential equation , evaluated at the characteristic radius of the orbital motion , emerges and relates the precession parameter of the orbit to the slope of the respective surface . by demanding that the elliptical - like orbits on the surface have a constant precession parameter and by relaxing the condition that the aforementioned differential equation be evaluated at the characteristic radius of the motion , we solve the corresponding differential equation and find the slope of the surface that yields these elliptical - like orbits with small eccentricities for all radii on the surface .we then examine the solution for the slope that generates the n orbits . in sec .[ gr ] , we extend our study to general relativity . by demanding that the elliptical - like orbits on the 2d surface now mimic those of gr , we present the slope of the surface that yields these gr orbits for all radii on the surface .we then compare these two families of surfaces with the intent of offering insight into the aforementioned theories of gravitation .we begin this manuscript with a review of central - force motion and , in particular , of newtonian gravitation .this treatment is not only instructive but also serves to introduce the notation contained within the sections that follow .the lagrangian describing an orbiting body of mass in a generic spherically symmetric potential is of the form [ lagrangiangen ] l = m - u(r ) , where are spherical polar coordinates and a dot indicates a time - derivative . as the angular momentum of an orbiting body in a spherically symmetric potential is conserved , the motion of the object , without loss of generality , can be chosen to reside in the equatorial plane .this is done by setting in eq . where the lagrangian reduces to one dependent only on the two plane polar coordinates and .it is noted that conservation of angular momentum is a general result of central - force motion and , for the case of planetary motion in newtonian gravitation , equates to kepler s second law. the lagrangian of eq . generates two equations of motion , which take the form -+&=&0[teq ] + & = & , [ angmom ] where is a constant of the motion and equates to the conserved angular momentum per unit mass .here we are interested in arriving at an expression for the radial distance of the orbiting body in terms of the azimuthal angle , . using the chain rule and eq .( [ angmom ] ) , a differential operator of the form [ oper ] = can be constructed and used to transform eq . 
into the form [ phieq ] -()^2-r+=0 .equation is the equation of motion describing the radial distance of an orbiting body as a function of the azimuthal angle in a generic spherically symmetric potential .this equation will later be compared to the equation of motion for an object rolling or sliding on a cylindrically symmetric 2d surface . for an object of mass residing in a newtonian gravitational potential , the general expression of eq .takes the form [ phinew ] -()^2-r+r^2=0 , where is newton s constant and is the mass of the central object .equation is usually presented in a slightly simpler form in most classical dynamics texts by adopting a change of variables of the form .this substitution is unnecessary in the present treatment and will thus be avoided .it is noted that eqs . and are technically the equations of motion for the reduced mass of the equivalent one - body problem , but approximately describe the motion of the particle of mass in the limit that . in this limit , we can neglect the motion of the central mass treating it as fixed , which is exactly the regime this manuscript is interested in understanding .equation can be solved _exactly _ and yields the well - known conic sections of the form [ rext ] r()= , where is a constant , known historically as the semilatus rectum , and is the eccentricity of the orbit . notice that for the eccentricity , corresponds to the special case of circular motion whereas describes elliptical orbits . also notice that eq .is a solution to eq .when .this relation specifies the necessary angular momentum at a given radius , characterized by , for circular or elliptical motion to occur . for small eccentricities , the exact solution of eq .can be approximated as [ rapp ] r()_ap = r_0(1- ) to first - order in the eccentricity .notice that when eq .is taken as the solution for the radius , corresponds to the average radius of an elliptical orbit with small eccentricity to first - order in the eccentricity .the eccentricities of the planets of our solar system are quite small with an average eccentricity of , hence eq .yields an excellent approximation for the radii of the orbits . as a specific example, mercury has the largest of eccentricities with a value of . comparing the exact and approximate radii given by eqs . andrespectively , we find a maximum percent error of see table [ tab : beta ] near the end of this article for the eccentricities of the solar system planets .the lagrangian describing a body of mass residing on a cylindrically symmetric surface in a uniform gravitational field is of the form [ lagrangian ] l = m+i^2-mgz , where we have included both the translational and rotational kinetic energy of the orbiting body and the gravitational potential energy of the object at a height . in an earlier version of this manuscript ,the rolling and sliding motions of an orbiting body were collectively analyzed and presented . in the analysis the rolling objectwas assumed to roll without slipping and the approximation that the angular velocity of a rolling object is proportional to the translational velocity of the object s center of mass was adopted .there it was found that the general solution for the slope of the surface , which will be presented later in this section , does not depend on the intrinsic spin of the orbiting body .in fact , none of the results presented in this manuscript depend on the intrinsic spin of the orbiting body . 
with the benefit of hindsight and for the sake of simplicity, we forego this more detailed analysis as the results are left unaltered and set the rotational kinetic energy term of eq . to zero .as we are interested in studying the motion of an orbiting body on a cylindrically symmetric surface , we note that there exists an equation of constraint connecting two of the three cylindrical coordinates generically of the form . using this fact and setting the rotational kinetic energy term to zero , eq .can be massaged into the form [ lagrangian2 ] l = m - mgz(r ) , where and we used the fact that via the chain rule .this lagrangian generates two equations of motion , which have been previously derived elsewhere and are of the form ( 1+z^2)+zz^2-+gz&=&0[teqnmotion ] + & = & [ angmom2 ] , where , again , is the conserved angular momentum per unit mass. upon comparison of eqs . and , we note that there exists no spherically symmetric potential , , whose presence exists in the form of a radial derivative in eq ., that can generate the second term of eq . .this is easily seen as the last term of eq .will only generate functions of and not .likewise , there exists no cylindrically symmetric surface , described by , that can generate a term capable of canceling the term in eq . as derivatives of are likewise strictly functions of .the term of eq . arises from motion in the -direction on the cylindrically symmetric surface where . hence , eqs . and can not be made to match for any spherically symmetric potential , , or for any 2d cylindrically symmetric surface , , except for the trivial case of .this disparity implies that there exists _ no _ cylindrically symmetric surface residing in a uniform gravitational field that is capable of reproducing the exact motion of a body orbiting in _ any _ spherically symmetric potential , except for the special case of circular motion .this null result was first pointed out for an orbiting object subjected to the newtonian gravitational potential of a central mass and later for the case of a body orbiting about a non - rotating , spherically symmetric massive object in general relativity. for the special case of circular motion , a 2d cylindrically symmetric surface can yield the orbits of newtonian gravitation , taken here to mean that the surface yields orbits that obey kepler s three laws , when the shape of the surface takes the form of a newtonian gravitation potential well .this result only holds true for circular motion as elliptical - like orbits on this surface precess for all radii and hence violate kepler s first law. see fig . [fig : newpot ] for a surface of revolution plot of vs of a newtonian gravitational potential well .there exists no cylindrically symmetric surface residing in a uniform gravitational field capable of generating the _ exact _ newtonian orbits of planetary motion , except for the case of circular orbits .this can be witnessed explicitly by inserting the solution for newtonian orbits , given by eq ., into eq . . upon this substitution , eqtakes the form [ newsur ] ( 1-z)+(2-z^2)+^2-^3 ^ 3z^2=0 , + where we employed the chain rule and eq . in calculating time derivatives and grouped the resultant expression by powers of the eccentricity .in order for eq . to be an exact solution to eq . for arbitrary eccentricity , each of the four terms of eq . 
must vanish uniquely .although the first two terms can in fact be set equal to zero , which consequently would yield the required angular momentum per unit mass for elliptical or circular motion to occur and a slope for the surface , notice that the and terms will not vanish for any cylindrically symmetric surface .for circular orbits , where the eccentricity is set to zero , eq . represents an exact solution to eq .when the angular momentum per unit mass obeys the expression for a given characteristic radius and slope . using eq . , eq .( [ teqnmotion ] ) can be transformed into an orbital equation of motion of the form [ phieqnmotion ] ( 1+z^2)+(zz-(1+z^2))()^2-r+zr^4=0 .equation ( [ phieqnmotion ] ) is a non - linear equation of motion that can be solved perturbatively . for elliptical - like orbits with small eccentricities , we choose an approximate solution for the radius of the form [ rnu ] r()=_0(1- ( ) ) , + where and are parameters that are to be determined by eq .( [ phieqnmotion ] ) and is the eccentricity of the orbit , which will be treated as small and used as our expansion parameter. as the remainder of this paper deals _ solely _ with elliptical - like orbits of small eccentricity , all subsequent orbits should be understood as having small eccentricities , even if not explicitly stated .the precession parameter , , is understood as [ precession ] , where corresponds to the angular separation between two apocenters of the orbital motion. notice that when , eq .( [ rnu ] ) has the same functional form as that of eq .( [ rapp ] ) and describes a _ stationary _ elliptical - like orbit on the 2d surface where the angular separation between the two apocenters is . when , eq .( [ rnu ] ) describes a _ precessing _ elliptical - like orbit on the 2d surface where .interestingly , orbits about non - rotating , spherically - symmetric massive objects in general relativity are found to have precession parameters that take on values _ less than _ one .this corresponds to an angular separation of , where the apocenter of the elliptical - like orbit marches forward in the azimuthal direction in the orbital plane . inserting eq .( [ rnu ] ) into eq .( [ phieqnmotion ] ) and keeping terms up to first - order in , we find that eq .( [ rnu ] ) represents an approximate solution to the orbital equation of motion when ^2&=&g_0 ^ 3z_0[zeroordern ] + z_0(1+z^2_0)^2&=&3z_0+_0z_0[firstordern ] , where and are radial derivatives of evaluated at . for claritywe note that in arriving at eqs .and , we expanded the radial derivatives of about to first - order in the eccentricity through [ z ] z(r)=z_0+(r-_0)z_0=z_0-_0()z_0 , where we used eq . in calculating the second equality ;the second derivative was expanded in an identical fashion .notice that eq .( [ zeroordern ] ) specifies the angular momentum per unit mass needed at a given radius , characterized by , on a given 2d surface for elliptical or circular motion to occur .equation ( [ firstordern ] ) determines the precession parameter for elliptical - like motion on a given 2d surface whose surface is defined by .the method of this subsection thus far follows closely to that of an earlier work. 
there it was shown that for cylindrically symmetric surfaces with a shape profile of the form [ power ] z(r)- , precessing elliptical - like orbits are only allowed when whereas stationary elliptical - like orbits can only occur for certain radii when .interestingly when , which corresponds to eq .( [ power ] ) taking on a shape profile of a newtonian gravitational potentialwell , there exists _ no _ radii that will yield the stationary elliptical orbits of newtonian gravitation .[ fig : newpot ] for a surface of revolution plot of a newtonian gravitational potential well . in this manuscriptwe wish to find the 2d cylindrically symmetric surfaces that will generate the n and gr orbits .this corresponds to finding the slope of the surface that will generate elliptical - like orbits with a constant precession parameter , , for all radii on the surface . to find these surfaces we relax the condition that the radius of the orbit and the radial derivatives of be evaluated at and solve the differential equation , eq .( [ firstordern ] ) , for the slope of the surface , where we treat the precession parameter as an unspecified constant . using this method, we obtain the general solution for the slope of the surface , which takes the form [ slope ] = ( 1+r^2(3-^2))^-1/2 , where is an arbitrary constant of integration and is a constant precession parameter .equation can be integrated to yield the shape profile for the 2d surface , which takes the form of a hypergeometric function . a 2d cylindrically symmetric surface with a slope given by eq .will generate elliptical - like orbits for a constant precession parameter at all radii on the surface .notice that the slope of eq .becomes complex when .this implies that there exists no cylindrically symmetric surface capable of generating elliptical - like orbits with a constant precession parameter greater than . using eq ., this corresponds to a minimum angular separation between two apocenters of the orbital motion of .as previously mentioned at the beginning of this section , eq . is left unaltered when the intrinsic spin of the orbiting body is included in the analysis .thus , for a 2d surface whose slope is given by eq .( [ slope ] ) , both rolling and sliding objects can undergo elliptical - like orbital motion with a constant precession parameter at all radii on the surface when the right initial conditions are imposed . in this subsection, we present the 2d surfaces that yield stationary elliptical orbits with small eccentricities ( n orbits ) for all radii on the surface . setting in eq ., the slope of the surface takes the form [ nslope ] = ( 1+r^4)^-1/2 . notice that for negative , has a range , where the slope of the surface diverges at .when is set equal to zero , eq .( [ nslope ] ) reduces to an equation of an inverted cone with a slope of .this result of an inverted cone with a slope of giving rise to n orbits was previously found elsewhere. see fig .[ fig:2dplot ] for the surfaces of revolution with , , and that yield stationary elliptical - like orbits for all radii on its surface .vs with the slope defined in eq .( [ nslope ] ) for . each surface will generate the stationary elliptical orbits of newtonian gravitation when the eccentricities of the orbits are small ( n orbits ) . 
for the surface , the slope of the surface diverges at .the surface is that of an inverted cone with a slope of .the slope of all surfaces approach in the small limit .it is noted that part of the surfaces have been removed to allow for better visibility of the surface.,width=321 ] + we further note that although eq .( [ nslope ] ) yields the desired n orbits , these orbits do not , in general , obey kepler s third law . to find the corresponding kepler - like relation for circular orbits on a cylindrically symmetric surface with a slope given by eq .( [ nslope ] ) , we employ eqs .( [ teqnmotion ] ) and ( [ angmom2 ] ) , set and use the fact that for circular orbits .we find a relationship between the period and the radius of the orbit of the form [ period ] t^2=r to zeroth - order in the eccentricity .notice that for shape profiles for the 2d surface where , eq .( [ period ] ) takes on the approximate form [ periodsmall ] t^2r .this corresponds to the kepler - like relationship for an object rolling or sliding on the inside surface of an inverted cone .conversely , for shape profiles for the 2d surface where , eq .( [ period ] ) takes on the approximate form [ periodbig ] t^2r^3 , which is of course kepler s third law for planetary motion . noticethat this regime can only occur when as the radius of the orbiting body resides within for negative .hence , for an object rolling or sliding in an elliptical - like orbit on a 2d cylindrically symmetric surface with a slope given by eq .( [ nslope ] ) , kepler s first law is obeyed to first - order in the eccentricity for all values of and kepler s third law is obeyed only in the regime to zeroth - order in the eccentricity .kepler s second law is obeyed generically for central - force motion to all orders in the eccentricity . see fig . [ fig:2dplotr4 ] for a surface of revolution plot of this surface .having found the slope of the cylindrically symmetric surface that will generate newtonian orbits in the small eccentricity regime , we now wish to extend our study to general relativity ( gr ) . using schwarzschild coordinates , the equations of motion for an object of mass _ m _ orbiting about a non - rotating , spherically symmetric object of mass in gr are of the form -++&=&0[tgr ] + & = & , [ angmom3 ] where , in this section, a dot indicates a derivative with respect to the proper time and is the conserved angular momentum per unit mass. it is noted that the above equations of motion are derivable from a relativistic lagrangian , where here we omit the derivation . without loss of generality, we again choose the motion of the orbiting object to reside in the equatorial plane as the angular momentum is conserved . upon comparison of eqs .( [ tgr ] ) and ( [ angmom3 ] ) with the non - relativistic newtonian equations of motion given by eqs . and ( [ angmom ] ) for a newtonian gravitational potential , we note the presence of an additional term in the first equation of motion of eq . 
.this relativistic correction term offers a small- modification and gives rise to a precession of elliptical orbits , which will become apparent in what follows .following a similar procedure to that outlined in section [ ng ] , eq .( [ tgr ] ) can be transformed into an orbital equation of motion of the form [ phigr ] -()^2-r+r^2+=0 , where we used the differential operator relation of eq .( [ oper ] ) with the time derivative replaced with a derivative with respect to the proper time .we again wish to explore elliptical - like orbits where the eccentricities are small .we choose a perturbative solution of the form [ rgr ] r()=r_0(1- ( ) ) , where and are free parameters that are to be determined by eq .( [ phigr ] ) . inserting eq .( [ rgr ] ) into eq .( [ phigr ] ) and keeping terms up to first - order in , we find that eq .( [ rgr ] ) is indeed a valid solution when the constants and parameters obey the relations ^2&=&gmr_0(1-)^-1[zeroordergr ] + ^2&=&1-[firstordergr ] .an expansive treatment of perihelion precession for nearly circular orbits of an object residing in a central potential and about a static spherically symmetric massive object in gr can be found elsewhere. notice that when the gr correction term is set to zero , eqs .( [ zeroordergr ] ) and ( [ firstordergr ] ) reduce to the results of newtonian gravitation where and stationary elliptical orbits are recovered as .also notice that for a non - zero gr correction term , only takes on values less than one .this well known result illustrates how the stationary elliptical orbits of newtonian gravitation are replaced with precessing elliptical orbits in gr , for orbits about a non - rotating , spherically symmetric massive object . for a given central mass , eq .( [ firstordergr ] ) implies that the deviation from stationary elliptical orbits increases with decreasing radius , as should be expected .hence , the innermost planets of our solar system experience a greater relativistic correction to the newtonian orbits than the outermost planets . for the student of gr , one may notice that the relation presented in eq .( [ firstordergr ] ) is void of a dependency on the eccentricity of the orbit . in a non - perturbative treatment ,the precession of elliptical orbits is in fact dependent on the eccentricity. this dependency amounts to a second - order contribution for orbits with small eccentricities and is therefore absent in our treatment as our calculation is only valid to first - order in the eccentricity .notice that when , eq .( [ firstordergr ] ) yields a complex value for .this implies that elliptical - like orbits are only allowed for radii larger than this threshold radius .this threshold radius represents the innermost _stable _ circular orbit ( isco ) . also notice that when , the conserved angular momentum per unit mass , , given by eq .( [ zeroordergr ] ) also becomes complex in addition to .this implies that there are no circular orbits , stable or unstable , for the case of an object orbiting around a non - rotating , spherically symmetric central object with a radius . in this subsection, we present the 2d surfaces that generate the gr orbits . 
adopting eq .( [ firstordergr ] ) for and inserting this into eq ., the slope of this surface takes the form [ grslope ] = ( 1+r^2(2+))^-1/2 , where we defined the dimensionless relativistic quantity .the validity of this solution for the slope can easily be verified by first inserting eq .( [ rnu ] ) into eq .( [ grslope ] ) and expanding to first - order in and then inserting both eqs .( [ rnu ] ) and ( [ grslope ] ) into eq .( [ phieqnmotion ] ) , with defined by eq .( [ firstordergr ] ) . vs with slopes defined in eq .( [ grslope ] ) for where the relativistic quantity ranges from and increases in 0.1 intervals , displayed from bottom to top respectively .each surface will generate the _ precessing _ elliptical - like orbits of general relativity when the eccentricities are small ( gr orbits ) for the given .the shape of each surface , and hence the elliptical - like orbits each surface will generate , are dependent upon the dimensionless relativistic quantity .a surface with a larger value of , which equates to a smaller value of the precession parameter , will generate a larger precession for the orbiting body.,width=321 ] a 2d cylindrically symmetric surface with a slope given by eq .( [ grslope ] ) will generate gr orbits for all radii on the surface .notice that the slope of this surface is dependent upon the central mass , , and the characteristic radius of the orbiting body , , of the celestial system whose orbital dynamics this surface seeks to mimic via the dimensionless relativistic quantity .further , the slope of this surface depends on in both the overall factor and in the power of of the 2d surface , as can be seen in eq .( [ grslope ] ) .notice that a surface defined by a larger , which equates to a smaller value of the precession parameter via eq .( [ firstordergr ] ) , will generate an orbit with a larger precessional rate of the apocenter , as can be seen in eq . .hence , one needs a unique 2d surface to replicate each unique free particle orbit in gr .this result differs dramatically from the result of the newtonian treatment of the preceding section as the slope given by eq .( [ nslope ] ) is independent of the parameters of the theory .also notice that the general relativistic relation of eq .reduces to its newtonian counterpart given by eq . when the relativistic quantity is set equal to zero .although the perturbative treatment of this manuscript is only valid for orbits with small eccentricites , the treatment presented here is fully relativistic .the quantity has a range , as corresponds to the innermost stable circular orbit ( isco ) discussed previously where elliptical - like orbits are not allowed . as this relativistic quantity approaches one , the slope given by eq .diverges for all values of .[ fig : grplot ] for ten stacked surface of revolution plots with , where the relativistic quantity ranges over the interval , increasing in 0.1 intervals . in short ,a surface defined by a larger value of generates an orbit with a larger precession of the apocenter . 
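The explicit definition of the dimensionless relativistic quantity is garbled in this extraction; however, the quoted facts that the Newtonian limit corresponds to the quantity going to zero and that the value one marks the innermost stable circular orbit are consistent with beta = 6GM/(c^2 r0), i.e. nu^2 = 1 - beta. Under that assumed reading (an assumption of this sketch, not a statement from the paper), the short script below evaluates beta and the implied apsidal advance per orbit, approximately pi*beta at first order, for Mercury.

```python
# Illustrative sketch: relativistic quantity and apsidal advance for Mercury,
# ASSUMING beta = 6*GM/(c^2 * r0), a reading consistent with the quoted limits
# (Newtonian limit beta -> 0, innermost stable circular orbit at beta -> 1).
import numpy as np

GM_sun = 1.32712440018e20   # m^3/s^2
c      = 2.99792458e8       # m/s
a      = 5.791e10           # Mercury: average Sun-planet distance [m]
T      = 87.969 * 86400.0   # Mercury: orbital period [s]

beta = 6.0 * GM_sun / (c**2 * a)
nu   = np.sqrt(1.0 - beta)                 # precession parameter, nu^2 = 1 - beta
dphi = 2.0 * np.pi * (1.0 / nu - 1.0)      # apsidal advance per orbit ~ pi*beta [rad]

orbits_per_century = 100.0 * 365.25 * 86400.0 / T
arcsec_per_orbit = np.degrees(dphi) * 3600.0

print(f"beta                = {beta:.3e}")
print(f"advance per orbit   = {arcsec_per_orbit:.4f} arcsec")
print(f"advance per century = {arcsec_per_orbit * orbits_per_century:.1f} arcsec")
```

The result, roughly 41 arcseconds per century, is close to the observed relativistic perihelion advance of Mercury of about 43 arcseconds per century; the small difference comes from the finite-eccentricity correction, which lies beyond the first-order-in-eccentricity treatment used here.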
for the planets of our solar system , the relativistic quantity has been calculated for each and are displayed in table [ tab : beta ] .we note how incredibly small the relativistic parameter is for each of the solar system planets .these values for offer a tiny modification to the slope of the surface defined by eq .( [ nslope ] ) and replace the stationary elliptical orbits of newtonian gravitation with the precessing elliptical - like orbits of gr ..table displaying the eccentricities of the orbits , the average sun - planet distances , and the dimensionless relativistic quantity for the solar system planets. the small values of for the solar system planets reveal the scale of the modifications needed to bring the slopes of the surfaces that generate the n orbits to coincide with those that generate the gr orbits . [cols="<,^,^,^",options="header " , ]in this manuscript , we showed that there exists no 2d cylindrically symmetric surface residing in a uniform gravitational field that is capable of reproducing the exact particle orbits of an object subjected to a generic 3d spherically symmetric potential . by employing a perturbative method to the equations of motion for an object orbiting on a 2d cylindrically symmetric surface, we found the general solution for the slope of the surface , given by eq .( [ slope ] ) , that will generate elliptical - like orbits with small eccentricities with a constant precession parameter for all radii on the surface .we then examined this solution for the special cases of the surfaces that will generate n and gr orbits . for a 2d surface with a slope given by eq . with ,n orbits can be generated to first - order in the eccentricity , where kepler s third law will be met to lowest order in the eccentricity when .the orbits generated on this surface differ from the elliptical - like orbits on a 2d surface corresponding to a newtonian gravitational potential well , where kepler s first law is not obeyed for any radius on the surface . for a 2d surface with a slope given by eq ., gr orbits can be generated .the slopes of these respective surfaces are functionally dependent upon the mass of the central object and the radius of the orbiting body of the gravitationally bound system whose orbital dynamics these surfaces seek to mimic .these surfaces reduce to their newtonian counterpart when the relativistic term is set to zero , as is expected . a comparative study of these surfaces , whose slopes are defined by eqs . and, offers a concrete visualization of the deviations that emerge between newtonian gravitation and general relativity for an object orbiting about a spherically symmetric massive object , when the eccentricity is small .as embedding diagrams prove quite useful to the beginning student of general relativity , it can be argued that these 2d cylindrically symmetric surfaces that yield n and gr orbits are equally useful in gaining insight into these respective theories of gravitation .an interesting undergraduate physics / engineering project could entail the manufacturing and experimental testing of a 2d cylindrically symmetric surface whose slope is defined by eq . . with the advent of 3d printing ,the manufacturing of such a classroom - size surface should be feasible . 
Using a camera mounted above the surface and Tracker, a video-analysis and modeling software program, the angular separation between successive apocenters of elliptical-like orbits with differing eccentricities could be measured and then compared with the theoretical prediction. As the analysis presented in the manuscript is valid to first order in the eccentricity, one expects a larger deviation from this prediction for an orbit with a larger eccentricity. For details on the above experimental setup, which was used to analyze circular orbits on a warped spandex surface, see a previous work of the author. For the student who seeks to generate the GR orbits, a surface with a slope defined by eq. ([grslope]) could equivalently be constructed. To generate an elliptical-like orbit that advances by a chosen angle per revolution, the relativistic quantity must be assigned the corresponding value in the slope of this surface. The inclusion of these surfaces in the classroom could offer the student of Newtonian gravitation and general relativity an incredibly useful tool, as the experimenter can see, and measure, some of the properties of these elliptical-like orbits firsthand. Bertrand's theorem states that an orbiting body residing in a spherically symmetric potential will move in closed elliptical orbits _only_ for a Newtonian gravitational potential or a harmonic oscillator potential. The perihelion (aphelion) is the point of closest (farthest) approach of a planet in orbit around the Sun. These points march forward, or precess, in the azimuthal direction when additional orbiting bodies are included in the Newtonian treatment or when the theoretical framework of GR is employed. The pericenter (apocenter) refers to the point on the elliptical orbit that is closest to (farthest from) the central mass. This percent error is calculated from the corresponding expression, where the earlier equations were used in calculating the second equality. This percent error is independent of the remaining orbital parameters, depending only on the eccentricity of the orbit and the angular position of the planet, and is largest at a particular angular position. Notice that the constant in question is the characteristic radius of an elliptical-like orbit in both _Newtonian gravitation and general relativity_. Similarly, the corresponding constant in eq. ([rnu]) is the characteristic radius of an elliptical-like orbit on the _2D surface_. Notice that its numerical value merely sets the scale for the 2D surface and, without loss of generality, can be normalized to a fixed set of values. This result is reminiscent of the curvature constant that arises in FRW cosmology. The values for the mass of the Sun and the average Sun-planet distances, which were used to calculate the relativistic factor, are found on the NASA web site, http://nssdc.gsfc.nasa.gov. We note that we approximated the characteristic radius as the average Sun-planet distance in calculating the entries of this table. For Mercury, where the discrepancy between the characteristic radius and the average Sun-planet distance is largest, the deviation appears only in the second significant figure of the corresponding entry.
|
Embedding diagrams prove to be quite useful when learning general relativity, as they offer a way of visualizing spacetime curvature through warped two-dimensional (2D) surfaces. In this manuscript we present a different 2D construct that also serves as a useful conceptual tool for gaining insight into gravitation, in particular orbital dynamics: namely, the cylindrically symmetric surfaces that generate Newtonian and general relativistic orbits with small eccentricities. Although we first show that no such surface exists that can _exactly_ reproduce the arbitrary bound orbits of Newtonian gravitation or of general relativity (or, more generally, of _any_ spherically symmetric potential), surfaces do exist that closely approximate the resulting orbital motion for small eccentricities; exactly the regime that describes the motion of the solar system planets. These surfaces help to illustrate the similarities, as well as the differences, between the two theories of gravitation (i.e., stationary elliptical orbits in Newtonian gravitation and precessing elliptical-like orbits in general relativity) and offer, in this age of 3D printing, an opportunity for students and instructors to experimentally explore the predictions made by each.
|
Essential for the feasibility of acoustic neutrino detection is a good understanding of the background of transient acoustic signals in the deep sea and the ability to suppress them or identify them as background. The transient signals are very diverse and originate from anthropogenic and biological sources as well as weather-correlated sources. The aim of the AMADEUS project is to investigate the method of acoustic neutrino detection. AMADEUS is integrated into the ANTARES neutrino telescope, which is located in the Mediterranean Sea; the acoustic set-up consists of six clusters of six acoustic sensors each. The spacings between the sensors within a cluster are about 1 m, and between the clusters up to 350 m. In the experiment, transient signals with bipolar (i.e. neutrino-like) content are selected using on-line filtering techniques. As the variety of recorded transient signals is still high, an effective classification scheme to discriminate between background and neutrino-like signals is researched and presented here. The analysis chain incorporates a simulation of transient signals, a filter analogous to the one used on-line in the experiment, feature extraction algorithms, and signal classification based on machine learning algorithms. The goal of this research is to find a robust and well performing system to distinguish between neutrino-like and other transient signals occurring in the deep sea, such as those from man-made and biological sources. In this section, the methods used for training and testing the classification system will be explained. A special-purpose simulation was designed for testing the feature extraction and classification system, which is also trained with simulated data. The simulation is capable of generating typical deep-sea signals, i.e. waveforms present at the ANTARES site such as bipolar and multipolar pulses, echoes of the ANTARES acoustic positioning system, or random signals. The different signal types are generated following a uniform frequency distribution. Starting from random source positions within a given volume around the detector, the signals are propagated to the sensors and characteristic ambient noise for different sea states is added. The output, a continuous data stream, is directed to the filter and from there to the feature extraction or directly to the classification system. As a first step, the incoming continuous data stream is subjected to a filter system equivalent to the one used in the experiment, where it is used to reduce the amount of data stored for off-line classification and reconstruction. The filter set-up consists of an amplitude threshold for strong transient signals, which is self-adjusting to the changing ambient noise conditions, and a matched filter for bipolar signals. As reference signal for the matched filter, a bipolar pulse is used corresponding to the one produced by a shower at a distance of 300 m perpendicular to the shower axis. In a next step, the characteristics of the filtered signals are extracted. The resulting feature vector contains the time and frequency domain characteristics of the signal as well as the results of a matched filter bank, which was tuned for neutrino-like signals. The bank consists of six reference signals corresponding to angles near 90 degrees to the shower axis, in one-degree steps, for a shower at a distance of 300 m.
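The basic operation shared by the on-line filter and the feature-extraction filter bank is cross-correlation of the data stream with a bipolar reference template. A minimal sketch of this idea is shown below; it is purely illustrative, with an assumed pulse shape (derivative of a Gaussian as a stand-in for the simulated neutrino signal), assumed sampling rate, and an assumed 5-sigma threshold, none of which are the actual AMADEUS values.

```python
# Illustrative matched-filter sketch for bipolar pulses (assumed waveform and
# parameters; not the actual AMADEUS on-line filter).
import numpy as np

fs = 250_000                               # assumed sampling rate [Hz]
t = np.arange(-2e-4, 2e-4, 1.0 / fs)       # time axis of the reference pulse [s]
sigma = 3e-5
ref = -t * np.exp(-t**2 / (2 * sigma**2))  # bipolar pulse ~ derivative of a Gaussian
ref /= np.linalg.norm(ref)                 # unit-energy template

rng = np.random.default_rng(0)
stream = rng.normal(0.0, 1.0, 200_000)     # stand-in for ambient-noise data
pos = 120_000
stream[pos:pos + ref.size] += 8.0 * ref    # inject one neutrino-like pulse

corr = np.correlate(stream, ref, mode="same")   # matched-filter output
threshold = 5.0 * np.std(corr)                  # noise-adjusted threshold (here: 5 sigma)
candidates = np.where(np.abs(corr) > threshold)[0]

print("first trigger index:", candidates[0] if candidates.size else None,
      "| pulse injected near index:", pos)
```

In the experiment the threshold adapts continuously to the ambient noise level, and the feature-extraction stage uses a bank of several such templates rather than a single one.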
In the time domain, the number of occurring peaks, the peak-to-peak amplitude of the largest peak, its asymmetry and its duration are extracted. In the frequency domain, the main frequency component and the excess over the noise background are used as features. From the results of the matched filter bank, the best match is taken into account. From this matched filter output, the number of peaks and the amplitude, width and integral of the largest peak are stored in the feature vector. As an independent feature vector, the filtered waveform itself can be subjected to the classification algorithm. The classification system stems from machine learning algorithms trained and tested with data from the simulation. As input, either the extracted feature vector or the filtered waveform is used; as output, either binary class labels (bipolar or not) or multiple class labels (one for each signal type in the simulation data) are predicted. The following algorithms have been investigated for individual sensors and for clusters of sensors:

* Naive Bayes: this simple classification model is based on applying Bayes' theorem and assuming that the features are conditionally independent of one another for each class. For a given feature vector, the class is selected using probabilities gained from the training data.
* Decision tree: this classification model stems from a tree-like structured set of rules. Starting at the root, the tree splits up at each node based on the input variable with the highest information gain. The path from the root of the tree to one of the leaves, which represent the class labels, defines one rule.
* Random forest: a random forest is a collection of decision trees. The classification works as follows: the random forest takes the input feature vector, makes a prediction with every tree in the forest, and outputs the class label that received the majority of votes. The trees in the forest are trained with different subsets of the original training data.
* Boosting trees: these combine the performance of many so-called weak classifiers to produce a powerful classification scheme. A weak classifier is only required to be better than chance; many of them, smartly combined, result in a strong classifier. Decision trees are used as weak classifiers in this boosting scheme. In contrast to a random forest, the decision trees are not necessarily full-grown trees.
* Support vector machine: an SVM maps feature vectors into a higher-dimensional space. A hyper-plane is searched for such that the margin between this hyper-plane and the nearest feature vectors from both of the two labels of a binary class is maximal.

The algorithms used for boosting trees and SVM are restricted to binary class labels as output. The same training and testing data sets are used for the different algorithms. The predictions for the individual sensors are combined into a new feature vector and used as input in order to train and test the models for the clusters of sensors.
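A minimal sketch of how such a two-stage scheme could be set up with standard tools is given below. It is illustrative only: the feature values are random stand-ins rather than simulated acoustic data, and off-the-shelf scikit-learn classifiers are used in place of the collaboration's actual implementation. Per-sensor models are trained on feature vectors, and their predictions are then stacked into a new feature vector for a cluster-level model, mirroring the procedure described above; the last line also evaluates the testing error and the "success of training" ratio defined in the results section.

```python
# Illustrative two-stage classification sketch (random stand-in data;
# scikit-learn classifiers used in place of the actual analysis software).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_events, n_sensors, n_features = 5000, 6, 10         # one cluster of six sensors

y = rng.integers(0, 2, n_events)                       # binary label: bipolar or not
# stand-in per-sensor feature vectors, weakly correlated with the label
X = rng.normal(0, 1, (n_events, n_sensors, n_features)) + y[:, None, None] * 0.5

idx_tr, idx_te = train_test_split(np.arange(n_events), test_size=0.3, random_state=0)

# stage 1: one model per sensor
sensor_pred_tr, sensor_pred_te = [], []
for s in range(n_sensors):
    clf = GradientBoostingClassifier()                 # or RandomForestClassifier()
    clf.fit(X[idx_tr, s], y[idx_tr])
    sensor_pred_tr.append(clf.predict_proba(X[idx_tr, s])[:, 1])
    sensor_pred_te.append(clf.predict_proba(X[idx_te, s])[:, 1])

# stage 2: combine per-sensor predictions into a cluster-level feature vector
Z_tr, Z_te = np.column_stack(sensor_pred_tr), np.column_stack(sensor_pred_te)
cluster = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z_tr, y[idx_tr])

train_err = 1.0 - cluster.score(Z_tr, y[idx_tr])
test_err = 1.0 - cluster.score(Z_te, y[idx_te])
print(f"testing error = {test_err:.3f}, "
      f"success of training = {test_err / max(train_err, 1e-9):.2f}")
```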
As an overall result, multiple class labels as output are less effective than binary ones, by more than a factor of two. Binary class labels are therefore the standard output for the results presented in the following. Weak classifiers like naive Bayes and decision trees show a high testing error above 14% and are neither more robust against changing ambient noise conditions nor significantly faster than the other classifiers (cf. Fig. [rejected]). Although the SVM is a strong classifier, its high numerical complexity and missing robustness disqualify it (cf. Fig. [rejected_success]). Thus the most favourable classifiers are random forest and boosting trees. In addition, the usage of clusters shows a substantial improvement over individual sensors. Random forest and boosting trees are robust and produce well-trained models. The elapsed time for processing one event is less than a second. For the individual sensors and the extracted features as input, a testing error of about 5% for the boosting trees and of about 10% for the random forest is achieved, which is further improved by more than a factor of 4 by combining the sensors into clusters, with errors well below 1% (cf. Fig. [testing_error_bin] and Fig. [success_of_training_bin]). Using the extracted waveform as input yields similar results: the random forest achieves a testing error of about 6% and the boosting trees of about 12%. These errors are also improved by a factor of 4 when combining the individual sensors into clusters (cf. Fig. [testing_error_sigs] and Fig. [success_of_training_sigs]). The results show that machine learning algorithms are a promising way to find a robust, effective and efficient classification system. The classifiers perform well under different levels of ambient noise and are able to distinguish between bipolar (i.e. neutrino-like) and other signals, and especially to differentiate them from short multipolar signals. This is necessary for the further analysis of neutrino-like events in the sense of searching for the specific pancake shape of the spatial pressure distribution from a neutrino interaction. In a next step, the classification system will be tested against data from the experiment. If the performance matches the simulation results, it will be used to perform an analysis of the temporal and spatial distribution of the background of bipolar signals. The system will then be extended towards classifying neutrino-like events with all their features, in particular their disk-like spatial propagation. The AMADEUS project is part of the activities of the ANTARES collaboration and is supported by the German government (BMBF) with grants 05CN5WE1/7 and 05A08WE1.
|
This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like random forest and boosting trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.
Keywords: acoustic particle detection, neutrino detection, signal classification, feature extraction, machine learning
|
many processes in biological systems as well as in the chemical and petroleum industry involve the transport and filtration of particles in porous media with which they interact through various forces .these interactions often result in particle adsorption and/or entrapment by the medium .examples include filtration in the respiratory system , groundwater transport , in situ bioremediation , passage of white blood cells in brain blood vessels in the presence of jam-1 proteins , passage of viral particles in granular media , separation of species in chromatography , and gel permeation .the particle - medium interactions in these systems are not always optimal for particle retention .for example , the passage of groundwater through soil often happens under chemically unfavorable conditions , and as a result many captured particles ( e.g. , viruses and bacteria ) may be released back to the solution . while filtration under favorable conditions has been studied and modeled extensively , we are just beginning to understand the process occurring under unfavorable conditions .several models have been developed to describe the kinetics of particle filtration under unfavorable conditions .the most commonly used ones are , in essence , phenomenological mean - field models based on the convection - diffusion equation ( cde ) [ see eq .( [ eq : convection - diffusion ] ) and sec .[ sec : background ] ] . typically , one models the dynamics of free particles in terms of the average drift velocity and the hydrodynamic dispersivity , while the net particle deposition rate accounting for particle attachment and detachment at trapping sites is a few - parameter function of local densities of free and trapped particles . for given filtering conditions, the parameters and can be determined from a separate experiment with a tracer , while the coefficients of the function can be obtained by fitting eq .( [ eq : convection - diffusion ] ) to the breakthrough curves . despite their attractive simplicity, it is widely accepted now that the phenomenological models at the mean - field level have significant problems .first , the depth dependent deposition curves for viruses and bacteria are often much steeper than it would be expected if the deposition rates were uniform throughout the substrate .this was commonly compensated by introducing the depth - dependent deposition rates .the problem was brought to light in ref . , where it was demonstrated that the steeper - than - expected deposition rates under unfavorable filtering conditions also exist for inert colloids .second , bradford et al . pointed out that the usual mean - field models based on the cde , accounting for dynamic dispersivity and attachment and detachment phenomena , can not explain the shape of both the breakthrough curves and the subsequent filter flushing . 
in these experiments some particles were retained in the medium , and the authors argued for the need to include the straining ( permanent capture of colloids ) in the model .even so , these models may still be insufficient to fit the experiments .more elaborate models to describe deep - bed filtration have been proposed in refs .these models go beyond the mean - field description by simulating subsequent filter layers as a collection of multiply connected pipes with a wide distribution of radii , which results in a variation in flow speed and also of the attachment and detachment rates ( even straining in some cases ) .the disadvantage of these models is that they are essentially computer based : it is difficult to gain an understanding of the qualitative properties of the solutions , without extensive simulations .furthermore , the simulation results suffer from statistical uncertainties . in the present work ,we develop a minimalist mean - field model to investigate filtering under unfavorable conditions .the model accounts for both a convective flow and the primary attachment and detachment processes .unlike the previous mean - field models of filtration , our model contains attachment sites ( traps ) with different detachment rates [ see eq .( [ eq : nonlin - trap ] ) ] , which allows an accurate modeling of the filtration dynamics over long - time periods for a broad range of inlet concentrations . yet, the model admits exact analytical solutions for the profiles of the deposition and breakthrough curves which permit us to understand qualitatively the effect of the corresponding parameters and design a protocol for extracting them from experiment .one of the advantages of our model is that the `` shallow '' short - lived traps represent the same effect as hydrodynamical dispersivity without generating unphysically fast moving particles or requiring an additional boundary condition at the inlet of the filter .the `` deep '' long - lived traps allow to correctly simulate long - time asymptotics of the released colloids in the effluent during a washout stage .the traps with intermediate detachment rates determine the most prominent features of breakthrough curves .the effect of every trap kind is to decrease the apparent drift velocity .as attachment and detachment rate constants depend on colloid size , we can also account for the apparent acceleration of larger particles without any microscopic description as in ref .the particle - size distribution can be also used to analyze the steeper deposition profiles near the inlet of the filter .the paper is organized as follows . 
in sec .[ sec : background ] , we give a brief overview of colloid - transport experiments , cde models , and their analytical solutions in simple cases .the linearized multirate convection - only filtration model is introduced in sec .[ sec : linearized ] .the model is characterized by a discrete or continuous trap - release - rate distribution ; it is generally solved in quadratures , and completely in several special cases .the results support our argument that the hydrodynamic dispersivity can be traded for shallow traps .this serves as a basis for the exact solution of the full mean - field model for filtration under unfavorable conditions introduced in sec .[ sec : full ] , where we show that a large class of such models can be mapped exactly back to the linearized ones and analyze their solutions , as well as the propagation velocity , structure , and stability of the filtering front .we suggest an experimental protocol to fit the parameters of the model in sec . [sec : experiment ] and give our conclusions in sec .[ sec : conclusions ] .a typical setup of a colloid - transport experiment is shown in fig . [fig : setup ] . a cylindrical column packed with sand or other filtering materialis saturated with water running from top to bottom until the single - phase state ( no trapped air bubbles ) is achieved . at the end of this stage , colloidal particlesare added to the incoming stream of water with both the concentration of the suspended particles and the flow rate kept constant over time .this is sometimes followed by a filter washout stage in which clean water is pumped through the filter .the filtration processes are characterized by two relevant experimental quantities : the particle breakthrough and deposition profile curves . while breakthrough curve represents the concentration of effluent particles at the outlet of the column as a function of time, deposition curves illustrate the depth distribution of concentration of the particles retained throughout the column .as the suspended particles move through the filtering column , each individual colloid follows its own trajectory . consequently , even for small particles that are never trapped in the filter , the passage time through the column fluctuates . in the case of laminar flows with small reynolds numbers and sufficiently small particles , which presumably follow the local velocity lines , the passage time scales inversely with the average flow velocity along the column .the effects of the variation between the trajectories of particles as well as their speeds can be approximated by the velocity - dependent diffusion coefficient , where is the _ hydrodynamic dispersivity _ of the filtering medium . in comparison ,the actual diffusion rate of colloids in experiments is negligibly small .dispersivity is often obtained through tracer experiments in which the motion of the particles , i.e. , salt ions , which move passively through the filter medium without being trapped , is traced as a function of time .overall , the dynamics of the suspended particles along the filter can be approximated by the mean - field cde , where is the number of suspended particles per unit water volume averaged over the filter cross section at a given distance from the inlet and is the deposition rate which may include both attachment and detachment processes . 
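As a numerical illustration of this convection-diffusion description, a minimal explicit finite-difference sketch is given below. It is not the authors' code: the grid, the parameter values and the choice of a vanishing deposition rate (tracer case) are assumptions made only to show the structure of the calculation. Convection is treated with an upwind difference, dispersion with a central difference, and the inlet concentration is held fixed.

```python
# Minimal sketch: explicit upwind/central scheme for the mean-field CDE
#   dc/dt + v dc/dx = D d2c/dx2 - q(c, s),   here with q = 0 (tracer case).
import numpy as np

v, lam = 1.0e-4, 1.0e-3                  # assumed drift velocity [m/s] and dispersivity [m]
D = lam * v                              # hydrodynamic dispersion coefficient
L, nx = 0.10, 400                        # column length [m] and number of grid cells
dx = L / nx
dt = 0.4 * min(dx / v, dx**2 / (2 * D))  # stable step (CFL and diffusion limits)

c = np.zeros(nx)                         # initially clean column
c0 = 1.0                                 # constant inlet concentration (arbitrary units)

t, t_end = 0.0, 1.5 * L / v
while t < t_end:
    c_new = np.empty_like(c)
    c_new[0] = c0                                                 # fixed inlet value
    c_new[1:] = c[1:] - dt * v * (c[1:] - c[:-1]) / dx            # upwind convection
    c_new[1:-1] += dt * D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # central diffusion
    c = c_new
    t += dt

print("breakthrough value c(L)/c0 at t = 1.5 L/v :", c[-1] / c0)
```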
the diffusion approximation employed in eq .( [ eq : convection - diffusion ] ) has two drawbacks which could seriously affect the resulting calculations if enough care is not used ._ first _ , while the diffusion approximation works well to describe the concentration of suspended particles in places where is large , it seems to significantly overestimate the number of particles far downstream where is expected to be small or zero .this is mainly due to the fact that the diffusion process allows for infinitely fast transport , albeit for a vanishingly small fraction of particles . in the simple case of tracer dynamics [ in eq .( [ eq : convection - diffusion ] ) ] , the general solutions as presented in eqs .( [ eq : tracer - convolution ] ) and ( [ eq : tracer - gf ] ) are non - zero even at very large distances .while in many instances this may not be crucial , the application of the model to , e.g. , public health and water safety issues might trigger a false alert . _second _ , for the filtering problem one expects the concentration to be continuous , with the concentration downstream uniquely determined by that of the upstream . on the other hand , eq .( [ eq : convection - diffusion ] ) contains second spatial derivative , which requires in addition to the knowledge of at the inlet , , another type of boundary condition to describe the concentration of particles along the column .this additional boundary condition could be , e.g. , the spatial derivative at the inlet , , or the outlet , , or the fixed value of the concentration at the outlet .we show below that fixing a derivative introduces an incontrollable error . on the other hand, we can not introduce a boundary condition for the function at the outlet , , as this is precisely the quantity of interest to calculate .the situation has an analogy in neutron physics . while neutrons propagate diffusively within a medium , they move ballistically in vacuum. a correct calculation of the neutron flux requires a detailed simulation of the momentum distribution function within a few mean - free paths from the surface separating vacuum and the medium .in contrast to the filtration theory , for the case of neutron scattering , where the neutron distribution is stationary it is common to use an approximate boundary condition in terms of a `` linear extrapolation distance '' ( the inverse logarithmic derivative of neutron density ) . the cde [ see eq .( [ eq : convection - diffusion ] ) ] can be solved on a semi - infinite interval ( ) with setting at and calculating the value of at as an approximation for the concentration of effluent particles .to illustrate this situation , we solve eq .( [ eq : convection - diffusion ] ) for the case of tracer particles , where the deposition rate is set to zero , .we consider a semi - infinite geometry with the initial condition and a given concentration at the inlet .the corresponding solution is presented in sec .[ sec : tracer ] .the spatial derivative at the boundary given in eq .( [ eq : tracer - step - solution - bc ] ) is non - zero , time - dependent , and rather large at early stages of evolution when the diffusive current near the boundary is large .therefore , setting an additional boundary condition for the derivative , e.g. , , is unphysical. 
on the other hand , the problem with the boundary condition far downstream , , , can be ill - defined numerically , as this condition is automatically satisfied to a good accuracy as long as the bulk of the colloids has not reached the end of the interval . the simplest version of the convection - diffusion equation [ eq . ( [ eq : convection - diffusion ] ) ] applies to tracer particles where the deposition rate is set to zero , , with the initial conditions , , the laplace - transformed function obeys the equation where primes denote the spatial derivatives , .the solution to the above equation is , with at semi - infinite interval , only the solution with negative does not diverge at infinity . given the laplace - transformed concentration at the inlet , , we obtain ^{1/2}\right).\ ] ] the inverse laplace transformation of the above equation is a convolution , with the tracer green s function ( gf ) in the special case , the integration results +e^{x/\lambda } \erfc\left[\frac{t v+x}{2 ( { t v \lambda } ) ^{1/2}}\right]\right),\ ] ] where is the complementary error function .we note that the spatial derivative of the solution of eq .( [ eq : tracer - step - solution ] ) at is different from zero .indeed , it depends on time and is divergent at small , implying an unphysically large diffusive component of the particle current , in the presence of the straining term , in eq .( [ eq : convection - diffusion ] ) , the gf can be obtained from eq .( [ eq : tracer - gf ] ) by introducing exponential decay with the rate , note that we wrote the straining rate as a product of the capture rate by infinite - capacity `` permanent '' traps with the concentration per unit volume of water .such a factorization is convenient for the non - linear model presented later in sec .[ sec : full ] . the same notations are employed throughout this work for consistency .in this section we discuss the linearized convection - only multitrap filtration model , a variant of the multirate cde model first proposed in ref . .our model is characterized by a ( possibly continuous ) density of traps as a function of detachment rate [ see eq .( [ eq : ntrap - density ] ) ] .generically , continuous trap distribution leads to non - exponential ( e.g. , power - law ) asymptotic forms of the concentration in the effluent on the washout stage .the main purpose of this section is to demonstrate that `` shallow '' traps with large detachment rates have the same effect as the hydrodynamic dispersivity in cde .in addition , the obtained exact solutions will be used in sec .[ sec : full ] as a basis for the analysis of the full non - linear mean - field model for filtration under unfavorable conditions . to rectify the problems with the diffusion approximation noted previously , we suggest an alternative approach for the propagation of particles through the filtering medium . instead of considering the drift with an average velocity with symmetricdiffusion - like deviations accounting for dispersion of individual trajectories , we consider the convective motion with the maximum velocity .the random twists and turns delaying the individual trajectories are accounted for by introducing poissonian traps which slow down the passage of the majority of the particles through the column . 
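The closed-form tracer solution quoted above is partly garbled by the extraction; its second term fixes the combination of variables, and the first term then follows from the standard constant-inlet solution of the convection-dispersion equation with D = lambda*v. Under that reconstruction (an inference, not a verbatim quote of the paper's formula), the sketch below evaluates c(x,t)/c0 = (1/2){erfc[(x - vt)/(2 sqrt(v lambda t))] + exp(x/lambda) erfc[(x + vt)/(2 sqrt(v lambda t))]} for illustrative parameter values; it can be compared, if desired, against a direct numerical solution of the CDE.

```python
# Sketch: evaluate the constant-inlet tracer solution.  The first erfc argument
# is reconstructed from the standard form; parameters are illustrative.
import numpy as np
from scipy.special import erfc

def tracer_profile(x, t, v=1.0e-4, lam=1.0e-3, c0=1.0):
    """c(x,t) for c(x,0) = 0 and c(0,t) = c0, with D = lam*v."""
    s = 2.0 * np.sqrt(v * lam * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(x / lam) * erfc((x + v * t) / s))

x = np.linspace(0.0, 0.10, 6)      # positions along the column [m]
t = 0.5 * 0.10 / 1.0e-4            # half the nominal passage time L/v [s]
print(np.round(tracer_profile(x, t), 4))
```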
in the simplest casesuitable for tracer particles , the relevant kinetic equations read as follows : with as the auxiliary variable describing the average number of particles in a trap , as the number of traps per unit water volume , as the trapping rate , and as the release rate .the particular normalization of the coefficients is chosen to simplify the formulation of models with traps subject to saturation in sec .[ sec : full ] . to simulate dispersivity where all time scales are inversely proportional to propagation velocity, we must choose both and proportional to .the corresponding parameter in has a dimension of area and can be viewed as a trapping cross section .the length in the release rate can be viewed as a characteristic size of a stagnation region . on general groundswe expect with on the order of the grain size . to illustrate how shallow traps can provide for dispersivity in convection - only models , let us construct the exact solution of eq .( [ eq:1trap - convection ] ) .in fact , it is convenient to consider a slightly generalized model with the addition of straining , with zero initial conditions the laplace transformation gives for , the boundary value for laplace - transformed at the inlet is given by . with initially clean filter , , and a given free particle concentration at the inlet , the solution to the linear one - trap convection - only model with straining [ eq .( [ eq:1trap - straining ] ) ] is a convolution of the form presented in eq .( [ eq : tracer - convolution ] ) with the following gf : where is the clean - bed trapping rate , is the heaviside step - function , and is the modified bessel function of the first kind with the argument ^{1/2}.\ ] ] the singular term with the function in eq .( [ eq:1trap - straining - gf ] ) represents the particles at the leading edge which propagate freely with the maximum velocity without ever being trapped .the corresponding weight decreases exponentially with the distance from the origin . sufficiently far from both the origin and from the leading edge ,where the argument [ eq . ( [ eq:1trap - straining - argument ] ) ] of the bessel function is large , we can use the asymptotic form , ,\mbox{re}\,\zeta>0.\ ] ] subsequently , eq .( [ eq:1trap - straining - gf ] ) becomes where is the dimensionless retarded time in units of the release rate , and is the dimensionless distance from the origin in units of the trapping mean free path .the correspondence with the gf in eq .( [ eq : cde - straining - gf ] ) for the cde with linear straining [ or eq . ( [ eq : tracer - gf ] ) for the cde tracer model in the case of no permanent traps , can be recovered from eq .( [ eq:1trap - straining - gf - simplified ] ) by expanding the square roots in the exponent around its maximum at , or , with the effective velocity . specifically , suppressing the prefactor due to straining , [ in eq .( [ eq:1trap - straining - gf - simplified ] ) ] , we obtain for the asymptotic form of the exponent at large , with the effective dispersivity coefficient [ cf .( [ eq : tracer - gf ] ) ] the approximation is expected to be good as long as both and are large compared to the width of the bell - shaped maximum .the actual shapes of the corresponding gfs , eqs .( [ eq : tracer - gf ] ) and ( [ eq:1trap - straining - gf ] ) in the absence of permanent traps , , are compared in fig .[ fig : comp ] .while the shape differences are substantial at small , they disappear almost entirely at later times . 
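The explicit form of the trapping/release equations is partly lost in this extraction, but a Laplace-domain solution consistent with the verbal description (attachment rate a*v per trap, release rate b, trap density n per unit water volume, so that the clean-bed trapping rate is n*a*v) is c_hat(x,p) = c_hat(0,p) exp[-(x/v) p (1 + sum_i n_i a_i v/(p + b_i))]. The sketch below treats that form as an assumption and obtains the breakthrough curve for a constant inlet concentration by numerical inverse Laplace transformation; with several entries in the trap list it corresponds to the multi-trap generalization discussed next. All parameter values are arbitrary illustrations.

```python
# Sketch: breakthrough curve of the linear trap-convection model by numerical
# inverse Laplace transform.  The Laplace-domain form used here,
#   c_hat(L,p) = (c0/p) * exp(-(L/v) * p * (1 + sum_i n_i*a_i*v/(p + b_i))),
# is an assumed reading of the trapping/release scheme described in the text;
# accuracy is limited near the wavefront arrival t ~ L/v, where the exact
# solution has a jump from the never-trapped (delta-function) component.
import mpmath as mp

v, L, c0 = mp.mpf('1e-4'), mp.mpf('0.1'), mp.mpf('1.0')
traps = [  # (n_i*a_i [1/m], b_i [1/s]) : a "shallow" and a "deeper" trap species
    (mp.mpf('20.0'), mp.mpf('1e-2')),
    (mp.mpf('5.0'),  mp.mpf('1e-4')),
]

def c_hat(p):
    sigma = sum(na * v / (p + b) for na, b in traps)   # dimensionless retardation term
    return c0 / p * mp.exp(-(L / v) * p * (1 + sigma))

for t in [1500, 3000, 6000, 12000, 24000]:             # seconds (all > L/v = 1000 s)
    c = mp.invertlaplace(c_hat, t, method='talbot')
    print("t =", t, "s   c(L,t)/c0 =", mp.nstr(c / c0, 5))
```

The retarded arrival of the bulk of the colloids, at roughly (L/v)(1 + sum_i n_i a_i v/b_i), illustrates how each trap species reduces the apparent drift velocity, in line with the saddle-point discussion below.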
) ] with ( solid lines ) and the single - trap convection model [ eq . ( [ eq:1trap - convection ] ) ] ( dashed lines ) .specifically , we plot eq .( [ eq : tracer - gf ] ) and the regular part of eq .( [ eq:1trap - straining - gf ] ) with , using identical values of and and the release rate ( half the maximum value at these parameters ) at , , , , .once the maximum is sufficiently far from the origin , the two gfs are virtually identical ( see sec . [ sec:1trap - straining ] ) . ]even though the solutions of the single - trap model correspond to those of the cde [ eq . ( [ eq : tracer - cde ] ) ] , the model presented in eq . ( [ eq:1trap - straining ] ) is clearly too simple to accurately describe filtration under conditions where trapped particles can be subsequently released . at the very least ,in addition to straining and `` shallow '' traps that account for the dispersivity , describing the experiments requires another set of `` deeper '' traps with a smaller release rate. more generally , consider a linear model with types of traps differing by the rate coefficients , , the corresponding solution can be obtained in quadratures in terms of the laplace transformation . with the initial condition , and a given time - dependent concentration at the inlet , , the result for is a convolution of the form presented in eq .( [ eq : tracer - convolution ] ) with the gf given by the inverse laplace transformation formula , },\label{eq : ntrap - convection - gf}\ ] ] with the response function here we introduced the effective density of traps , corresponding to various release rates . the general structure of the concentration profile can be read off directly from eq .( [ eq : ntrap - convection - gf ] ) .it gives zero for , consistent with the fact that is the maximum propagation velocity in eq .( [ eq : ntrap - convection ] ) .the structure of the leading - edge singularity ( the amplitude of the function due to particles which never got trapped ) is determined by the large- asymptotics of the integrand in eq .( [ eq : ntrap - convection - gf ] ) .specifically , gf ( [ eq : ntrap - convection - gf ] ) can be written as where [ cf .( [ eq:1trap - straining - gf ] ) ] is the clean - bed trapping rate , and is the non - singular part of the gf . similarly , the structure of the diffusion - like peak of the gf away from both the origin and the leading edge is determined by the saddle point of the integrand in eq .( [ eq : ntrap - convection - gf ] ) at small .assuming the expansion and evaluating the resulting gaussian integral around the saddle point at we obtain the exponent near the maximum can be approximately rewritten in the form of that in eq .( [ eq : tracer - gf ] ) , with the effective dispersivity ^ 2}.\ ] ] for the case of one trap , , the expressions for the effective parameters clearly correspond to our earlier results of eqs .( [ eq:1trap - convection - gf - gaussian ] ) and ( [ eq:1trap - convection - deff ] ) .note that the precise structure of the exponent and the prefactor in eq .( [ eq : ntrap - convection - gf - max ] ) is different from those in eq .( [ eq:1trap - convection - gf - gaussian ] ) which was obtained by a more accurate calculation . the effective diffusion approximation [ eq .( [ eq : ntrap - convection - gf - max ] ) ] is accurate for large near the maximum as long as the integral in eq .( [ eq : ntrap - convection - gf ] ) remains dominated by the saddle - point in eq .( [ eq : ntrap - convection - saddle ] ) . 
in particular , the poles of response function ( [ eq : ntrap - convection - response ] ) must be far from .this is easily satisfied in the case of `` shallow '' traps with large release rates . on the other hand, this condition could be simply violated in the presence of `` deep '' traps with relatively small . over small time intervalscompared to the typical dwell time , these traps may work in the straining regime in which they would _ not _ contribute to the effective dispersivity .this situation may be manifested as an apparent time - dependence of the effective drift velocity and/or the dispersivity .the multitrap generalization given in eq .( [ eq : ntrap - convection ] ) for filtration is clearly a step in the right direction if we want an accurate description of the filtering experiments . indeed ,apart from the special case of a regular array of identical densely - packed spheres with highly polished surfaces , one expects the trapping sites ( e.g. , the contact points of neighboring grains ) to differ .for small particles such as viruses , even a relatively small variation in trapping energy could result in a wide range of release rates differing by many orders of magnitude . under such circumstances, it is appropriate to consider mean - field models with continuous trap distributions .here we only consider a special case of a continuous distribution of the trap parameters , and , such that the release - rate density in eq .( [ eq : ntrap - convection - response ] ) has an inverse - square - root singularity , , with the release rates ranging from infinity all the way to zero .the corresponding response function ( [ eq : ntrap - convection - response ] ) could be expressed as the inverse laplace transform [ eq .( [ eq : ntrap - convection - gf ] ) ] gives the following gf : note that , in accordance with eq .( [ eq : ntrap - convection - gf - leading ] ) , there is no leading - edge function near as the expression for the corresponding trapping rate diverges .because of the singular behavior of at , there is no saddle - point expansion of the form given in eq .( [ eq : ntrap - convection - saddle ] ) .thus , there is no gaussian representation analogous to eq .( [ eq : ntrap - convection - gf - max ] ) : at large , the maximum of the gf is located at , which is also of the order of the width of the gaussian maximum . the gf [ eq . ([ eq : inftytrap - half - gf ] ) ] for two representative values of is plotted in fig .[ fig : sqrt ] . ) ] for the model presented in eq .( [ eq : ntrap - convection ] ) with continuous distribution of trap parameters corresponding to inverse - square - root singularity in the response function [ see eqs .( [ eq : ntrap - convection - response ] ) and ( [ eq : infty - convection - response - half ] ) ) ] .dashed lines show the gf at , while solid lines present the same gf at multiplied by the factor of .we chose as indicated in the plot .unlike in fig .[ fig : comp ] , due to abundance of traps with long release time , the gfs do not asymptotically converge toward a gaussian form . 
] we also note that for large at any given , eq .( [ eq : inftytrap - half - gf ] ) has a power - law tail .this property is generic for continuous trap density distributions leading to small- power - law singularities in .for example , taking the density of the release rates as a power law in , where is the corresponding exponent , , we obtain , and the large- asymptotic of the gf at a fixed finite scales as such a power law is an essential feature of continuous distribution ( [ eq : inftytrap - dos ] ) of the detachments rates ; it can not be reproduced by a discrete set of rates which always produce an _ exponential _ tail .the considered linearized filtration model presented by eq .( [ eq : ntrap - convection ] ) can be used to analyze filtration of identical particles in small concentrations and over limited time interval as long as the trapped particles do not affect the filter performance . however , unless the model is used to simulate tracer particle dynamics in which no actual trapping occurs , it is unlikely that the model remains valid as the number of trapped particles grows .indeed , one expects that a trapped particle changes substantially the probability for subsequent particles to be trapped in its vicinity . under _favorable _ filtering conditions characterized by filter ripening , the probability of subsequent particle trapping _ increases _ with time as the number of trapped particles grows . on the other hand , under _ unfavorable _ filtering conditions , where the debye screening length is large compared to the trap size , for charged particles one expects trapping probabilities to _ decrease _ with .if repulsive force between particles is large , we can assume that only one particle is allowed to be captured in each trap .subsequently , a single trap can be characterized by an attachment rate when it is empty and a detachment rate when it is occupied , and the mean - field trapping / release dynamics for a given group of trapping sites can be written as note that this equation is non - linear because it contains the product of .previously , similar filtering dynamics was considered in a number of publications ( see refs . and and references therein ) . in the present work ,we allow for a possibility of groups of traps differing by the rate parameters and .the distribution of rate parameters can also be viewed as an analytical alternative of the computer - based models describing a network of pores of varying diameter .our mean - field transport model is completed by adding the kinetic equation for the motion of free particles with concentration , which has the same form as the linearized equations [ eq .( [ eq : ntrap - convection ] ) ] considered in sec .[ subsec : continuous ] .we note that for shallow traps with large release rates , the non - linearity inherent in eq .( [ eq : nonlin - trap ] ) is not important for sufficiently small suspended particle concentrations . 
indeed , if is independent of time , the solution of eq .( [ eq : nonlin - trap ] ) saturates at for small free - particle concentration , or for any and large enough , the trap population is small compared to 1 , and the non - linear term in eq .( [ eq : nonlin - trap ] ) can be ignored .therefore , as discussed in relation with the linearized multitrap model [ see sec .[ sec : linearized]a and eq .( [ eq : ntrap - convection ] ) ] , the effect of shallow traps is to introduce dispersivity of the arrival times of the particles on different trajectories .for this reason , we are free to drop the dispersivity term [ cf . the cde model , eq .( [ eq : convection - diffusion ] ) ] , and use a simpler convection - only model ( [ eq : nonlin - convection ] ) with several groups of traps with density per unit water volume , characterized by the relaxation parameters and . the constructed non - linear equations [ eqs .( [ eq : nonlin - trap ] ) and ( [ eq : nonlin - convection ] ) ] describe complicated dynamics which is difficult to understand in general . here, we introduce the front velocity , a parameter that characterizes the speed of deterioration of the filtering capacity .consider a semi - infinite filter , with the filtering medium initially clean , and the concentration of suspended particles at the inlet constant .after some time , the concentration of deposited particles near the inlet reaches the dynamical equilibrium [ eq . ( [ eq : nonlin - equilibrium ] ) ] and , on average , the particles will no longer be deposited there . at a given inlet concentration , the filtering medium near the inlet is saturated with deposited particles . on the other hand , sufficiently far from the inlet , the filter is still clean . on general grounds , there should be some crossover between these two regions .the size of the saturated region grows with time [ see fig .[ fig : ff ] ] .the corresponding front velocity can be easily calculated from the particle balance equation , this equation balances the number of additional particles needed to increase the saturated region by on the left , with the number of particles brought from the inlet on the right [ see fig . [fig : ff ] ] . the same equation can also be derived if we set , and integrate eq .( [ eq : nonlin - convection ] ) over the entire crossover region .the trapped particle density saturates as given by eq .( [ eq : nonlin - equilibrium ] ) , and the resulting front velocity is this is a monotonously increasing function of : larger inlet concentration leads to higher front velocity , which implies that the filtering front is stable with respect to perturbations .indeed , in appendix we show that the velocity of a secondary filtering front with the inlet concentration ( see fig . [fig : twofront ] ) , moving on the background of equilibrium concentration of free particles , is higher than , i.e. , .thus , if for some reason the original filtering front is split into two parts , moving with the velocities and , the secondary front will eventually catch up , restoring the overall front shape .; the additional free and trapped particles in the shaded region are brought from the inlet [ see eq .( [ eq : front - balance ] ) ] .( [ eq : front - c ] ) for exact front shape . ]with two filtering fronts .the initial front moves on the background of clean filter and leaves behind the equilibrium filtering medium with . 
the secondary front with higher inlet concentration moving on partially saturated medium .with nonlinearity as in eq .( [ eq : nonlin - trap ] ) , the secondary front is always faster , ; the two fronts will eventually coalesce into a single front . ]we emphasize that the existence of the stable filtering front is in sharp contrast with the linearized filtering problem [ see eq .( [ eq : ntrap - convection ] ) ] , where the propagation velocity [ eq . ( [ eq : ntrap - convection - saddle ] ) ] is independent of the inlet concentration , and any structure is eventually washed out dispersively ( the width of long - time gf does not saturate with time ) . also , in the case of the filter ripening , the nonlinear term in eq .( [ eq : nonlin - trap ] ) will be negative and thus would prohibit the filtering front solutions due to the fact that the secondary fronts move slower , . the non - linear problem with saturationis thus somewhat analogous to korteweg - de vries solitons where the dispersion and nonlinearity compete to stabilize the profile .compared to the linear case presented in sec .[ sec : linearized ] , the physics behind the non - linear equations [ eqs .( [ eq : nonlin - trap ] ) and ( [ eq : nonlin - convection ] ) ] is much more complicated .however , the structure of these equations immediately indicates that non - linearity reduces filtering capacity because trapping sites could saturate in this model [ see eq .( [ eq : nonlin - equilibrium ] ) ] .while the relevant equations can also be solved numerically , a thorough understanding of the filtering system , especially with large or infinite number of traps , is difficult to achieve . to gain some insight about the role of the different parameters in the filtering process, we specifically focus on the non - linear models presented by eqs .( [ eq : nonlin - trap ] ) and ( [ eq : nonlin - convection ] ) which can be rendered into a linear set of equations , very similar to the linear multitrap model [ eq . ( [ eq : ntrap - convection ] ) ] . to this end, we consider the case where all trapping sites have the same trapping cross sections , that is , all in eq .( [ eq : nonlin - trap ] ) . if we introduce the time integral then eq .( [ eq : nonlin - trap ] ) after a multiplication by can be written as clearly , these are a set of linear equations , with the following variables note that eq .( [ eq : nonlin - convection ] ) can also be written as a set of linear equations in terms of these variables . if we integrate eq .( [ eq : nonlin - convection ] ) over time , we find where we assumed initially clean filter , . considering that and , we obtain the main difference of the linear eqs . ( [ eq : exact - trap - linearized ] ) and ( [ eq : exact - convection - linearized ] ) from eqs .( [ eq : ntrap - convection ] ) is in their initial and boundary conditions , note that with the time - independent concentration of the particles in suspension at the inlet , i.e. , , boundary condition ( [ eq : exact - boundary - condition ] ) gives a growing exponent , the derived equations can be solved with the use of the laplace transformation . denoting and eliminating the laplace - transformed trap populations , we obtain +v \tilde w'=0,\quad \sigma(p)\equiv a\sum_i { n_i\over p+b_i}.\ ] ]the response function is identical to that in eq .( [ eq : ntrap - convection - response ] ) , and for the case of continuous trap distribution we can also introduce the effective density of traps , . 
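As an aside before solving these linear equations exactly, the filtering front and its velocity can be checked by brute-force integration of the original non-linear model. The sketch below does this for two trap groups with equal attachment rates and compares the measured speed of the half-height point of c(x,t) with the particle-balance prediction v_f = v c0 / (c0 + Σ_i n_i N_i^eq). The explicit equations, the Langmuir-type equilibrium, and every parameter value are illustrative assumptions consistent with the description above, not the paper's own expressions.

```python
import numpy as np

# Assumed explicit form of the convection-only filtration model with saturation:
#   dN_i/dt = a * c * (1 - N_i) - b_i * N_i                 [cf. eq. (nonlin-trap)]
#   dc/dt + v * dc/dx = - sum_i n_i * dN_i/dt               [cf. eq. (nonlin-convection)]
v, c0, a = 1.0, 0.5, 1.0                 # flow velocity, inlet concentration, attachment rate
b = np.array([0.2, 0.05])                # detachment rates of the two trap groups
n = np.array([2.0, 5.0])                 # trap densities per unit water volume

dx, dt = 0.05, 0.03                      # CFL number v*dt/dx = 0.6
x = np.arange(0.0, 40.0, dx)
c = np.zeros_like(x)                     # initially clean filter
N = np.zeros((len(b), len(x)))           # trap occupations

def front_position(c):
    above = np.nonzero(c > 0.5 * c0)[0]  # rightmost point still above half of the inlet value
    return x[above[-1]] if above.size else 0.0

samples = {}
for it in range(1, 10001):
    dN = dt * (a * c * (1.0 - N) - b[:, None] * N)   # trap kinetics (explicit Euler)
    c[1:] -= v * dt / dx * (c[1:] - c[:-1])          # first-order upwind advection
    c -= (n[:, None] * dN).sum(axis=0)               # free particles lost to the traps
    c[0] = c0                                        # constant inlet concentration
    N += dN
    if it in (4000, 10000):
        samples[it * dt] = front_position(c)

(t1, x1), (t2, x2) = samples.items()
N_eq = a * c0 / (a * c0 + b)                         # saturated occupations behind the front
print("measured front speed :", round((x2 - x1) / (t2 - t1), 3))
print("balance prediction   :", round(v * c0 / (c0 + (n * N_eq).sum()), 3))
```

Once the quasi-stationary front has formed, the half-height marker advances at the balance velocity, in agreement with the stability argument given above.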
the solution of eq .( [ eq : exact - laplace - eqn ] ) and the laplace - transformed boundary condition [ eq . ( [ eq : exact - boundary - condition ] ) ] becomes e^{-[1+\sigma(p ) ] px / v } , \ ] ] where .employing the same notation as in eq .( [ eq : ntrap - convection - gf ] ) , the real - time solution of eqs .( [ eq : exact - trap - linearized ] ) and ( [ eq : exact - convection - linearized ] ) with the boundary conditions [ eqs . ( [ eq : exact - initial - condition ] ) and ( [ eq : exact - boundary - condition ] ) ] can be written in quadratures , \,g ( x , t').\ ] ] the time - dependent concentration can be restored from here with the help of logarithmic derivative , in the special case , the integrated concentration [ eq . ( [ eq : exact - u ] ) ] is linear in time at the inlet , , and grows exponentially [ see eq .( [ eq : exact - boundary - exponent ] ) ] .this exponent determines the main contribution to the integral in eq .( [ eq : exact - formal - solution ] ) for large and .indeed , in this case we can rewrite eq .( [ eq : exact - formal - solution ] ) exactly as , where note that is proportional to the solution of the linearized equations [ eq .( [ eq : ntrap - convection ] ) ] with time - independent inlet concentration [ see eq .( [ eq : tracer - convolution ] ) ] .the corresponding front is moving with the velocity [ eq . ( [ eq : ntrap - convection - saddle ] ) ] andis widening over time [ eqs .( [ eq : ntrap - convection - gf - max ] ) and ( [ eq : ntrap - eff - params ] ) ] .thus , for positive and sufficiently large , this contribution to is small and can be ignored . in the opposite limit of large negative , , which exactly cancels the first term in eq .( [ eq : exact - formal - solution ] ) . on the other hand , the term grows exponentially large with time . at large enough , the integration limit can be extended to infinity , and the integration in eq .( [ eq : exact - two ] ) becomes a laplace transformation , thus p_0x /v},\quadp_0\equiv ac_0.\qquad \label{eq : front - w}\end{aligned}\ ] ] this results in the following free - particle concentration [ see eq .( [ eq : exact - derivative ] ) ] , }+1},\ ] ] and the occupation of the trap [ eqs .( [ eq : exact - trap - linearized ] ) and ( [ eq : exact - substitution ] ) ] , with the front velocity note that this coincides exactly with the general case presented in eq .( [ eq : front - velocity - a ] ) if we set all .the approximation in eq .( [ eq : front - w ] ) is valid in the vicinity of the front , , as long as is positive and large .since , this implies \gg { 1\over a c_0},\ ] ] which provides an estimate of the distance from the outlet where the front structure [ eqs .( [ eq : front - c ] ) and ( [ eq : front - ni ] ) ] is formed .the exactness of the obtained asymptotic front structure can be verified directly by substituting the obtained profiles in eqs .( [ eq : nonlin - trap ] ) and ( [ eq : nonlin - convection ] ) . the exact expressions in eqs .( [ eq : exact - formal - solution ] ) and ( [ eq : exact - derivative ] ) for the free - particle concentration can be integrated completely in some special cases . herewe list two such results and demonstrate the presence of striking similarities in the profiles between different models , despite their very different rate distributions .furthermore , we show that the corresponding exact solutions [ eq . ( [ eq : exact - derivative ] ) ] converge rapidly toward the general filtering front [ eq .( [ eq : front - c ] ) ] .* single - trap model with straining . 
* in sec .[ sec:1trap - straining ] , we found the explicit expression [ eq .( [ eq:1trap - straining - gf ] ) ] for the gf in the case of the linear model for two types of trapping sites with rates and and permanent sites with the capture rate .the resulting gf ( with and ) can be used in eq .( [ eq : exact - formal - solution ] ) to construct the solution for the corresponding model with saturation , let us consider the special case of the inlet concentration , , constant over the interval , and zero afterwards .the function [ see eq .( [ eq : exact - boundary - condition ] ) ] is , then , \label{eq : exact - boundary - exponent - t}\ ] ] and the integration in eq .( [ eq : exact - formal - solution ] ) gives , } & & \\\lefteqn { w(t)\equiv \theta(t-\xi ) \bigl\{\left[e^{ac_0(t-\xi)}-1\right ] } & & \nonumber\\ & & \!\!\ ! + \int^t_\xi\!\!d\tau\ , e^{-b_1(\tau-\xi)}\left[e^{ac_0(t-\tau ) } -1\right]\frac{d}{d\tau}i_0(\zeta_\tau)\bigr\ } , \qquad \;\end{aligned}\ ] ] where and is given in eq .( [ eq:1trap - straining - argument ] ) .the concentration of free particles , , can be now obtained through eq .( [ eq : exact - derivative ] ) .the step function included in indicates that it takes at least for a particle to travel a distance .figure [ fig : ots ] illustrates as a function of distance , , at a set of discrete values of time , , , .the model parameters as indicated in the caption were obtained by fitting the response function at the interval to that of the model with the continuous trap distribution ( see fig . [fig : sqn ] ) .the solid lines show the curves for , while the dashed lines correspond to ; they have a drop in the concentration near the origin consistent with the boundary condition at the inlet .the exact profiles show excellent convergence toward the corresponding front profiles computed using eq .( [ eq : front - c ] ) ( symbols ) . ) ] .lines show the free - particle concentration extracted from eq .( [ eq : nonlin-1trap - straining ] ) with , , , , and , for , , , .symbols show the front solution [ eq .( [ eq : front - c ] ) ] for with the front velocity [ eq . ( [ eq : front - velocity - c0 ] ) ] . ] * model with square - root singularity . *let us now consider the non - linear model , [ eqs .( [ eq : nonlin - convection ] ) and ( [ eq : nonlin - trap ] ) ] with the inverse - square - root continuous trap distribution , producing the response function given in eq .( [ eq : infty - convection - response - half ] ) .the model is exactly solvable if we set all , while allowing the trap densities vary with appropriately .the solution for the auxiliary function corresponding to the inlet concentration constant on an interval of duration is obtained by combining eqs .( [ eq : exact - formal - solution ] ) and ( [ eq : exact - boundary - exponent - t ] ) , with the relevant gf [ eq . ( [ eq : inftytrap - half - gf ] ) ] .the resulting -dependent curves at a set of discrete time values are shown in fig .( [ fig : sqn ] ) , along with the corresponding asymptotic front profiles ( symbols ) , for a parameter set as indicated in the caption .the solid lines show the curves for .the dashed lines are for ; they display a drop of the concentration near the origin consistent with the boundary condition at the inlet .again , the time - dependent profiles show gradual convergence toward front solution ( [ eq : front - c ] ) . 
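In practice one rarely works with the full g(x,t); what is measured is a breakthrough curve at the end of a column, and eq. ([eq:front-c]) implies it should look like a logistic step in time whose midpoint and steepness encode the front velocity and the inverse front width. The sketch below fits such a profile to a synthetic noisy breakthrough curve; the logistic form c(t) = c0 / (1 + exp[−s(t − t0)]) and all numbers are stand-ins for the exact expression and for real data, used only to illustrate the extraction of v_f and the width.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def breakthrough(t, t0, s, c0):
    """Logistic front profile in time at a fixed column depth (stand-in for eq. front-c)."""
    return c0 / (1.0 + np.exp(-s * (t - t0)))

# Synthetic "measured" breakthrough curve at column length x_col (all values illustrative).
x_col, v_front_true, width_true, c0_true = 20.0, 0.08, 2.5, 0.5
t = np.linspace(0.0, 500.0, 400)
s_true = v_front_true / width_true                  # steepness ~ front velocity / front width
data = breakthrough(t, x_col / v_front_true, s_true, c0_true)
data += 0.01 * rng.normal(size=t.size)              # measurement noise

# Fit, then translate the parameters back into a front velocity and a front width.
popt, _ = curve_fit(breakthrough, t, data, p0=(220.0, 0.05, 0.4))
t0_fit, s_fit, c0_fit = popt
v_front_fit = x_col / t0_fit
width_fit = v_front_fit / s_fit
print(f"front velocity: fitted {v_front_fit:.4f} vs true {v_front_true:.4f}")
print(f"front width   : fitted {width_fit:.3f} vs true {width_true:.3f}")
```

Repeating the fit for several inlet concentrations yields the data set v_f(c0) that enters the parameter-recovery procedure suggested in sec. [sec:experiment].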
but for filtering model ( [ eq : nonlin - convection ] ) , eq .( [ eq : nonlin - trap ] ) with continuous inverse - square - root trap distribution [ eq .( [ eq : infty - convection - response - half ] ) ] .parameters are , .symbols show the front solution [ eq .( [ eq : front - c ] ) ] for with front velocity ( [ eq : front - velocity - c0 ] ) .the raising parts of the curves are almost identical with those in fig .[ fig : ots ] , while there are some quantitative differences in the tails , consistent with the exponential vs power - law long - time asymptotics of the corresponding solutions . ]note that the profiles in figs .[ fig : ots ] and [ fig : sqn ] are very similar even though the corresponding trap distributions differ dramatically .this illustrates that parameter fitting from a limited set of breakthrough curves is a problem ill - defined mathematically .the complexity and ambiguity of the problem grow with increasing number of traps . in sec .[ sec : experiment ] we suggest an alternative computationally simple procedure for parameter fitting using the data from several breakthrough curves differing by the input concentrations .the suggested class of mean - field models is characterized by a large number of parameters . in the discrete case , these are the trap rate constants , and the corresponding concentrations along with the flow velocity . in the continuous case ,the filtering medium is characterized by the response function [ see eq .( [ eq : ntrap - convection - response ] ) ] . in our experience ,two or three sets of traps are usually sufficient to produce an excellent fit for a typical experimental breakthrough curve ( not shown ) .this is not surprising , given the number of adjustable parameters .on the other hand , from eq .( [ eq : front - velocity - c0 ] ) it is also clear that the obtained parameters would likely prove inadequate if we change the inlet concentration .the long - time asymptotic form of the effluent during the washout stage would also likely be off .one alternative to a direct non - linear fitting is to use our result given in eq .( [ eq : front - velocity - c0 ] ) [ or eq . ( [ eq : front - velocity - a ] ) ] for the filtering front velocity as a function of the inlet concentration , . with a relatively mild assumption that all trapping rates coincide , , one obtains the entire shape of the filtering front [ eq .( [ eq : front - c ] ) ] .thus , fitting the front profiles at different inlet concentrations to determine the parameter and the front velocity can be used to directly measure the response function .the suggested experimental procedure can be summarized as follows .( * i * ) one should use as long filtering columns as practically possible in order to achieve the front formation for a wider range of inlet concentrations .( * ii * ) a set of breakthrough curves for several concentrations at the inlet should be taken .( * iii * ) for each curve , the front formation and the applicability of the simplified model with all should be verified by fitting with the front profile [ eq . ( [ eq : front - c ] ) ] . 
given the column length, each fit would result in the front velocity , as well as the inverse front width .( * iv * ) the resulting data points should be used to recover the functional form of and the solution for the full model .it is important to emphasize that the applicability of the model can be controlled at essentially every step .first , the time - dependence of each curve should fit well with eq .( [ eq : front - c ] ) .second , the values of the trapping rate obtained from different curves should be close .third , the computed washout curves should be compared with the experimentally obtained breakthrough curves .the obtained parameters , especially the details of for small , can be further verified by repeating the experiments on a shorter filtering column with the same medium .in this paper , we presented a mean - field model to investigate the transport of colloids in porous media .the model corresponds to the filtration under unfavorable conditions , where trapped particles tend to reduce the filtering capacity , and can also be released back to the flow .the situation should be contrasted with favorable filtering conditions characterized by filter ripening .these two different regimes can be achieved , e.g. , by changing of the media if the colloids are charged .the unfavorable filtering conditions are typical for filtering encountered in natural environment , e.g. , ground water with biologically active colloids such as viruses or bacteria .the advantages of the model are twofold .it not only fixes some technical problems inherent in the mean - field models based on the cde but also admits analytical solutions with many groups of traps or even with a continuous distribution of detachment rates .it is the existence of such analytical solutions that allowed us to formulate a well - defined procedure for fitting the coefficients .ultimately , this improves predictive capability and accuracy of the model .the need for the attachment and detachment rate distributions under unfavorable filtering conditions has already been recognized in the field .previously it has been implemented in computer - based models in terms of distributions of the pore radii .such models could result in good fits to the experimental breakthrough curves .however , we showed in sec.[sec : experiment ] that the relevant experimental curves are often insensitive to the details of the trap parameter distributions , especially on the early stages of filtering .on the other hand , our analysis of the filtering front reveals that the front velocity as a function of the inlet colloid concentration , [ eq . 
( [ eq : front - velocity - a ] ) ] , is _ primarily _ determined by the distribution of the attachment and detachment rates characterizing the filtering medium .we , indeed , suggest that the filtering front velocity is one of the most important characteristics of the deep - bed filtration as it is directly related to the loss of filtering capacity .we have developed a detailed protocol to calculate the model parameters based on the experimentally determined front velocity , .we emphasize that the most notable feature of the model is its ability to distinguish between permanent traps ( straining ) and the traps with small but finite detachment rate .it is the latter traps that determine the long - time asymptotics of the washout curves .the suggested model is applicable to a wide range of problems in which macromolecules , stable emulsion drops , or pathogenic micro - organisms such as bacteria and viruses are transported in flow through a porous medium . while the model is purely phenomenological in nature , the mapping of the parameters with the experimental data as a function of flow velocity and colloid size will shed light on the nature of trapping for particular colloids .the model can also be extended to account for variations in attachment and detachment rates for various colloids as needed to explain the steep deposition profiles near the inlet of filters .this research was supported in part under nsf grants no .dmr-06 - 45668 ( r.z . ) , no .phy05 - 51164 ( r.z . ) , and no .0622242 ( l.p.p . )here we derive an inequality for the velocity of an intermediate front interpolating between free - particle concentrations and [ fig . [fig : twofront ] ] .we first write the expressions for the filtering front velocities in clean filter , with the inlet concentrations and [ cf .( [ eq : front - balance ] ) ] , the velocity of the filtering front interpolating between and [ fig .[ fig : twofront ] ] is given by .\ ] ] combining these equations , we obtain from here we conclude that the left - hand side ( lhs ) of eq .( [ eq : intermediate - front - v ] ) is positive . solving for and expressing the difference , we have for the model with saturation [ eq . ( [ eq : nonlin - trap ] ) ], we saw that , thus .
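A quick numerical cross-check of this ordering: the snippet below evaluates the balance velocities of the primary and secondary fronts for randomly drawn trap parameters, using the same assumed Langmuir-type form of eq. ([eq:nonlin-equilibrium]) as in the earlier sketches. In every draw the secondary front comes out faster, v_1 < v_12, consistent with the inequality established above.

```python
import numpy as np

rng = np.random.default_rng(1)

def n_eq(c, a, b):
    """Assumed Langmuir-type equilibrium occupation (stand-in for eq. nonlin-equilibrium)."""
    return a * c / (a * c + b)

def front_velocity(c_hi, c_lo, v, n, a, b):
    """Balance velocity of a front connecting inlet concentration c_hi to a background at c_lo."""
    dc = c_hi - c_lo
    dN = (n * (n_eq(c_hi, a, b) - n_eq(c_lo, a, b))).sum()
    return v * dc / (dc + dN)

v = 1.0
for _ in range(1000):
    k = rng.integers(1, 5)                       # number of trap groups
    n = rng.uniform(0.1, 5.0, k)                 # trap densities
    a = rng.uniform(0.1, 2.0, k)                 # attachment rates
    b = rng.uniform(0.01, 1.0, k)                # detachment rates
    c1, c2 = np.sort(rng.uniform(0.05, 2.0, 2))  # c1 < c2
    v1  = front_velocity(c1, 0.0, v, n, a, b)    # primary front moving into a clean filter
    v12 = front_velocity(c2, c1, v, n, a, b)     # secondary front moving into the saturated region
    assert v1 < v12, "saturating kinetics should always give a faster secondary front"
print("v1 < v12 held in all random draws (stable front)")
```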
|
we study the transport and deposition dynamics of colloids in saturated porous media under unfavorable filtering conditions . as an alternative to traditional convection - diffusion or more detailed numerical models , we consider a mean - field description in which the attachment and detachment processes are characterized by an entire spectrum of rate constants , ranging from shallow traps which mostly account for hydrodynamic dispersivity , all the way to the permanent traps associated with physical straining . the model has an analytical solution which allows analysis of its properties including the long time asymptotic behavior and the profile of the deposition curves . furthermore , the model gives rise to a filtering front whose structure , stability and propagation velocity are examined . based on these results , we propose an experimental protocol to determine the parameters of the model .
|
the self - avoiding walk ( saw ) is the prototypical lattice model for polymer behavior .it is defined as the uniform distribution over the set of all fixed - length nearest - neighbor walks on some lattice , such that no site is visited more than once .self - avoidance reflects what are known as _ excluded volume _ effects in polymer science .the universal aspects of the saw have been the subject of study for decades in the physical and mathematical literature .while some of its properties are well understood and rigorously established ( a thorough account can be found in ) , it still poses some difficult ( and some seemingly impossible ) problems .mathematical and theoretical advances in the study of the saw have always been paralleled by constant efforts for devising new algorithms and numerical strategies , and since it is one of the simplest non - trivial models it can serve as a test ground for novel algorithms in polymer science .a connection has been studied in two dimensions both numerically and analytically between the critical saw ( in the limit where the number of steps goes to infinity ) and a continuum model , called schramm - loewner evolution ( sle ) .sle is a one - parameter family of stochastic processes in the complex plane producing random curves ( traces ) with conformal invariance `` built in '' .it has been conjectured that when the parameter is equal to the scaling limit of self - avoiding walks is obtained ( this is the case we will be focusing on in this paper ) . later, numerical evidence has been given in favor of this correspondence , both in the half - plane and in the whole - plane geometries .what we propose here is an algorithm for sampling self - avoiding paths in the plane , based on a discretized version of sle .essentially , discrete paths are built by iterative composition of rotations together with one simple conformal map that takes a small circle and pulls a slit out of it .thanks to the way sle works , it is possible to efficiently produce _ independent samples _ , since the algorithm is based on simple brownian motion , which is very easy to sample .self - avoiding walks on the lattice have a natural parametrization , which corresponds to counting the number of steps along the walk .as long as one considers observables that depend only on the support of the walk , the correspondence with sle curves is well - understood .but most of the quantities of interest in polymer physics do depend on the labeling of points along the chain and can not be matched with their sle analogues , since sle curves come with their own uncorrelated parametrization .actually , the problem of finding a sensible definition of _ natural parametrization _ for sle curves is still debated in the mathematical literature . 
from a numerical point of view, one needs an affordable way of generating sle samples with the parametrization corresponding to the proper time of lattice models .one such method was introduced and studied by kennedy and will be briefly reviewed in section [ section : thechoiceofparametrization ] .we hereby introduce a new method , based on the observation that the saw even when both the number of steps goes to infinity and the lattice spacing goes to zero is such that the euclidean distance between two consecutive points on the chain is constant throughout the chain itself .we require the same property for the sle discrete trace , trying to attain an approximately constant _ step length _ discrete sle chains are constructed by iterative composition of conformal maps , each one being responsible for producing a step .the method we propose for keeping an approximately constant step length does so by rescaling each step according to the jacobian of the conformal map that acts on the corresponding segment .we focus on whole - plane sle , so that our algorithm explicitly concerns the saw in the plane , but the same reasoning and methods could be easily translated to other geometries , such as the chordal one . for instance , minimal modification is needed in order to treat ensembles of self - avoiding walks with fixed endpoints lying on the boundary of the domain .section [ section : discretewholeplanesle ] is a brief introduction to the discrete process that we refer to as _ discrete whole - plane sle _ , which is the central object of interest lying at the heart of the algorithm ; section [ section : thechoiceofparametrization ] introduces the issues about the choice of parametrization ; section [ section : thenumericalstrategy ] describes the numerical strategy used to reproduce the natural parametrization of lattice models in the framework of discrete sle ; numerical results for the saw are presented in section [ section : results ] : we measure the _ asphericity _ of an inner portion of saw which is a highly parametrization - dependent quantity and we discuss the first correction - to - scaling exponent .we are not going to describe sle here ( the interested reader can find all the details in many excellent reviews , such as ) .the aim of this section is to present a discrete process approximating radial sle growing to infinity , i.e. a measure on curves with one end - point in and the other at , living on the complex plane minus the unit disc , .a more in - depth presentation can be found in .an analogous discrete process in the chordal geometry was introduced in , where its convergence to sle was also studied .let us consider an ordered set of points we will call such a set _ trace _ or _chain_. we are going to describe a stochastic process whose outcomes are such traces .let and be two sequences of real numbers with .consider the maps these are conformal maps of onto , whose action can be described as growing a _ slit _ inside along the real axis , of length is the inverse map of the solution at time to the _ loewner equation _ in the disc in the special case where the _ driving function _ is a constant .notice that is well - defined also on the boundary of . by complex inversionwe can then define a family of maps growing slits in the complement of in : the conformal maps onto minus a slit on the real line , whose length is controlled by for the relation between and the length of the slit . 
] .let us construct a trace by intertwining such maps with rotations of the complex plane .we will consider the images of the point under the chain of maps obtained by alternately composing rotations and slit mappings , as follows : where are rotations whose angles are the parameters .let us call the -th composed map , for use in the next sections : so that . in words ,we traverse the sequences and backwards from to and for each we compose a slit mapping of parameter with a rotation of angle .refer to figure [ figure : composition ] . ,the second one ( clockwise ) to the rotation , the third one to , the fourth one to and so on .the last black arrow corresponds to the last rotation .the dashed blue arrow is the complete map .blue crosses in the last picture identify the points that constitute the trace . ] at first a single slit is grown , and the point gets mapped onto the real axis , some distance away from the disc , namely at . then the universe is rotated and another slit is grown by application of , so that the base of the previous slit which still lies on the unit circle after the rotation will be sent somewhere on the new slit . ] .notice that in general the shape of the old slit gets distorted because of the action of .the process goes on until the first map is reached ; at that time , a chain of points has been produced .notice that , since the composition in ( [ eq : tip ] ) goes backwards from to , adding a step on the tip of the trace without changing the rest of the trace itself means inserting at the rightmost place in ( [ eq : tip ] ) and then recomputing the whole chain to denote an index running from to ( labeling the points on the chain ) and letter to denote an index running from to ( labeling the incremental maps that build up the -th composed map ) , but this is not a strict distinction , since they actually label the same sequences . ] .the trace obtained of course depends on the two sequences and , so that a measure on the latter induces a measure on the former .we will draw and in such a way that their relation be that of space versus time for the one - dimensional rescaled brownian motion .the easiest way of doing so is to draw the s as bernoulli variables in the set where is a positive real number and we shall do so in the following .we still have the freedom to choose the time steps " .a careful choice of the latter is what allows us to reproduce the parametrization of the lattice models .the discrete process we have introduced here is expected to correspond to whole - plane sle with parameter in the appropriate limit ( essentially , ) , and in particular to the saw when .notice that the presence of the unit disc as a forbidden region is expected to become irrelevant if one looks at the trace sufficiently far away from the origin , so it is not surprising that the lattice counterpart is _ full - plane _ saw .the parameters defined in the previous section are the time steps of the discretization .they represent the time at which the loewner evolution with constant driving function is to be evaluated in order to produce the slit maps defined in ( [ eq : slitmap ] ) , which are the building blocks of the discrete evolution .if one is interested in reproducing the actual sle , where time is indeed a continuous variable , one will want to have these parameters scale to zero . the choice of _ how _ they do so entails a choice of parametrization on the resulting object . 
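To make the construction concrete, here is a minimal sketch that builds a discrete trace by composing rotations with an explicit slit map, evaluating the composition on the boundary point 1 in the backward order of eq. ([eq:tip]). The slit map used below is a stand-in: a Joukowski-based map that opens a radial slit of prescribed length out of the unit circle, which has the right qualitative action but not the specific Loewner normalization of eq. ([eq:slitmap]). The fixed slit length, the Bernoulli rotation step, and the convention of applying the rotation after the slit within each increment are likewise illustrative choices, not the paper's calibrated relation between the parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def slit_map(z, L):
    """Conformal map of {|z|>1} onto {|z|>1} minus the radial slit (1, 1+L].

    Stand-in for the paper's slit mapping: Joukowski transform, an affine stretch
    of the cut [-1,1] to [-1, J(1+L)], then the exterior inverse Joukowski.
    It fixes infinity but is not normalized like the Loewner slit map.
    """
    b = 0.5 * ((1.0 + L) + 1.0 / (1.0 + L))      # image of the slit tip under J
    u = 0.5 * (z + 1.0 / z)                      # J: exterior of the disc -> C \ [-1,1]
    w = 0.5 * (b + 1.0) * u + 0.5 * (b - 1.0)    # stretch the cut [-1,1] onto [-1, b]
    s = np.sqrt(w - 1.0) * np.sqrt(w + 1.0)      # branch of sqrt(w^2-1) cut along [-1,1]
    return w + s                                 # exterior branch of the inverse Joukowski

assert abs(slit_map(1.0 + 0j, 0.1) - 1.1) < 1e-12    # the base point 1 is sent to the slit tip

def trace(n_steps, L=0.03, theta_step=0.12):
    """k-th point = composition of the k innermost increments (rotation o slit) applied to 1."""
    thetas = theta_step * rng.choice([-1.0, 1.0], size=n_steps)   # Bernoulli driving increments
    points = []
    for k in range(1, n_steps + 1):
        z = 1.0 + 0j
        for j in range(k, 0, -1):                # newest map is innermost, as in eq. (tip)
            z = np.exp(1j * thetas[j - 1]) * slit_map(z, L)
        points.append(z)
    return np.array(points)

pts = trace(200)
print("all points lie outside the unit disc:", bool(np.all(np.abs(pts) > 1.0)))
steps = np.abs(np.diff(pts))
print("mean step length, first 10 vs last 10:", float(steps[:10].mean()), float(steps[-10:].mean()))
```

With the slit parameter held fixed, the distance between consecutive points typically grows along the chain, which is the behaviour of the capacity-like parametrization discussed next; compensating for it is exactly the reparametrization problem addressed in the remainder of the section.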
for instance , taking a constant and then sending to zero yields sle parametrized _ by capacity _ , which means that the curves have linearly increasing capacity in the expansion around of . ] .different definitions can give rise to different parametrizations .this is not an issue when one is interested only in parametrization - independent features of the curves , such as the fractal dimension , the multifractal spectrum , the distance of the curve from some given point , or the probability of passing on the left or right of an obstacle , to name a few .but it becomes crucial when one focuses on parametrization - dependent observables of the lattice models .such are for instance the distribution of a given point inside a saw , the gyration tensor , properties related to the detailed shape of the walks and the universal quantities that describe the approach of an -step chain to , such as the correction - to - scaling exponents ( see section [ section : results ] ) .our goal is to find a choice of capable of reproducing the natural parametrization of the lattice walk models .the first thing one notices when producing discrete chains with the algorithm described in section [ section : discretewholeplanesle ] is that the parametrization by capacity yields points that get further and further away from each other as the process goes on . the _ step size _ diverges when .one can then choose to scale the time steps in such a way as to compensate for this .it turns out ( see for more details ) that choosing at least definitively in , cancels out the drift in the distance between consecutive points .notice that this choice is _ non - random _ , meaning that the reparametrization scheme does not depend on the realization of the stochastic process . instead, the values of are chosen _ before _ the actual simulation takes place .unfortunately , this strategy does not give the correct parametrization ( see also , where the same reasoning is applied to the half - plane case ) .for instance , as far as the spatial distribution of the -th point ( for a given ) along the chain is concerned , it gives exactly the same results as the parametrization by capacity .what happens is that the scaling form ( [ eq : naivescaling ] ) only ensures that the _ average _ distance between consecutive points be constant ( the average is over the realizations of the stochastic process ) . 
but fluctuations around this average still retain all their correlations , since we are reparametrizing in a naive , non - random fashion .one solution to this problem was proposed and studied by kennedy , and is particularly adapted to the case when one is interested in the position of just a single point along the curve .the idea is to grow the trace with its parametrization by capacity and stop the growth when a fixed `` length '' has been reached .a definition of length for a discretized fractal object can be introduced , which turns out to be naturally dependent on a fixed length , that is the scale the fractal length ( or _ variation _ ) is measured at .some care must be taken when using this method , because of the dependence of the results on the choice of the scale .one would like to send to zero , in order to measure the variation at a finer and finer mesh , but at the same time can not become too small as compared to the step size of the discretized trace , otherwise wilder and wilder rounding problems would completely spoil the computation .moreover , one wants to send the total length of the curve to infinity , which by scale invariance amounts to shrinking the unit disc down to a point , so as to approach the truly whole - plane geometry .one is therefore confronted with a tricky double limit , which increases the effort to be put into the analysis of the numerical data , and can blur the estimation of the errors . the method based on fractal variation is especially suited for producing a _ single _ point on the chain at a given value of the parametrization .the strategy we shall adopt here is different .we aim at producing a discrete trace where the step sizes are strictly constant throughout the chain. stopping the discrete growth after a fixed number of steps will then be automatically equivalent to choosing the stopping time when a fixed value of the fractal variation is reached .the advantages of this strategy are manifold : one obtains an essentially arbitrary number of points equally spaced in the natural parametrization at the same cost as producing only the last one , and no computationally - delicate double limit is present , so that no additional scaling analysis must be performed . moreover , no prior knowledge of the fractal dimension is needed .a close relative to the slit mapping in ( [ eq : slitmap ] ) first appeared in the literature about _ diffusion - limited aggregation _ , or dla .dla introduced by witten and sander is a kinetic model where finite - sized particles perform random walks ( one at a time ) from infinity until they stick irreversibly to a cluster , which grows from a seed placed at the origin .hastings and levitov took advantage of the conformal symmetry inherent to this model and proposed an algorithm which turns out to be similar to what we use for simulating schramm - loewner evolutions .the algorithm works as follows .the seed of the growth is the unit disc . 
at each time - step ,an angle is chosen with the uniform distribution in .a conformal map is applied , that creates a bump of fixed area centered at .then , another is chosen , the maps are composed , and so forth .note that as is the case for equation ( [ eq : nthmap ] ) if is the map that grows the cluster up to the -th deposed grain , then the map that grows the cluster up to the -th grain is obtained by _first _ applying the incremental map and _ then _ , which is to say that the incremental maps are composed in the opposite order than usual .this growth process satisfies an even stronger version of the domain markov property , which is one of the crucial characteristics of sle , since now the growth of the cluster at a specific time does not even depend on where it last grew , so not only does the future not depend on the past , but modulo a conformal transformation it does not depend on the present either .an important technical aspect of this algorithm is that one wants to grow bumps of approximately equal size .but peripheral bumps have undergone several conformal maps and have thus changed their shape and size to a great amount . in general , by the time the whole cluster has been built , the -th bump created ( by ) has been subject to the action of . to compensate for this rescaling ,one wants to create bumps of different sizes , depending on the whole history of maps they will be subject to in the remainder of the growth process . as a first approximation ,as long as the new ( -th ) bump is sufficiently small , it is natural to try and correct only for the jacobian factor of the previous composed map , calculated at the place where the new bump is to be created , because this is the rescaling factor that will affect the shape of the bump at first order in its size .the -th bump size should then be since this strategy seems to give satisfactory results , it is very natural to try and apply it to the numerical reparametrization of sle : rescaling the step sizes of the approximated sle trace by the dilatation factor given by the jacobian ( very similar ideas were also fruitfully exploited in , where multifractal spectra for laplacian walks are computed ) . the size of the -th slit grown is a function of the time - like parameter which controls the capacity of the incremental map at step : as can be seen by inverting ( [ eq : slitlength ] ) .one wants to rescale so that gets rescaled by a factor given by the jacobian in analogy with ( [ eq : dlafirstderivative ] ) .unfortunately , there happens to be a great obstruction to this program , due to the fact that sle satisfies `` only '' domain markov property , instead of the complete independence of dla steps that we discussed above .if we look at equation ( [ eq : dlafirstderivative ] ) we see that rescaling the step - sizes is possible because of the independence of ( the space - like variable ) from ( the time - like variable ) .this independence in dla stems from the fact that the distribution of the s is flat on and does not change , so that one can operatively choose every step _ before _ performing the composition of the corresponding maps . 
in sle , on the contrary , despite the fact that the steps satisfy the domain markov property , the increments are drawn with a bernoulli distribution from the set , which does depend on time ,since it depends explicitly on the time - like parameter ; on the other hand the jacobian needed to rescale is to be evaluated at .therefore , the problem is that we do not really know where to compute the jacobian , until we have actually computed it !this is ultimately related to the fact that sle is driven by a non - trivial stochastic process , so that and are intertwined .( dashed blue lines ) , as and take on their allowed values .the red circle is the unit disc ; the red arrows are examples of the possible slits grown . ]the reader can find a depiction of how the length of the slit and the angle are related in figure [ figure : hearts ] , which is a polar graph ( for ) for the position of the tip of the slit as a function of the angle as can be found by inverting ( [ eq : steplengthdelta ] ) : thus , the main problem with the foregoing approach is that and depend on one another , so that one does not know where to compute the derivative .one way to overcome this problem is the following .expand the derivative of ( the map that grows the hull at step ) around its zero , which occurs at , and evaluate it at the point , which is the point where the -th slit is going to be placed : this expression is accurate when is small .on the other hand , we also want to approximate the change in length of the slit by the value of the derivative at the base , silently assuming that it does not change much along the slit .this approximation is justified by the fact that , by ( [ eq : ldelta ] ) , is proportional to for small . by expanding the exponential , taking the modulus , and remembering that for every as can be seen by taking the derivative of ( [ eq : nthmap ] ) with given by ( [ eq : wholeplanemap ] ) and ( [ eq : slitmap ] ) one obtains the jacobian ( [ eq : jacobian ] ) at order we want to rescale the length of the -th slit by , so we rewrite the equation relating and ( [ eq : steplengthdelta ] ) by substituting where is the desired step length ( as in ( [ eq : constantsteplength ] ) , which represents our goal ) , and by making use of the brownian relation we obtain an equation which ( if solved ) gives the time - step producing both the correct rescaling , at first order in , and the right relation with the space - step : the actual sign of is to be chosen at random , according to the bernoulli nature of . unfortunately , equation ( [ eq : transcendental ] ) is transcendental , and can not be solved explicitly .a little thinking shows that for large one expects a small .in fact , ( [ eq : transcendental ] ) implies that the combination be divergent when . a crude approximationis then obtained by expanding the left hand side in powers of around , the right hand side in around and matching the two behaviors at first order .this is the best one can do , since higher orders would require solving algebraic equations of order greater than in . among the solutions we choosethe positive one : numerical solution to ( [ eq : transcendental ] ) shows that ( [ eq : deltaj ] ) is off by when and by when , for and . the approximation works ( see section [ section : results ] ) because the typical values of involved are large . 
of course , the foregoing method can be used only if one has an effective means of computing the main ingredient : .it turns out that there is such a way .straightforward calculations show ( the details are in the appendix ) that where is defined as equation ( [ eq : secondderivativeexpression ] ) is a closed formula for the hessian , in terms of which only depends on , see ( [ eq : phisecond ] ) and the function see ( [ eq : phiprime1 ] ) and ( [ eq : phiprime2 ] ) . notice that the points must already be computed by the routine that produces the -th point on the trace as was explained in section [ section : discretewholeplanesle ] so that computing adds very little computational load . at the -th step of the algorithm i.e. when producing the -th point along the chain one can compute the factor , which will be needed for producing the -th point , simply by multiplying the constant together with all factors obtained at each composition that is performed to compute .this amounts to performing an operation taking a time for each composition , which sums up to for the -th point that requires compositions and finally to for a complete -step chain .let us schematically sum up how our algorithm for building an -step chain works : 1. set the constant ( we shall always fix ) 2 .set and 3 .[ enum : outercyclestart ] compute as with a random sign 4 .setup a temporary variable calculated as in ( [ eq : phisecond ] ) 5 .set and 6 .[ enum : innercyclestart]cycle on for computing : 1 .apply -th incremental map and rotation 2 .if multiply by calculated as in ( [ eq : phiprime1 ] ) and ( [ eq : phiprime2 ] ) 3 .if decrease by and repeat step [ enum : innercyclestart ] 7 . set 8 .compute as in ( [ eq : deltaj ] ) with given by 9 .if increase by 1 and go back to step [ enum : outercyclestart ] an example of a chain obtained by this method is presented in figure [ figure : chains ] , where it is compared with a chain obtained by non - random rescaling as in ( [ eq : naivescaling ] ) .( in blue ) , compared to a trace obtained with simple global rescaling of the steps ( in red ) .the sequences of signs in ( that is , of left / right turns ) are the same . 
]given a chain both for a walk on the lattice and for a discrete sle trace one can define its _ gyration tensor _ , which encodes useful information about the shape of the walk .since the process we have defined grows a chain towards infinity , it can not be compared to a whole -step walk on the lattice , because the latter displays finite - chain corrections close to its tip .for this reason , we define the _ internal _ gyration tensor , by following the definition of gyration tensor that is used in polymer science , but by taking into account only the first monomers : where the superscripts and take values on the and coordinates of a lattice site or on the real and imaginary parts of a complex number .let and be the two ( real ) eigenvalues of the ( symmetric ) gyration tensor .these quantities are not universal , but some of their combinations are believed to be , such as the _ asphericity _ : which is a measure of how spherical the object is , being for perfectly spherical objects ( for which the two eigenvalues are equal ) and when one of the eigenvalues is ( as happens for objects lying on a line ) .the ( critical ) limit we are interested in is when the number of steps goes to infinity , because this is the limit where the saw displays its universal behavior and where the cut - off at length 1 introduced by the disc - shaped forbidden region becomes irrelevant for the discrete sle . on the saw side , moreover, we will want to let , so as to avoid corrections to scaling , but in such a way as to have , since we want to be looking at a portion deep inside the walk .we have simulated the self - avoiding walk on the square lattice using the pivot algorithm and we have measured the internal asphericity as a function of for and with .the results are in figure [ figure : asphericity ] ; the curve plotted is expected to be universal , and to our knowledge has never appeared in the literature . for each point ( about the size of the red crosses ) , but this is to be taken _cum grano salis _ , since different points on the plot are obtained from the same set of walks and are therefore not independent . ] in order to obtain the value at we perform a fit of the form on the first few values ( namely $ ] ) in the graph is changed , up to around the middle of the chain , where they start to drift . ] , obtaining } .\end{split}\ ] ] the same measure ( now with ) is performed on ensembles of discrete sle traces ( ) of lengths , ( and traces of length ) produced with the algorithm described in the previous sections .the results are in table [ table : asphericity ] ) using a numerical solution to ( [ eq : transcendental ] ) , for the sake of testing the approximation in ( [ eq : deltaj ] ) .the results agree with those in table [ table : asphericity ] [ , , for respectively ] . ] .the simulation for took approximately one hour on a 2ghz intel duo processor .ll & + 100 & 0.4957(17 ) + 200 & 0.5019(17 ) + 400 & 0.5071(17 ) + 1000 & 0.5097(18 ) + 2000 & 0.5115(17 ) + 5000 & 0.5122(18 ) + 10000 & 0.5124(37 ) + in general , for an -step chain one expects the following behavior for the expectation value of a global observable : where the leading behavior ( given by the exponent ) is corrected by analytical ( with integer exponents ) and confluent corrections .the exponents , , are those conventionally used for these quantities : they do not have anything to do with the s used in previous sections . 
] ( ) are universal .the asphericity is expected to be a constant in the large- limit , so that .the scaling form ( [ eq : correctionstoscaling ] ) then suggests a fit of the form , which yields the asphericity is in perfect agreement with the one obtained for the saw .our result for agrees with the theoretical value ( ) obtained by conformal - invariance methods for polymers in good solutions .the evaluation of for saws has been the subject of debate in the past decades .in fact , the rich structure of corrections in ( [ eq : correctionstoscaling ] ) makes it difficult to extract precise values from numerical data .there is now strong evidence for the absence of a leading term with exponent on the square lattice , the first non - null confluent contribution having exponent .notice that this does not configure a violation of universality , since the amplitudes of the corrections are model - dependent . by including the first analytical correction ( at next - to - leading order ) and fixing to its theoretical value, we find where the amplitude of the term is compatible with .we have studied a stochastic process in the complex plane , based on discrete stochastic - loewner evolution , which gives rise to chains with approximately constant steps .the purpose was to build an algorithm for exactly sampling self - avoiding paths , by correctly reproducing the parametrization induced by the scaling limit of lattice models , namely self - avoiding walks .the method is based on iterative composition of conformal maps , where each map acts by building a radial slit out of the unit circle , which will eventually become one of the steps of the discrete path .each step has to be rescaled according to the jacobian of the map that evolves it .this program encounters some technical hindrances essentially due to the fact that rescaling a step actually changes the jacobian .we showed that an alternative approach is possible , by keeping track of the second derivative of the map .it turns out that the hessian can be effectively computed in the framework of iterated conformal maps , since it can be expressed as a simple function of quantities already computed by the algorithm . by exploiting the powerful correspondence existing between sle and saws, the algorithm presented here produces completely independent samples of self - avoiding paths from the origin to infinity in the plane , whose parametrization is the desired one that corresponding to saws , and does so in an affordable way , with a complexity for -step chains .this allows us to study parametrization - dependent observables of the saw such as the internal asphericity and the leading correction - to - scaling exponent , whose determination is considered a challenging problem in the numerical study of polymers .the results we obtain are very accurate .the analysis has been carried out in the whole - plane radial geometry , but very little should be changed in order to adapt it to the half - plane chordal case , or to other restricted geometries of interest in polymer science .interesting questions remain open .it is still not clear whether there exists a way of reproducing finite - chain effects by the foregoing techniques. 
it would be useful for instance to produce the correct distribution of the _ end - point _ of a saw , which is fixed to infinity in the present study .on the other hand , a great advance would be to translate this method to other classes of critical polymers , such as the point where the collapsing transition poses even more difficult problems to monte carlo methods due to the attractive interactions and the consequently entangled shapes .the author wishes to thank sergio caracciolo and andrea pelissetto for suggestions and encouragement .in this appendix we give a formula for the modulus of the second derivative of , which is needed in a crucial step of the algorithm .the -th composed map ( [ eq : nthmap ] ) can be written as in terms of the -th map , for .we recall that and are the -th rotation and slit mapping respectively and that implicitly depends on the parameter , while depends on .the second derivative of ( [ eq : laststepcomposition ] ) reads the latter expression simplifies when evaluating its modulus at , because a hallmark of the singularity of the loewner map in 1 and .one obtains computing the first factor is just a matter of differentiating ( [ eq : wholeplanemap ] ) two times with given by ( [ eq : slitmap ] ) , which yields ^ 2}\ ] ] with }{2z^2\sqrt{(z+1)^2 - 4{\mathrm e}^{-\delta_n}z } } , \ ] ] and for the second factor , instead , a closed recursion can be found by noting that , by differentiating ( [ eq : laststepcomposition ] ) , one has so that the seed of the recursion is given by the first slit grown so that finally from ( [ eq : modulusofsecondderivative ] ) , ( [ eq : recursion ] ) and the derivative of ( [ eq : seedofrecursion ] ) one obtains or , in a more compact form , with defined as in ( [ eq : gamma ] ) .99 des cloizeaux , j. , jannink , g. : polymers in solution : their modelling and structure .oxford university press , new york ( 1990 ) schfer , l. : excluded volume effects in polymer solutions .springer - verlag , berlin - new york ( 1999 ) madras , n. , slade , g. : the self - avoiding walk .birkhuser , boston ( 1993 ) guttmann , a.j . ,conway , a.r .: square lattice self - avoiding walks and polygons .. comb . _ * 5 * 31945 ( 2001 ) sokal , a.d .: monte carlo methods for the self - avoiding walk .b _ * 47 * 1729 ( 1996 ) janse van rensburg , e.j . :monte carlo methods for the self - avoiding walk ._ j. phys . a : math .theor . _ * 42 * 323001 lawler , g.f . , schramm , o. , werner , w. : on the scaling limit of planar self - avoiding walks .pure math ._ * 72 * vol 2 33964 ( 2004 ) kennedy , t. : the length of an sle monte carlo studies ._ j. stat .phys . _ * 128 * 126377 ( 2007 ) gherardi , m. : whole - plane self - avoiding walks and radial schramm - loewner evolution : a numerical study . _ j. stat. phys . _ * 136 * 86474 ( 2009 ) lawler , g.f .: dimension and natural parametrization for sle curves .[ math.pr ] lawler , g.f . ,sheffield , s. : the natural parametrization for the schramm - loewner evolution .[ math.pr ] lawler , g.f .: conformally invariant processes in the plane ._ mathematical surveys and monographs _ * 114 * american mathematical society ( 2005 ) werner , w. : random planar curves and schramm - loewner evolutions ( lecture notes from the 2002 saint - flour summer school ). _ l. n. math . _ * 1840 * 10795 ( 2004 ) cardy , j. : sle for theoretical physicists .phys . _ * 318 * 81 - 118 ( 2005 ) kager , w. , nienhuis , b. : a guide to stochastic loewner evolution and its applications . _ j. stat .phys . 
_ * 115 * 1149229 ( 2004 ) bauer , m. , bernard , d. : 2d growth processes : sle and loewner chains ._ * 432 * 115 - 221 ( 2006 ) bauer , r.o . : discrete lwner evolution . _ ann .toulouse vi _ * 12 * 43351 ( 2003 ) kennedy , t. : monte carlo comparisons of the self - avoiding walk and sle as parametrized curves. preprint .arxiv : math/0612609v2 [ math.pr ] witten , t.a . , sander , l.m .: diffusion - limited aggregation : a kinetic critical phenomenon ._ * 47 * 14003 ( 1981 ) hastings , m.b . ,levitov , l.s .: laplacian growth as one - dimensional turbulence ._ physica d _ * 116 * 24452 ( 1998 ) schramm , o. : scaling limits of loop - erased random walks and uniform spanning trees ._ israel j. math ._ * 118 * 22188 ( 2000 ) hastings , m.b . :exact multifractal spectra for arbitrary laplacian random walks .lett . _ * 88 * 055506 ( 2002 ) madras , n. , sokal , a.d .: the pivot algorithm : a highly efficient monte carlo method for the self - avoiding walk . _ j. stat ._ * 50 * 10986 ( 1988 ) kennedy , t. : a faster implementation of the pivot algorithm for self - avoiding walks ._ j. stat .phys . _ * 106 * 40729 ( 2002 ) saleur , h. : conformal invariance for polymers and percolation ._ j. phys . a : math ._ * 20 * ( 1987 ) caracciolo , s. , guttmann , a.j . , jensen , i. , pelissetto , a. , rogers , a.n ., sokal , a.d . : correction - to - scaling exponents for two - dimensional self - avoiding walks ._ j. stat .phys . _ * 120 * ( 2005 )
|
we present an algorithm , based on the iteration of conformal maps , that produces independent samples of self - avoiding paths in the plane . it is a discrete process approximating radial schramm - loewner evolution growing to infinity . we focus on the problem of reproducing the parametrization corresponding to that of lattice models , namely self - avoiding walks on the lattice , and we propose a strategy that gives rise to discrete paths whose consecutive points lie an approximately constant distance apart . this new method allows us to tackle two non - trivial features of self - avoiding walks that critically depend on the parametrization : the asphericity of a portion of the chain and the correction - to - scaling exponent .
|
this paper describes the development and testing of a prototype drift chamber whose purpose is to evaluate the feasibility of a `` cluster - counting '' technique for implementation in a high luminosity experiment .cluster counting is expected to improve particle identification ( pid ) by reducing the effect of fluctuations in drift chamber signals .these are due to gas amplification and the fluctuation in the number of primary electrons per ionization site .there may also be improvements in tracking resolution , but this is left for a later study .the requirement of fast electronics and larger data sizes may make the technique impractical in terms of capital costs , available space near the detector , and computing power. to date the technique has not been deployed in an operating experiment .this work demonstrates that a cluster - counting drift chamber is a feasible option for an experiment such as super-.1em__b__ . super-.1em__b__.2emwascancelled after the experiments described in this paper , but the results are applicable to any drift chamber that is used for particle identification .the design of our prototype chambers was strongly influenced by the demands of super-.1em__b__.2em , which are described in the technical design report .drift chambers are general - purpose detectors that can track and identify charged particles .they consist of a large volume of gas with instrumented wires held at different voltages . when charged particles move through the chamber they ionize the gas particles .the electrons from these primary ionizations drift towards the wires held at high positive voltage , while the ions drift towards the grounded wires .the sense wires are very thin ( {\mu m} ] ) charged particle from primary ionizations depends on its speed , as given by the bethe formula and various corrections .the speed measurement is combined with the independent momentum measurement from tracking , giving the particle s mass , which is a unique identifier . to measure speed , we measure or estimate a quantity proportional to the number of primary ionizations .a traditional drift chamber accomplishes this by measuring the total ionization per unit length of the track , which is proportional to the integral of the electronic signal on the sense wires belonging to a track .the theoretical probability distribution function for the total ionization is a landau distribution , which has an infinite mean and standard deviation .the consequence is that if one takes the average of a number of samples ( e.g. 40 measurements of deposited charge in a track ) , the resulting distribution is non - gaussian and is dependent on the number of samples taken . 
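as a minimal numerical illustration of this instability ( a toy sketch , not the detector simulation ) , one can draw samples from a pareto law with shape parameter below one , which , like the landau distribution , has an infinite mean ; the shape parameter and sample sizes below are assumptions chosen only for illustration :
....
import numpy as np

# Toy illustration of why averaging Landau-like samples is unstable.
# A Pareto law with shape a < 1 has an infinite mean and serves here as
# a crude stand-in for the heavy ionization tail; the shape parameter
# and sample sizes are illustrative assumptions, not fitted values.
rng = np.random.default_rng(1)

for n_cells in (40, 400, 4000, 40000):
    charges = 1.0 + rng.pareto(0.9, size=n_cells)
    # The sample mean keeps drifting upward as more cells are added,
    # so it is a poor estimator of the 'typical' energy deposit.
    print(f"cells = {n_cells:6d}   sample mean = {charges.mean():9.2f}")
....
the printed mean keeps growing as more cells are included , which is why a plain average over the cells of a track is avoided .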
instead of the mean of the distribution, one can use the most probable value for the total ionization .this is accessed by a truncated mean technique .our truncated mean procedure is described in sec .[ bootstrapping ] .the conventional technique described above is sensitive to gas gain fluctuations as well as the statistical fluctuations in the number of primary electrons produced in each ionization event .moreover , the truncated mean procedure that is typically used discards a substantial fraction of the available information .none of these disadvantages exist if the number of primary ionizations can be measured more directly .the cluster - counting technique involves resolving the cluster of avalanching electrons from each primary ionization event .this is done by digitizing the signal from the sense wire in each cell and applying a suitable algorithm .the rise time of the signal from a cluster is approximately {ns} ] ) single - cell drift chambers , called chamber and chamber ( fig .[ m11_photo ] ) .the only difference between the two chambers is the diameter of the sense wires : {\mu m} ] for chamber .more details about the wires are given below .the wire layout creates a square cell wide in a {cm} ] ) .figure [ isochrones ] shows a cell diagram including the dimensions and wire locations .the aluminium casing of the chambers has five large windows on two sides of the cell to allow particles to enter and exit unimpeded .the windows are made of thin ( {\mu m} ] .runs were taken with and without termination , to see the effect of reflected signals on pid performance .a circuit diagram showing our termination is in fig .[ termination ] .( -0.5,0 ) node[]hv ( -0.25,0 ) ( 0,0 ) ; ( 0,0 ) to[r=10 < > ] ( 2,0 ) ; ( 2,0 ) to[r=1.5 < > ] ( 4,0 ) ; ( 4,0 ) ( 4.25,0 ) ; ( 5.25,0 ) node[]sense wire ; ( 2,0 ) to[c=1000 < > ] ( 2,-2 ) ; ( 2,-2 ) node[ground ] ; ( 2,0 ) ( 2,1 ) to[r=390 < > ] ( 4,1 ) ( 4,0 ) ; ( -0.5,0 ) node[]hv ( -0.25,0 ) ( 0,0 ) ; ( 0,0 ) to[r=10 < > ] ( 2,0 ); ( 2,0 ) to[r=1.5 < > ] ( 4,0 ) ; ( 4,0 ) ( 4.25,0 ) ; ( 5.25,0 ) node[]sense wire ; ( 2,0 ) to[c=1000 < > ] ( 2,-2 ) ; ( 2,-2 ) node[ground ] ; runs were taken with chambers and strung with {\mu m} ] gold - plated tungsten sense wires , respectively , and gold - plated aluminium field wires .for some later runs , chamber was re - strung with a {\mu m} ] for the whole unit ) and a bandwidth of {ghz} ] input and output impedance , and a fixed gain of .the simplest configuration that we investigated was with two ad8354s in cascade .this provides very good bandwidth performance , but the input impedance of {\omega} ] ) and the signal to noise ratio is not optimal .so , an emitter follower stage was added at the input , using a low noise rf transistor ( bfg425 ) .this was configured either with {\omega} ] , as a compromise between impedance matching and tolerance to stray capacitance .we also tried a configuration with an additional low gain ( ) inverting stage ( with a bfg425 transistor ) , having {\omega} ] configuration gave the best overall results .a schematic of the amplifier setup is shown in fig .[ preamschema ] . in our final analysis , only the {\omega} ] amplifiers are considered .the data runs using the {\omega} ] . 
the resulting voltage for chamber a ( {\mu m} ] amplifiers is {v} ] .for some of the runs we varied the type of signal cable used to connect the output of the amplifiers to the data acquisition system .we used two different types of sub - miniature rg-59/u cables ( models 1855a and 179dt from belden ) and miniature coax ( model 1282 from belden ) , all with {\omega} ] , which is the distance between the amplifiers and digitizers for super-.1em__b__. from the signal - propagation perspective , the 1855a is a better cable than the 179dt , having less signal attenuation ( {db/100m} ] at {ghz} ] versus {mm} ] pin spacing .only two pins are used in the connector to connect the ground and signal parts of an additional {cm} ] .we block residual protons from upstream using a slab of polypropylene at the mouth of the beam pipe ( {mm} ] ) .we can determine the beam populations using the time - of - flight system described in sec .[ tofsection ] .the prototypes were mounted on a rotating and moveable table , which allowed us to take runs at different dip angles and positions along the length of the sense wires .a schematic of the beam test setup is in fig .[ beam_test_schematic ] and a photo of the test hall is in fig .[ m11_photo ] .most of the data were collected at {mev / c} ] .this is confirmed by our simulations at both momenta , described in sec .[ simulations ] .high - efficiency separation of pions and kaons at {gev / c} ] apart , one upstream of the prototypes and one downstream ( fig . [ beam_test_schematic ] ) .the counters are {mm} ] pores .each of the 64 channels in the mcps have an active region of {mm} ] per mcp . for a {mev / c} ] ranges used to identify particles in our track composition process described in sec .[ bootstrapping ] .a sample trace of the actual tof signal is shown in fig .[ sampletraces ] , where the first four pulses are from the mcps .we fit the tof distribution with the sum of three gaussians and count how many particles are within {\sigma} ] thick was placed between the prototypes and the downstream counter ( fig .[ beam_test_schematic ] ) , instrumented with photomultiplier tubes .the coincidence of the three ( upstream , downstream , strip ) was required for a physical trigger .this additional requirement removed the extraneous tof population and many of the events with no drift chamber signals .part of the trigger signal can be seen in fig .[ sampletraces ] in the upper trace .the third scintillator was not digitized and thus is not visible in the figure .the coincidence rate is {hz} ] spacing , for a trace duration of {\mu s} ] .we used the midas data acquisition system to automatically record temperature and atmospheric pressure as well as the current in a small monitoring chamber .the monitoring chamber was connected in series with the primary chambers on the gas line , and was exposed to an source .the monitoring chamber wire voltages were held fixed , allowing us to monitor the gas and environmental conditions by tracking changes in the gas gain .we used a gaseous ionization detector simulation package called garfield to simulate tracks through our prototypes .we did not simulate the electronics chain and the data acquisition system , but we are able to get predicted charge depositions and cluster counts for our specific gas mixture and wire configuration . the charge deposition is not reported directly , but is proportional to the energy lost by charged particles passing through the gas . 
it is plotted in fig .[ garfield_dedx ] for muons , pions , and kaons .the momentum scale is chosen to illustrate the fact that the difference in energy loss between pions and muons at {mev / c} ] ( sec .[ beam ] ) .the number of primary ionizations is reported directly by the simulation software and can be treated as a `` true '' number of clusters .it does not depend on the choice of electronics , algorithms , and it does not count -rays ( sec .[ cc_pid ] ) .the distribution of primary ionizations for muons , pions and kaons is shown in fig .[ garfield_clusters ] and also shows the similarity between muon - pion separation at our beam momentum and pion - kaon separation at higher momenta .it is also important to point out that the absolute number of clusters for muons and pions at {mev / c} ] , not just the difference .the absolute value is important because it is related to our ability to actually resolve the clusters .( image ) at ( 0,0 ) ;the data were taken during august and september 2012 . approximately 200 runs of 30000 events were acquired .a run is a contiguous data - collection period during which no setup parameters are changed . on average , {\%} ] of the physical triggers did not leave signals in the prototypes .various parameters were changed from run to run .these were : the sense wire voltages , amplifiers , signal cable types , beam momentum , angle of incidence of the beam with the chamber , beam position along the sense wire length and presence of a proper termination resistor on the sense wire . in the end , many runs turned out to be recorded using unsuccessful amplifier prototypes and could not be used for a detailed analysis .this analysis uses 20 runs , for a total of 633050 recorded events .the analysis of the test - beam data is performed in two steps , both of which are done offline ( after the data for that run has been fully collected ) .the first step involves analyzing the signals ( voltage as a function of time ) from the three oscilloscope channels .the first channel is connected to the time - of - flight ( tof ) system , with voltage pulses corresponding to a particle crossing the scintillators before and after the drift chambers .the second and third oscilloscope channels are connected to the amplifiers on the sense wires of the two drift chambers .the second step of analysis involves constructing multi - cell `` tracks '' from the single - cell events using a composition process .single - cell events are taken from the same run , same chamber , and having a tof consistent with the same particle type .forty of these are used to build up a track as if it were traversing a full super-.1em__b__-size drift chamber ( sec .[ bootstrapping ] ) .this section describes in detail the first stage of analysis in which we deal with single - cell events .the time - of - flight is measured , the signal is adjusted for baseline drift and basic quality controls are imposed . 
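a minimal python sketch of this first processing stage , with assumed function names and an assumed threshold of a few times the baseline rms ( the actual cut values used in the analysis are not quoted here ) :
....
import numpy as np

def baseline_and_rms(empty_trace):
    """Baseline voltage and its rms spread, estimated from the signal
    recorded on the previous asynchronous (empty) trigger."""
    return empty_trace.mean(), empty_trace.std()

def find_pulses(trace, baseline, threshold):
    """Indices of the leading edges of negative-going pulses that dip more
    than `threshold` volts below `baseline`.  A new pulse is counted only
    after the trace has recovered above the threshold again."""
    below = (baseline - trace) > threshold
    return np.flatnonzero(below[1:] & ~below[:-1]) + 1

# A time-of-flight trace would then be accepted when exactly four pulses
# are found (the 5*rms threshold below is an assumed value):
#   base, rms = baseline_and_rms(previous_empty_trace)
#   good_tof  = len(find_pulses(tof_trace, base, 5.0 * rms)) == 4
....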
in this stagewe also perform the charge integration and use cluster - counting algorithms to count clusters on the drift chamber signals .the time - of - flight is determined by applying a simple threshold - over - baseline algorithm to the oscilloscope trace from the channel connected to our scintillator mcps and pmts .a valid tof signal consists of four identified pulses , while an asynchronous trigger has zero pulses .events with one , two , or three tof pulses are rejected , and represent the small fraction of events from asynchronous triggers with a pulse in one of the tof counters .the baseline voltage for each drift chamber is simply the average voltage of the entire signal from the previous asynchronous trigger .the rms deviation from this baseline is also measured .the mean of these rms deviations is {mv} ] , as shown in fig .[ integralduration ] .the optimization of the integration time is described in sec .[ chargeintegration ] . from the integrated charge we subtract a pedestal calculated from the previous asynchronous trigger .this pedestal is a charge integration with the same integration time , but a fixed starting time .the result is a baseline - subtracted charge , which should have a smaller systematic error than the raw charge integral .the distribution of integrated charges for physical triggers and asynchronous triggers is shown in fig .[ cellcharge ] .the physical triggers are shown separately for each species in fig .[ cellchargeperspecies ] .cluster - counting algorithms can vary in complexity , efficiency , and in their rate of reporting fake clusters . herewe briefly describe the various algorithms , but precise definitions can be found in appendix [ cc_appendix ] .the algorithms involve two forms of smoothing of the oscilloscope traces ( fig . [fig : smoothings ] ) .the first is a `` boxcar smoothing '' where each sample is replaced with the average of itself and the previous samples .the second is a true averaging procedure , where the number of points in a trace is reduced and each point is the average of points .all of the algorithms involve some kind of transformation of the smoothed signal , and a threshold - crossing criterion .the transformed signals for the various algorithms are shown in fig .[ cc_cut_quantities ] .one of the most basic cluster - counting algorithms is the `` threshold above average '' .it subtracts the non - smoothed signal at time from the boxcar - smoothed signal at time , then applies a threshold .a more general algorithm ( of which the previous is a special case ) is the `` smooth and delay '' algorithm .it involves smoothing two copies of the signal by different amounts , delaying one of the copies by a certain number of frames , then taking the difference and applying a threshold .this algorithm has four parameters , and is thus more difficult to optimize .the two algorithms above essentially implement a first - derivative method .we also implemented a second - derivative method .this one uses the true averaging procedure rather than the `` boxcar smoothing '' .the first derivative is first calculated by taking the difference between consecutive smoothed samples .the second derivative is then calculated by taking the difference between consecutive first derivative values .each time , we divide by the time interval represented by a sample , to keep the units consistent . the number of clusters counted using the second derivative is shown for each particle species in fig . 
[ cellclustersperspecies ] .all of the threshold algorithms in principle trigger on the leading edge of cluster signals .however it is noticeable that real cluster pulses have a very sharp leading edge ( approximately {ns} ] ) .fake clusters are more symmetric , returning to the baseline voltage faster than the signal from a real cluster . thus an algorithm was devised that takes cluster candidates from the above algorithms , but requires the pulse to last a minimum duration in order to be confirmed .pulses that return to baseline too quickly are discarded as fake clusters .this `` timeout booster '' allows the use of smaller thresholds , which while increasing the efficiency of finding real clusters also admit more fakes .the timeout criterion removes most of the fakes but keeps the real clusters . as mentioned before ,each of the cluster - counting algorithms can return not only the number of clusters , but the actual time at which each cluster was found .we investigated the use of this information , in the form of an average time separation between clusters in each cell .the prototypes have only a single cell .the traditional method of identifying particles using the truncated mean requires many cells forming a track .thus we construct tracks from the single - cell events . to compose a track for a given species of particle , we select ( with replacement ) random single - cell events that have been identified with the time - of - flight information .we positively identify particles with tof values within 3 standard deviations of the central values of the three gaussian peaks corresponding to the particle species . for a typical run with e.g. 3500 single muon events, the number of possible muon tracks is astronomical ( ) , and the likelihood of a given track being composed of multiple copies of the same single - cell event is low ( ) .we also form empty tracks by combining the signals from asynchronous events .the information from each event is combined to form the track information .the track information is the particle species , total number of clusters found per of track , and the truncated mean of the charge integrals from each cell .the truncated mean is performed by sorting the list of charge integrals and taking {\%} ] was roughly optimized to give better separation , for comparison {\%} ] truncated mean was thus done by rejecting the largest 12 integrated charge values from the cells . in the case of tracks formed from asynchronous events ,the list is not sorted , since these values are already gaussian , but still the same fraction of values is discarded .the distribution of truncated mean charge and clusters for the composed tracks is shown in figs .[ trackchargeperspecies ] and [ trackclustersperspecies ] , respectively .we also form the track - wise average time separation between clusters by doing a weighed average of the cell - wise average cluster separation for the events in the track .the weights are the number of clusters in the cells .it is worth noting that the relative separations of the muon and pion peaks shown in figs .[ trackchargeperspecies ] and [ trackclustersperspecies ] are very different . 
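the composition and truncation steps can be summarised in a short sketch ; the dictionary keys used to hold the per - cell charge and cluster count are assumptions for illustration , not the actual data format :
....
import random

N_CELLS = 40          # cells per composed track (as in the text)
N_REJECT = 12         # largest charges discarded, i.e. a 70% truncated mean

def compose_track(single_cell_events, rng=random):
    """Draw, with replacement, N_CELLS single-cell events of one species
    (already selected by time of flight) to emulate a full-length track."""
    return [rng.choice(single_cell_events) for _ in range(N_CELLS)]

def truncated_mean_charge(track):
    """Sort the per-cell integrated charges and average all but the
    N_REJECT largest ones."""
    charges = sorted(event["charge"] for event in track)
    kept = charges[: N_CELLS - N_REJECT]
    return sum(kept) / len(kept)

def clusters_per_track(track):
    """Total cluster count of the composed track (divide by the track
    length to obtain clusters per unit length)."""
    return sum(event["clusters"] for event in track)
....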
for the truncated mean of the integrated charges ,the relative separation between the peaks ( difference in the location of the peaks , divided by the average of the two ) is {\%} ] .navely this should mean that the cluster counting technique is less effective .however because the widths of these peaks is also very different , the two techniques turn out to be of comparable power ( fig .[ r_and_dedx ] ) . in order to combine the information from the truncated mean and the cluster count , we form likelihoods based on fits to the two quantities .these quantities are reasonably gaussian ( for non - empty tracks ) , so we fit them with gaussian distributions , for particle species and measured quantity . for a given track ,the likelihood of the track coming from a particle is found by evaluating the product of the fitted distribution functions for both at the measured values .thus if the measured truncated mean charge for a track is and the clusters per of track are , the combined likelihood is this combined likelihood ignores any correlation between the two quantities .the correlation is indeed non - zero but is somewhat weak ( ) .possibly combined likelihood models which make use of the correlation would be more effective , but we did not investigate this . as mentioned in sec .[ beam ] , the ability to identify muons and pions at {mev / c} ] pion selection efficiency , or vice - versa .these figures of merit are easy to interpret physically and correspond to how detector performance is typically quantified in past experiments .an alternative figure of merit turns out to better differentiate between algorithm parameter choices , but has a much less intuitive physical meaning .it is the maximum excursion on the muon rejection and pion efficiency plot from the origin of the graph .the curves on the graph approach and in the limits of cut values of 0 and 1 respectively , but the curves can lie above that inscribed by a circle of unit radius . the length of the longest straight line joining and the efficiency curveis taken as the figure of merit . in certain casesthe performance is bad enough that the lines lie below that inscribed by a circle , in this case the alternative figure of merit is not meaningful , as it is identically 1 .all three figures of merit can be shown to be equivalent , in the sense that local maxima and minima lie in the same regions of parameter space .the maximum - excursion - from - origin figure gives better separation for those runs where it is meaningful ( the majority ) .it is used for the optimization of algorithms , but the results are presented using the more intuitive figure of merit of pion selection efficiency at {\%} ] for the rest of the study .the various cluster - counting algorithms have parameters that must be tuned empirically . by iterating this procedure many times using the same run ,a `` map '' of the figure of merit can be created in the algorithm parameter space , the maxima of which are optimal values for the algorithms ( fig . [ cc3_optimization ] ) . while the figure of merit includes the pid performance from and cluster counting , the contribution is essentially constant even with the randomness introduced by the track composition process .the optimal parameters vary from algorithm to algorithm and depend on the run used to optimize the parameters . 
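a sketch of how the two measurements could be combined into such a likelihood and turned into the pion - efficiency - at - fixed - muon - rejection figure of merit ; the track and fit containers are assumed for illustration and are not the collaboration s actual code :
....
import math

def gauss(x, mu, sigma):
    """Normalised Gaussian pdf."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def combined_likelihood(charge, clusters, fit):
    """Product of the fitted Gaussians for the two observables, ignoring
    their (weak) correlation.  `fit` holds (mean, sigma) pairs, e.g.
    fit = {"charge": (mu_q, sig_q), "clusters": (mu_c, sig_c)}."""
    (mq, sq), (mc, sc) = fit["charge"], fit["clusters"]
    return gauss(charge, mq, sq) * gauss(clusters, mc, sc)

def pion_efficiency_at_rejection(pion_tracks, muon_tracks, fits, rejection):
    """Scan a cut on the likelihood ratio L_pi / (L_pi + L_mu) and return
    the pion efficiency at the requested muon rejection fraction."""
    def ratio(track):
        l_pi = combined_likelihood(track["charge"], track["clusters"], fits["pi"])
        l_mu = combined_likelihood(track["charge"], track["clusters"], fits["mu"])
        return l_pi / (l_pi + l_mu + 1e-300)

    pi_r = sorted(ratio(t) for t in pion_tracks)
    mu_r = sorted(ratio(t) for t in muon_tracks)
    # cut value below which the requested fraction of muons is rejected
    cut = mu_r[min(int(rejection * len(mu_r)), len(mu_r) - 1)]
    return sum(r > cut for r in pi_r) / len(pi_r)
....
the small constant in the denominator only guards against an exactly vanishing likelihood ; the correlation between the two observables is ignored , as in the text .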
in an operational experiment , only one set of parameters can be chosen , so some compromise will be necessary .nevertheless , to compare the algorithms themselves , we may compare the performance of each algorithm when optimized on the same data run .the chosen run has the following parameters : degree dip angle , window {mm} ] sense wire .a {\omega} ] .a total of 30784 triggers were recorded of which 7720 are asynchronous , and 680 , 3649 , and 13579 are positively identified as positrons , muons , and pions respectively .the remainder have tof values more than {\sigma} ] , which indicates that extremely high sampling rate and bandwidth are not necessary to improve pid with cluster counting .the smoothing times correspond to nyquist frequencies of {mhz} ] & 6.5 & & 0.64 + & {mv / ns^2} ] muon rejection . here and in later plots , it is difficult to give a good estimate of the systematic uncertainty as many factors were not taken into account .for example the temperature of the gas in the chamber plays no role in our calculations , though the temperature did change during the data taking period .the track composition process involves drawing random numbers , so a contribution to the uncertainty from this can be estimated by composing multiple sets of tracks and seeing the distribution of results .running the code 100 times yields an rms deviation from the mean of .the mean is what is reported in table [ cc_table ] . in the table, only algorithm uses the `` timeout booster '' technique .we also tried applying the technique to the other algorithms , but it was noticed that if the algorithm already has reasonable performance , the improvement from the timeout is negligible . indeed the optimal timeout duration for the `` smooth and delay '' algorithm is zero , yielding the same performance as the bare algorithm .overall the best algorithm is the two - pass second derivative algorithm , but it is only marginally better than the other algorithms . the difference is less than the typical variation due to the track composition process .it is fortuitous that even the simple algorithms have good performance , as they are reasonable to implement using a field - programmable gate array ( fpga ) or even analog hardware . in some sections that follow ,the pid performance with optimized cluster counting refers to the use of a cluster - counting algorithm where the parameters were chosen to give the best figure of merit for that run .the optimal parameters vary from run to run , so in each case , we also run the algorithm on a given run using parameters that were optimal for a set of other runs .the other runs each vary in only a single parameter : the window , the hv settings , and the momentum .the average performance using these non - optimal parameters is labelled `` sub - optimal cluster counting '' in later figures . in each cell, we take the average of the time intervals between consecutive clusters . 
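a minimal sketch of the cell - wise average separation just described , together with the cluster - weighted track - level average introduced earlier ( function names are ours ) :
....
def cell_mean_separation(cluster_times):
    """Average time interval between consecutive clusters in one cell;
    returns None when fewer than two clusters were found."""
    if len(cluster_times) < 2:
        return None
    times = sorted(cluster_times)
    gaps = [b - a for a, b in zip(times[:-1], times[1:])]
    return sum(gaps) / len(gaps)

def track_mean_separation(cells):
    """Track-level average separation: a weighted mean of the cell-wise
    averages, weighted by the number of clusters in each cell."""
    num = den = 0.0
    for cluster_times in cells:
        mean_sep = cell_mean_separation(cluster_times)
        if mean_sep is not None:
            num += len(cluster_times) * mean_sep
            den += len(cluster_times)
    return num / den if den else None
....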
in the track composition process, we form a weighted average of the cell - wise averages , with the weights given by the number of clusters in each track .the resulting quantity gives a reasonable separation for each particle type ( fig .[ tracksepperspecies ] ) .unfortunately the performance is not as good as either the traditional charge integration or cluster counting ( fig .[ r_and_dedx ] ) .in addition , if we form a tripartite combined likelihood , the improvement relative to the bipartite charge integration and cluster counting combination is negligible .given the increased computational complexity of calculating the average separations , it is unlikely that the timing information will be useful for pid purposes in a real particle physics experiment .the gas gain of the prototypes depends on the choice of sense wire voltage and on the gas .we tested only one gas , a mixture of helium and isobutane in a ratio of .a nominal voltage was selected as described in sec .[ wire_voltages ] .the actual gas gain for our gas mix and voltages is on the order of , measured offline using an source .the procedure aims to obtain oscilloscope signals with roughly the same amplitude with all the amplifiers .the dependence of gas gain on sense wire voltage is approximately exponential . in our casea {v} ] ) . when choosing a gas gain for an experiment the most important features are more often the tracking performance , ageing issues , and operational issues .this is more likely to influence the choice of specific gain , regardless of the pid performance .however , if pid performance is also highly valued , lower gains should be explored . as shown in fig .[ momentumvariation ] , the difference of ionization between pions and muons is greater at lower momenta .this is in agreement with theoretical expectations and simulations .as expected , the improvement from adding cluster counting is most noticeable at the momentum where the overall performance is worst , making the detector response more uniform .the prototypes have five windows at five thin aluminium positions along their {m} ] .the centres of the five windows are and {mm} ] from the amplifiers , but a sequence of runs was taken to determine the effect of the signal propagating along the sense wire .the sense wire voltages were chosen as described in sec .[ wire_voltages ] at the middle position , but left unaltered for the other windows in the sequence .thus the oscilloscope and amplifier saturations may change as a function of beam position .the tungsten wire is very thin and has a non - negligible dc resistance ( {\omega} ] diameter wire ) , so it was expected that the performance would be better at the windows closer to the amplifiers .indeed the runs taken at the two windows closest to the amplifiers have slightly higher efficiencies ( fig .[ windowvariation ] ) than at the two furthest windows , but the difference is not large .the variation for this small data set is also not monotonic , the second - closest window to the amplifiers shows inexplicably better performance than the closest . as mentioned in sect .[ cables ] , we tested two different cable types , and the effect of adding an additional header connector to simulate needing to feed through a bulkhead . 
unlike the previous sections , we did not compare the performance of the cluster - counting algorithms using parameters optimized on the single run with non - optimal parameters .thus the individual performance numbers may be optimistic , but the comparison between cable types can still be done . in fig .[ cablevariation ] we show the result from several runs using an amplifier with {\omega} ] , while the high gain columns are at {v} ] , the low gain columns have about {\%} ] .it is tempting to see that the 179n columns are the highest between the two sets , but the difference is not nearly as dramatic as the variation due to gas gain or the additional contribution of cluster counting itself . as described in sec .[ amplifiers ] , we tested several types of amplifiers , mostly distinguished by their input impedance and gain .we remind the reader that the sense wire voltages used are different for the various amplifiers , and were chosen to get approximately constant signal amplitude as described in sec .[ gasgainresults ] . in fig .[ amplifiervariation ] , the results from three different amplifiers at two different positions along the sense wire are shown .the input impedance of each amplifier is indicated , and the amplifiers with the same labels are the same for the two different positions .the {\omega} ] amplifiers give the best results .this indicates the importance of matching the amplifier input impedance with the impedance and termination of the drift chamber itself .unfortunately the indication of the best amplifier is not very strong , as a proper study of the optimal gas gain for each amplifier was not done in this experiment .the variation between the amplifiers in figure [ amplifiervariation ] is of the same order as the variation with gas gain for a single amplifier shown in figure [ gainvariation ] .it is possible that the variations seen here are mostly due to gain effects rather than the impedance and implementation details of the amplifiers .the studies undertaken attempt to explore a multidimensional parameter space , so the results are difficult to summarize concisely . herewe restate the lessons learned from each study described above .the various cluster counting algorithms all perform roughly equivalently ( sec .[ cc_results ] ) .their parameters must be optimized for good performance , but the regions of good performance in parameter - space are quite large .even sub - optimal parameters only give slightly worse performance .more advanced techniques ( such as the timeout booster ) can compensate for a less - optimized algorithm , but are unnecessary when the algorithm is optimized properly .optimal smoothing for the cluster - counting algorithms is on the order of a few nanoseconds , indicating that a higher sampling rate is unnecessary .the corresponding nyquist frequency is on the order of hundreds of .this means that the successful implementation of cluster counting does not depend on getting overly expensive or customized hardware .indeed the best algorithm studied simply applies a threshold to the second - derivative of the signal , a process that can be done with analog electronics or in an fpga .cluster timing gives results that are slightly poorer than cluster counting used alone ( sec .[ timing_results ] ) . when combined with charge integration and cluster counting however ,the improvement is minor compared to charge integration and cluster counting without the cluster timing . 
given the additional complexity of storing and calculating average cluster timings , this technique is unlikely to be worth exploring further .pid performance depends strongly on having the proper wire voltages and thus gas gains ( sec .[ gasgainresults ] ) . in some configurations ,higher gain is not necessarily better , but this is dependent on the choice of amplifier .thus for a given amplifier and equipment configuration , the optimal gas gain must be carefully determined .there is not much variation in pid performance as a function of the beam position along the sense wire length ( sec .[ window_results ] ) .since the signal is attenuated while travelling along the sense wire , this effect is coupled with the gain of the amplifier and the choice of wire voltages .the choice of cable types and additional connectors seems to have a negligible effect on the pid performance ( sec .[ cablevariationsection ] ) .performance is very sensitive to the choice of amplifier ( sec .[ amplifier_results ] ) , but this is coupled with the sense wire voltage .there is a weak indication that matching the amplifier input impedance with the impedance and termination of the chamber itself gives better performance .the general result is clear : implementing cluster counting increases the particle identification capability of a drift chamber .we make no claim of having found the optimal equipment and analysis techniques in the multidimensional parameter space that we explored .thus we can state that cluster counting improves pid performance even in sub - optimal conditions .the absolute improvement in the pion selection efficiency at {\%} ] ( e.g. from {\%} ] , and see fig .[ r_and_dedx ] ) .the improvement is greatest when the pid performance from charge integration only is poorest , thus making the detector pid response more uniform .the optimal smoothing times for cluster - counting algorithms are on the order of a few nanoseconds , corresponding to a nyquist frequency of hundreds of . thus successfulcluster counting can be accomplished even with modest hardware .all future particle physics experiments that use a drift chamber for pid should strongly consider a cluster - counting option .this study shows that performance gains can be obtained that justify the additional complexity and cost of a cluster - counting drift chamber .this work was supported by the natural sciences and engineering research council of canada and triumf .we thank jerry vavra for lending us the mcps for our tof system , and hirosiha tanaka for lending us the oscilloscope for our data acquisition . hereare contained precise definitions of the cluster - counting and smoothing algorithms used in this work .we define a signal or trace as a series of voltage samples indexed by a discrete time variable .though the time variable has units ( in our raw format the units are {ps}$ ] ) , here we treat it as an integer index .in general , a signal will have samples indexed with integer running from to .two types of smoothing are used in the algorithms .one involves replacing each element of the signal by the average of itself and its neighbours , without reducing the total number of elements .the other reduces the total number of elements , and each element s value is the average of a set of elements in the original signal .the so - called `` boxcar smoothing '' with frames substitutes each sample with the average of itself and the previous samples .the first samples ( to ) are a boundary case , replaced simply by . 
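a short sketch of this boxcar smoothing ; the handling of the first few samples , where a full window is not yet available , is an assumption since the exact boundary prescription is not reproduced here :
....
def boxcar_smooth(signal, n):
    """Boxcar smoothing: each sample is replaced by the average of itself
    and the preceding samples, using an n-point window.  For the first few
    samples, where a full window is not available, the average is taken
    over the points seen so far (a boundary-handling assumption)."""
    smoothed = []
    for i, _ in enumerate(signal):
        window = signal[max(0, i - n + 1): i + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed
....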
the so - called `` true averaging '' procedure produces a signal with a reduced number of samples . for an -frame averaging ,the result is a series of voltages ( floored division ) , indexed with the integer running from to .this averaging has the potential to `` divide '' cluster signals if the averaging bin edges lie on top of a cluster ( fig .[ cc_cut_quantities ] ) .thus it is useful to also shift the smoothing bins by adding to the argument of inside the sum .if the smoothing is done with and without the shift , it is less likely that the same cluster will be divided in both cases , compared to doing the smoothing only one way .this algorithm has two parameters : a number of frames for smoothing and a threshold . from the non - smoothed signal at time subtracted the -frame smoothed signal at time .if the resulting quantity crosses the threshold downwards , a cluster is identified at that time .two copies of the original signal are smoothed by different amounts ( and frames ) using the `` boxcar smoothing '' .the -frame smoothed copy is then delayed by frames , and the two copies are then subtracted .if the resulting quantity crosses the threshold downwards , a cluster is counted at that time .the cluster times found by this algorithm are those in that satisfy the `` signal above average '' algorithm is a special case with , , and .another special case can be constructed with with the denominator set to .it can be shown that if the two smoothing times are equal ( ) , the quantity computed with smoothing and delay is identical to that computed with smoothing and delay .thus the parameter range can be restricted to without loss of generality .this algorithm has two parameters : a smoothing time and a threshold .it uses the true averaging procedure rather than the `` boxcar smoothing '' , so the time is labelled as in sec .[ averagingsection ] .simply put , the second derivative is calculated and compared with a threshold . the second derivative is calculated as follows : - [ \bar{v}(\bar{t}+1 ) - \bar{v}(\bar{t})]\bigr)\ ] ] where is the time interval corresponding to the samples that were averaged to do the smoothing .because this algorithm uses the true averaging , it suffers from the problem of potentially `` dividing '' cluster signals between smoothing bins ( fig .[ cc_cut_quantities ] ) .thus we also implemented a two - pass second - derivative algorithm that looks for clusters a second time on the averaged signal with a delay applied as described in sec .[ averagingsection ] .the numbers of clusters found in each pass are added together .it is understood that the resulting cluster count is inflated because many clusters will be double - counted , but nevertheless it is an appropriate variable for identifying particles . for a given cluster candidate ,the voltage and time in the original waveform at which the cluster - finding algorithm was triggered is recorded .then following the waveform forward , the voltage is checked to see when it has recovered above the recorded value ( the pulses are negative ) . if the voltage recovered within the timeout window , it is a short - lived pulse and thus rejected as a fake . if the timeout is reached without the voltage recovering , it is long - lived and kept as a real cluster . 
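the second - derivative counter and the timeout confirmation can be sketched as follows ; the threshold values , the sign convention for the negative - going pulses and the bookkeeping between averaged and raw sample indices are assumptions left to be tuned on data :
....
def true_average(signal, n, shift=0):
    """n-frame averaging: the trace is cut into consecutive bins of n
    samples (optionally shifted by `shift` samples) and each bin is
    replaced by its average, reducing the number of points."""
    out, start = [], shift
    while start + n <= len(signal):
        out.append(sum(signal[start:start + n]) / n)
        start += n
    return out

def count_clusters_2nd_derivative(signal, n, threshold, dt=1.0):
    """Count threshold crossings of the second derivative of the n-frame
    averaged signal.  `dt` is the time spanned by one averaged sample;
    the sign convention assumes negative-going pulses."""
    v = true_average(signal, n)
    d2 = [((v[i + 2] - v[i + 1]) - (v[i + 1] - v[i])) / dt ** 2
          for i in range(len(v) - 2)]
    hits, armed = 0, True
    for x in d2:
        if armed and x < -threshold:
            hits += 1
            armed = False
        elif x > -threshold:
            armed = True
    return hits

def confirm_with_timeout(raw_trace, i_trigger, timeout):
    """Timeout booster: keep a candidate cluster only if the raw trace has
    not recovered above its trigger-time voltage within `timeout` samples."""
    v0 = raw_trace[i_trigger]
    return all(v <= v0 for v in raw_trace[i_trigger + 1: i_trigger + 1 + timeout])
....
the two - pass variant simply runs the counter a second time with the averaging bins shifted ( by half a bin , say ) and adds the two counts .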
for a list of potential clusters , real clusters satisfy where is the chosen timeout .the rejection of fake clusters by the timeout procedure permits the use of lower thresholds in the original algorithm .the lower threshold increases the efficiency of finding real clusters ( smaller miss rate ) but increases the rate of detecting fake clusters .the timeout procedure then eliminates most of the fake clusters , keeping the real ones .g. charpak , r. bouclier , t. bressani , j. favier , and .zupaniv c , `` the use of multiwire proportional counters to select and localize charged particles , '' _ nucl ._ , vol .62 , no . 3 , pp .262268 , 1968 .
|
single - cell prototype drift chambers were built at triumf and tested with a {mev / c} ] corresponding to a {mhz}$ ] nyquist frequency .
keywords : cluster counting , drift chamber , gaseous ionization detector , detector , superb
pacs : 07.77.ka , 29.40.cs
|
the purpose of this lecture note is to illustrate a route for the definition of entropy using our experience with computers . in the processthe connection between statistical physics and computations comes to the fore .this is a question that plagues almost all especially beginning physics students .there are several correct ways to answer this . 1 .it is the perfect differential that one gets by dividing the heat transfered by a quantity that gives us the hot - cold feeling ( i.e. temperature ) .it is the log of the number of states available .it is something proportional to where is the probability that the system is in state i. ( ) 4 .it is just an axiom that there exists an extensive quantity , obeying certain plausible conditions , from which the usual thermodynamic rules can be obtained .( ) but the colloquial link between disorder or randomness and entropy remains unexpressed though , agreeably , making a formal connection is not easy .our plan is to establish this missing link _ a la _ kolmogorov . besides these conceptual questions, there is a practical issue that bugs many who do computer simulations where different configurations are generated by some set of rules . in the end one wants to calculate various thermodynamic quantities which involve both energy and entropy .now , each configuration generated during a simulation or time evolution has an energy associated with it . _ but does it have an entropy ? _ the answer is of course blowing in the wind .all thermodynamic behaviours ultimately come from a free energy , say , where , the energy , generally known from mechanical ideas like the hamiltonian , enters as an average , denoted by the angular brackets , _ but no such average for . as a result , one can not talk of `` free energy '' of a configuration at any stage of the simulation .all the definitions mentioned above associate to the ensemble , or distributions over the phase space .they simply forbid the question what is the entropy of a configuration " .too bad ! over the years we have seen the size of computers shrinking , speed increasing and power requirement going down .centuries ago a question that tickled scientists was the possibility of converting heat to work or finding a perfect engine going in a cycle that would completely convert heat to work . a current version of the same problem would be : can we have a computer that does computations but at the end does not require any energy . or , we take a computer , draw power from a rechargeable battery to do the computation , then do the reverse operations and give back the energy to the battery .such a computer is in principle a perpetual computer . _is it possible ? _what we mean by a computer is a machine or an object that implements a set of instructions without any intelligence .it executes whatever it has been instructed to do without any decision making at any point . at the outset , without loss of generality ,we choose binary ( 0,1 ) as the alphabet to be used , each letter to be called a bit .the job of the computer is to manipulate a given string as per instructions .just as in physics , where we are interested in the thermodynamic limit of infinitely large number of particles , volumes etc , we would be interested in infinitely long strings .the question therefore is can bit manipulations be done without cost of energy ? "the problem that a configuration can not have an entropy has its origin in the standard statistical problem that a given outcome of an experiment can not be tested for randomness .e.g. 
, one number generated by a random number generator can not be tested for randomness . for concreteness ,let us consider a general model system of a magnet consisting of spins arranged on a square lattice with representing a lattice site .if necessary , we may also use an energy ( or hamiltonian ) where the sum is over nearest neighbours ( i.e. bonds of the lattice ) .suppose the temperature is so high that each spin can be in anyone of the two states with equal probability .we may generate such a configuration by repeated tossing of a fair coin . if we get ( :h,:t )is it a random configuration ? or can the configurations of spins as shown in fig .[ fig:1 ] be considered random ? are represented by arrows pointing up or down .( a ) a ferromagnetic state , ( b ) an antiferromagnetic state , and ( c ) a seemingly random configuration . ] with spins ( or bits ) , under tossing of a fair coin , the probability of getting fig .[ fig:1](a ) is and so is the probability of ( b ) or ( c ) .therefore , the fact that a process is random can not be used to guarantee randomness of the sequence of outcomes .still , we do have a naive feeling .all heads in coin toss experiments or strings like 1111111 ... ( ferro state of fig . [ fig:1](a ) ) or 10101010 ... ( anti - ferro state of fig [ fig:1](b ) ) are never considered random because one can identify a pattern , but a string like 110110011100011010001001 ... ( or configuration of fig [ fig:1](c ) ) may be taken as random ._ but what is it that gives us this feeling ? _ the naive expectation can be quantified by a different type of arguments , not generally emphasized in physics . suppose i want to describe the string by a computer programme ; or rather by an algorithm .of course there is no unique programming " language nor there is a `` computer - but these are not very serious issues .we may choose , arbitrarily , one language and one computer and transform all other languages to this language ( by adding ' ' translators " ) and always choose one particular computer .the two strings , the ferro and the anti - ferro states , can then be obtained as outputs of two very small programmes , .... ( a ) print 1 5 million times ( ferro state ) ( b ) print 10 2.5 million times ( antiferro state ) .... in contrast , the third string would come from .... ( c ) print 110110011100 ... ( disordered state ) ....so that the size of the programme is same as the size of the string itself .this example shows that the size of the programme gives an expression to the naive feeling of randomness we have .we may then adopt it for a quantitative measure of randomness ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ definition : let us define _ randomness _ of a string as the size of the _ minimal _ programme that generates the string ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the crucial word is minimal " . in computer parlancewhat we are trying to achieve is a compression of the string and the minimal programme is the best compression that can be achieved .another name given to what we called `` randomness '' is _ complexity _ , and this particular measure is called kolmogorov algorithmic complexity . 
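a crude but concrete way to get a feel for this measure is to use an off - the - shelf compressor : the compressed size is only an upper bound on the kolmogorov complexity ( which is not computable ) , but it already separates the ordered and disordered strings ; the string length below is an arbitrary illustrative choice :
....
import random
import zlib

N = 100_000  # length of the spin string (an illustrative size)

ferro = "1" * N                                # all spins up
antif = "10" * (N // 2)                        # alternating up/down
random.seed(0)
disord = "".join(random.choice("01") for _ in range(N))  # coin tosses

for name, s in [("ferro", ferro), ("antiferro", antif), ("disordered", disord)]:
    compressed = len(zlib.compress(s.encode(), 9))
    print(f"{name:10s}  raw = {len(s):6d} bytes   compressed = {compressed:6d} bytes")
....
the two ordered strings shrink to a few hundred bytes while the coin - toss string stays comparable to its information content , in line with the programme - size picture above .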
the same quantity , randomness , is also called information , because the more we can compress a string the less is the information content .information and randomness are then two sides of the same coin : the former expressing a positive aspect while the 2nd a negative one !let be a programme for the string of configuration and let us denote the length of any string by .the randomness or complexity is we now define _ a string as random _ , if its randomness or complexity is similar to the length of the string , or , to be quantitative , if randomness is larger than a pre - chosen threshold , e.g , say , .the choice of is surely arbitrary here and any number would do .a few things need to be mentioned here . _( i ) _ by definition , a minimal programme is random , because its size can not be reduced further . _( ii ) _ it is possible to prove that a string is _ not _ random by explicitly constructing a small programme , but it is not possible to prove that a string _ is _ random .this is related to gdel s incompleteness theorem .for example , the digits of may look random ( and believed to be so ) until one realizes that these can be obtained from an efficient routine for , say , .we may not have a well - defined way of constructing minimal algorithms , but we agree that such an algorithm exists . _ ( iii ) _ the arbitrariness in the choice of language leads to some indefiniteness in the definition of randomness which can be cured by agreeing to add a translator programme to all other programmes .this still leaves the differences of randomness of two strings to be the same .in other words , randomness is defined upto an arbitrary additive constant .entropy in classical thermodynamics also has that arbitrariness . _( iv ) _ such a definition of randomness satisfies a type of subadditivity condition , where the term can not be ignored .accepting that this kolmogorovian approach to randomness makes sense and since we connect randomness in a physical system with entropy , let us associate this randomness with the entropy of that string or configuration . for an ensemble of strings or configurations with probability for the -th string or configuration , the average entropy will be defined by ( taking the boltzmann constant ) .we shall claim that this is the thermodynamic entropy we are familiar with . since the definition of entropy in eq .( [ eq:2 ] ) looks ad hoc , let us first show that this definition gives us back the results we are familiar with . to complete the story , we then establish the equivalence with the gibbs definition of entropy .consider the ising problem .let us try to write the free energy of a state with + spins and spins with .the number of such configurations is an ordered list ( say lexicographical ) of all of these configurations is then made .if all of these states are equally likely to occur then one may specify a state by a string that identifies its location in the list of configurations .the size of the programme is then the number of bits required to store numbers of the order of .let be the number of bits required . 
for general , is given by stirling s approximation then gives , \end{aligned}\ ] ] with , the probability of a spin being up .resemblance of eq .( [ eq:4 ] ) with the boltzmann formula for entropy ( sec .[ sec : introduction ] ) should not go unnoticed here .( [ eq:1 ] ) is the celebrated formula that goes under the name of entropy of mixing for alloys , solutions etc .it is important to note that no attempt has been made for minimalizations " of the algorithm or in other words we have not attempted to compress . for example , no matter what the various strings are , all of the n spin configurations can be generated by a loop ( algorithm represented schematically ) .... i = 0 10 i = i+1 l = length of i in binary print 0 ( n - l ) times , then " i " in binary if ( i < n ) go to 10 stop .... by a suitable choice of ( _ e.g. _ , ) the code for representation of can be shortened enormously by compressing .this shows that one may generate all the spin configurations by a small programme though there are several configurations that would require individually much bigger programmes .this should not be considered a contradiction because it produces much more than we want .it is fair to put a restriction that the programmes we want should be self delimiting ( meaning it should stop without intervention ) and should produce just what we want , preferably no extra output .such a restriction then automatically excludes the above loop .secondly , many of the numbers in the sequence from to can be compressed enormously .however , what enumeration scheme we use , can not be crucial for physical properties of a magnet , and therefore , we do need bits to convey an arbitrary configuration .it is also reassuring to realize that there are random ( i.e. incompressible ) strings in possible -bit strings .the proof goes as follows .if an -bit string is compressible , then the compressed length would be .but there are only such strings . nowthe compression procedure has to be one to one ( unique ) or otherwise decompression will not be possible .hence , for every , there are strings which are not compressible and therefore random .a related question is the time required to run a programme .what we have defined so far is the `` space '' requirement .it is also possible to define a `` time complexity '' defined by the time required to get the output . in this notewe avoid this issue of time altogether . in the kolmogorov approachwe can now write the free energy of any configuration , as with the thermodynamic free energy coming from the average over all configurations , if we now claim that obtained in eq .( [ eq:1 ] ) is the entropy of any configuration , and since no compression is used , it is the same for all ( this is obviously an approximation ) , we may use .the average energy may be approximated by assuming random mixture of up and down spins with an average value . if is the number of nearest neighbours ( for a square lattice ) , the free energy is then given by .\ ] ] note that we have not used the boltzmann or the gibbs formula for entropy . 
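the counting step above is easy to check numerically ; the sketch below compares the exact number of bits , the base - 2 logarithm of the binomial coefficient , with the stirling ( entropy - of - mixing ) form for a few illustrative values of the system size and of the up - spin fraction :
....
from math import lgamma, log, log2

def exact_bits(n, n_up):
    """log2 of the binomial coefficient C(n, n_up), via log-Gamma to avoid
    huge integers: the number of bits needed to index one configuration."""
    return (lgamma(n + 1) - lgamma(n_up + 1) - lgamma(n - n_up + 1)) / log(2)

def stirling_bits(n, n_up):
    """Stirling / entropy-of-mixing approximation: n * s(p) with
    s(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    p = n_up / n
    return n * (-p * log2(p) - (1 - p) * log2(1 - p))

for n, n_up in [(100, 30), (10_000, 3_000), (1_000_000, 300_000)]:
    print(n, round(exact_bits(n, n_up), 1), round(stirling_bits(n, n_up), 1))
....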
by using the kolmogorov definitionwhat we get back is the mean field ( or bragg - williams ) approximation for the ising model .as is well - known , this equation on minimization of with respect to , gives us the curie - weiss law for magnetic susceptibility at the ferro - magnetic transition .no need to go into details of that because the purpose of this exercise is to show that the kolmogorov approach works .a more elementary example is the sckur - tetrode formula for entropy of a perfect gas .we use cells of small sizes such that each cell may contain at most one particle . for n particles we need numbers to specify a configuration , because each particle can be in one of cells .the size in bits is so that the change in randomness or entropy as the volume is changed from to is the indistinguishability factor can also be taken into account in the above argument , but since it does not affect eq .( [ eq:6 ] ) , we do not go into that . similarly momentum contribution can also be considered .it may be noted here that the work done in isothermal expansion of a perfect gas is where is the pressure satisfying and is defined in eq .( [ eq:6 ] ) . both eqs .( [ eq:6 ] ) and ( [ eq:7 ] ) are identical to what we get from thermodynamics .the emergence of is because of the change in base from to .it seems logical enough to take this route to the definition of entropy and it would remove much of the mist surrounding entropy in the beginning years of a physics student .for the computer problem mentioned in the introduction , one needs to ponder a bit about reality . in thermodynamics ,one considers a reversible engine which may not be practical , may not even be implementable . but a reversible system without dissipation can always be justified .can one do so for computers ? to implement an algorithm ( as given to it ) , one needs logic circuits consisting of say and and nand gates ( all others can be built with these two ) each of which requires two inputs ( a , b ) to give one output ( c ) . by construction ,such gates are irreversible : given c , one can not reconstruct a and b. however it is possible , at the cost of extra signals , to construct a reversible gate ( called a toffoli gate ) that gives and or nand depending on a third extra signal .the truth table is given in appendix [ sec : toffoli - gate ] .reversibility is obvious .a computer based on such reversible gates can run both ways and therefore , after the end of manipulations , can be run backwards because the hardware now allows that . just like a reversible engine, we now have a reversible computer .all our references to computers will be to such reversible computers .let us try to formulate a few basic principles applicable to computers .these are rephrased versions of laws familiar to us ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * law i * : it is not possible to have perpetual computation . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ in other words , we can not have a computer that can read a set of instructions and carry out computations to give us the output _ without any energy requirement_. proving this is not straight forward but this is not inconsistent with our intuitive ideas .we wo nt pursue this .this type of computer may be called perpetual computer of type i. first law actually forbids such perpetual computers . 
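before stating the second law it is worth making the reversibility claim for the toffoli gate concrete ; a minimal sketch , with the convention ( ours ) that the first two bits are the controls :
....
def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate: the two control bits a, b
    pass through unchanged and the target bit c is flipped when a = b = 1."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every input triple.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))

# With the extra signal c fixed to 1 the target output is NAND(a, b);
# with c = 0 it is AND(a, b), so universal logic can be built reversibly.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", toffoli(a, b, 1)[2], "AND:", toffoli(a, b, 0)[2])
....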
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * law ii * : it is not possible to have a computer whose sole purpose is to draw energy from a reversible source , execute the instructions to give the output and run backward to deliver the energy back to source , and yet leave the memory at the end in the original starting state . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a computer that can actually do this will be called a perpetual computer of second kind or type ii . in order to see the importance of the second law, we need to consider various manipulations on a file ( which is actually a string ) .our interest is in long strings ( length going to infinity as in thermodynamic limit in physics ) .now suppose we want to edit the file and change one character , say , in the 21st position .we may then start with the original file and add an instruction to go to that position and change the character . as a resultthe edit operation is described by a programme which is almost of the same length ( at least in the limit of long strings ) as the original programme giving the string .therefore there is no change in entropy in this editing process .suppose we want to copy a file .we may attach the copy programme with the file .the copy programme itself is of small size .the copy process therefore again does not change the entropy .one may continue with all the possible manipulations on a string and convince oneself that all ( but one ) can be performed at constant entropy .the exceptional process is _ delete or removal of a file_. there is no need of elaboration that this is a vital process in any computation . when we remove a file , we are replacing the entire string by all zeros - a state with negligible entropy .it is this process that would reduce the entropy by for characters so that in conventional units the heat produced at temperature is ( see eq .( [ eq:7 ] ) ) . 
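The entropy bookkeeping just described can be turned into a back-of-envelope number. The sketch below evaluates the minimum heat N k_B T ln 2 released when a file of N bits is erased (reset to all zeros) at temperature T; the file sizes and the temperature of 300 K are illustrative choices, not values from the text.

....
# back-of-envelope sketch: minimum heat released when a file of n bits
# is erased (reset to all zeros) at temperature t, q = n * k_b * t * ln 2.
from math import log

K_B = 1.380649e-23      # boltzmann constant, J / K
T = 300.0               # illustrative room temperature, K

def landauer_heat(n_bits, temperature=T):
    """lower bound on the heat (joule) released by erasing n_bits."""
    return n_bits * K_B * temperature * log(2)

for label, n_bits in [("1 kB file", 8 * 1024),
                      ("1 GB file", 8 * 1024 ** 3),
                      ("1 TB disk", 8 * 1024 ** 4)]:
    print(f"{label}: at least {landauer_heat(n_bits):.3e} J of heat")
....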
we know from physics that entropy reduction does not happen naturally ( we can not cool a system easily ) .we can have a reversible computer that starts by taking energy from a source to carry out the operations but to run it backward ( via toffoli gates ) it has to store many redundant information in memory .even though the processes are iso - entropic and can be reversed after getting the output to give back the energy to the source ,we * no longer * have the memory in the same `` blank '' state we started with .to get back to that `` blank '' state , we have to clear the memory ( remove the strings ) .this last step lowers the entropy , a process that can not be carried out without help from outside .if we do not want to clear the memory , the computer will stop working once the memory is full .this is the second law that prohibits perpetual computer of second kind .the similarity with thermodynamic rules is apparent . to complete the analogy ,a computer is like an `` engine '' and memory is the fuel . from a practical point of view, this loss of entropy is given out as heat ( similar to latent heat on freezing of water ) .landauer in 1961 pointed out that the heat produced due to this loss of entropy is per bit or for bits .for comparison , one may note that is the total amount of entropy lost when an ising ferromagnet is cooled from a very high temperature paramagnetic phase to a very low temperature ferromagnetic phase .if the process of deletion on a computer occurs very fast in a very small region of space , this heat generation can create problem .it therefore puts a limit on miniaturization or speed of computation .admittedly this limit is not too realistic because other real life processes would play major roles in determining speed and size of a computer .see appendix [ sec : heat - generated - chip ] for an estimate of heat generated .let us now look at another aspect of computers namely transmission of strings ( or files ) or communication .this topic actually predates computers . to be concrete ,let us consider a case where we want to transmit images discretized into small cells of four colours , with probabilities the question in communication is : `` what is the minimal length of string ( in bits ) required to transmit any such image ? ''there are two possible ways to answer this question .the first is given by the kolmogorov entropy (= randomness = complexity ) while the second is given by a different powerful theorem called shannon s noiseless coding theorem . given a long string of say characters , if we know its kolmogorov entropy then that has to be the smallest size for that string .if we now consider all possible character strings with as the probability of the string , then is the average number we are looking for .unfortunately it is not possible to compute for all cases .here we get help from shannon s theorem .the possibility of transmitting a signal that can be decoded uniquely is guaranteed with probability 1 , if the average number of bits per character where s are the probabilities of individual characters .a proof of this theorem is given in appendix [ sec : proof - shann - theor ] .since the two refer to the same object , they are the same with probability 1 , _i.e. _ , the applicability of the shannon theorem is now shown for the above example . to choose a coding scheme , we need to restrict ourselves to _ prefix _ codes ( i.e. 
codes that do not use one code as the `` prefix '' of another code .as an example , if we choose , decoding can not be unique .e.g. what is 010 ? or ?nonuniqueness here came from the fact that ( ) has the code of ( ) as the first string or prefix .a scheme which is prefix free is to be called a prefix code . for our original example, we may choose as a possible coding scheme to find that the average length required to transmit a colour is it is a simple exercise to show that any other method would only increase the average size .what is remarkable is that an expression we are familiar with from the gibbs entropy and also see in the shannon theorem . in casethe source changes its pattern and starts sending signals with equal probability we may adopt a different scheme with for which the average length is this is less than what we would get if we stick to the first scheme . such simple schemes may not work for arbitrary cases as , e.g. , for in the first scheme we get while the second scheme would give . in the limit of , we can opt for a simpler code one way to reduce this length is then to make a list of all possible strings , where in some particular order and then transmit the item number of the message .this can not require more than bits per character .we see the importance of the gibbs formula but it is called the shannon entropy . it is to be noted that the shannon theorem looks at the ensemble and not at each string independently . therefore the shannon entropy is ensemble based , but as the examples of magnet or noninteracting gas showed , this entropy can be used to get the entropy of individual strings . given a set , like the colours in the above example , we can have different probability distributions for the elements .the shannon entropy would be determined by that distribution . in the kolmogorov case, we are assigning an `` entropy '' to the long string or state but is determined by the probabilities s of the long strings which are in turn determined by the s of the individual characters . since both refer to the best compression on the average , they have to be equivalent . it should however be noted that this equivalence is only in the limit and is a probability 1 statement meaning that there are configurations which are almost not likely to occur and they are not counted in the shannon entropy . instead of the full list to represent all the configurations ( as we did in eqs .( [ eq:3 ] ) and ( [ eq:4 ] ) ) , it suffices to consider a smaller list consisting of the relevant or typical configurations .they are in number ( see appendix [ sec : proof - shann - theor ] for details ) , typically requiring bits per character .a physical example may illustrate this . even though all configuration of molecules in a gas are allowed and should be taken into account , it is known that not much harm is done by excluding those configurations where all the molecules are confined in a small volume in one corner of a room .in fact giving equal weightage to all the configurations in eq .( [ eq:4 ] ) is one of the sources of approximations of meanfield theory .we now try to argue that statistical mechanics can also be developed with the above entropy picture . to do so, we consider the conventional canonical ensemble , i.e. , a system defined by a hamiltonian or energy in contact with a reservoir or bath with which it can exchange only energy . 
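Before following the canonical-ensemble argument below, the coding bookkeeping above is easy to verify numerically. In the sketch that follows, the colour probabilities 1/2, 1/4, 1/8, 1/8 and the code words 0, 10, 110, 111 are assumed for illustration, since the exact numbers of the example did not survive extraction here; the point is only that the expected length of a well-matched prefix code equals -sum p log2 p, and that the same code becomes longer when the source statistics change.

....
# sketch of the prefix-code bookkeeping discussed above. the colour
# probabilities and code words below are illustrative assumptions.
from math import log2

probs = {"c1": 0.5, "c2": 0.25, "c3": 0.125, "c4": 0.125}   # assumed example
code = {"c1": "0", "c2": "10", "c3": "110", "c4": "111"}    # prefix-free

def is_prefix_free(codebook):
    words = list(codebook.values())
    return not any(w1 != w2 and w2.startswith(w1)
                   for w1 in words for w2 in words)

avg_len = sum(probs[s] * len(code[s]) for s in probs)
entropy = -sum(p * log2(p) for p in probs.values())

assert is_prefix_free(code)
print(f"average code length = {avg_len:.3f} bits / symbol")
print(f"shannon entropy     = {entropy:.3f} bits / symbol")

# a mismatched source, e.g. equiprobable colours, makes the same code longer
equal = {s: 0.25 for s in probs}
print("same code, equiprobable source:",
      sum(equal[s] * len(code[s]) for s in equal), "bits / symbol")
....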
in equilibrium, there is no net flow of energy from one to the other but there is exchange of energy going on so that our system goes through all the available states in phase space .this process is conventionally described by appropriate equations of motions but , though not done generally , one may think of the exchange as a communication problem . in equilibrium ,the system is in all possible states with probability for the state and is always in communication with the reservoir about its configuration .the communication is therefore a long string of the states of the system each occurring independently and identically distributed ( that s the meaning of equilibrium ) .it seems natural to make the hypothesis that nature picks the optimal way of communication .we of course assume that the communication is noiseless .the approach to equilibrium is just the search for the optimal communication .while the approach process has a time dependence where the `` time '' complexity would play a role , it has no bearing in equilibrium and need not worry us . with that in mind , we may make the following postulates : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \(1 ) in equilibrium , the energy remains constant .+ ( 2 ) the communication with the reservoir is optimal with entropy .+ ( 3 ) for a given average energy , the entropy is maximum to minimize failures in communication . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the third postulate actually assures that the maximum possible number of configurations ( ) are taken into account in the communication process .no attempt has been made to see if these postulates can be further minimized . with these sensible postulates ,we have the problem of maximizing with respect to s keeping =constant and .a straight forward variational calculation shows that with being the standard partition function .the parameter is to be chosen properly such that one gets back the average energy .the usual arguments of statistical mechanics can now be used to identify with the inverse temperature of the reservoir .we have tried to show how the kolmogorov approach to randomness may be fruitfully used to define entropy and also to formulate statistical mechanics .once the equivalence with conventional approach is established , all calculations can then be done in the existing framework .what is gained is a conceptual framework which lends itself to exploitation in understanding basic issues of computations .this would not have been possible in the existing framework .this also opens up the possibility of replacing `` engines '' by `` computers '' in teaching of thermodynamics .* acknowledgments * this is based on the c. k. majumdar memorial talks given in kolkata on 22nd and 23rd may 2003 .i was fortunate enough to have a researcher like prof .chanchal kumar majumdar as a teacher in science college .i thank the ckm memorial trust for organizing the memorial talk in science college , kolkata .the truth table of the toffoli gate is given below . 
with three inputs a , b , c ,the output in c is the and or nand operation of a and b depending on c=0 or 1 .a1 : toffoli gate [ cols="^,^,^,^,^,^",options="header " , ]the statement of shannon s noiseless coding theorem is : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if is the minimal average code length of an optimal code , then where ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the adjective `` noiseless '' is meant to remind us that there is no error in communication .a more verbose statement would be _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if we use bits to represent strings of characters with shannon entropy , then a reliable compression scheme exists if .conversely , if , no compression scheme is reliable ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the equivalence of the two statements can be seen by recognizing that need not be an integer but better be .let us first go through a heuristic argument to motivate shannon s coding theorem .suppose a source is emitting signals independently and identically distributed with two possible values with probability , and with probability .for a long enough string the probability is },\end{aligned}\ ] ] because for large the number of expected is and is .this expression shows that the probability of a long string is determined by ,\ ] ] the `` entropy '' for this particular problem .note the subtle change from eq .( [ eq:11 ] ) to eq .( [ eq:11b ] ) .this use of expectation values for large led to the result that most of the strings , may be called the `` typical '' strings , belong to a subset of strings ( out of total strings ) .let us define a typical string more precisely for any distribution .a string of symbols will be _ called typical _ ( or better -typical ) if for any given .( [ eq:12 ] ) may also be rewritten as - s \le \epsilon\ ] ] now , for random variables , s , defined by , are also independent identically distributed random variables . 
itis then expected that , the average value of s , averaged over the string for large , should approach the ensemble average , namely , .this expectation comes from the law of large numbers that { \buildrel{n\rightarrow\infty}\over\longrightarrow } \1,\ ] ] for any .this means that given an we may find a so that the above probability in eq .[ eq:14 ] is greater than . recognizing that eq .( [ eq:14 ] ) implies \ge 1 - \delta.\ ] ] we conclude that the probability that a string is typical as defined in eqs .( [ eq:12 ] ) and ( [ eq:16 ] ) is .let us now try to estimate the number , the total number of typical strings .let us use a subscript for the typical strings with going from to .the sum of probabilities s of the typical strings has to be less than or equal to one , and using the definition of eq .( [ eq:12 ] ) , we have one inequality this gives .let us now get a lower bound for .we have just established that the probability for a string to be typical is . using the other limit from eq .( [ eq:12 ] ) we have which gives .the final result is that the total number of typical strings satisfies where can be chosen small for large .hence , in the limit now let us choose a coding scheme that requires number of bits for the string of characters .our aim is to convert a string to a bit string and decode it - the whole process has to be unique . representing the coding and decoding by ``operators '' and respectively , and any string by , what we want can be written in a familiar form where the last line is the equivalent `` pipeline '' in a unix or gnu / linux system .let s take .we may choose an such that .it is a trivial result that . here is the total number of possible bit strings .hence all the typical strings can be encoded .nontypical strings occur very rarely but still they may be encoded . if , then and obviously all the typical strings can not be encoded .hence no coding is possible .this completes the proof of the theorem .as per a report of 1988 , the energy dissipation per logic operation has gone down from joule in 1945 to joule in 1980 s .( ref : r. w. keyes , ibm j. res .devel . * 32 * , 24 ( 1988 ) url : http://www.research.ibm.com/journal/rd/441/keyes.pdf ) for comparison , thermal energy at room temperature is of the order of joule . a more recent example . for a pentium 4 at 1.6ghz ,if the cpu fan ( that cools the cpu ) is kept off , then during operations the cpu temperature may reach ( yes celsius ) as monitored by standard system softwares on an hcl made pc ( used for preparation of this paper ) .
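Returning to the proof of Shannon's theorem above, the concentration it relies on is easy to see numerically. The sketch below samples long strings from an iid binary source with P(1) = p and checks that the per-symbol quantity -(1/n) log2 P(string) clusters within epsilon of the entropy h(p); the values of p, n, epsilon and the number of sampled strings are illustrative.

....
# numerical sketch of the typicality argument proved above: for an iid
# binary source, -(1/n) log2 P(string) concentrates near the entropy.
import random
from math import log2

def entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)

def surprise_per_symbol(bits, p):
    ones = sum(bits)
    zeros = len(bits) - ones
    return -(ones * log2(p) + zeros * log2(1 - p)) / len(bits)

random.seed(0)
p, n, samples, eps = 0.2, 2000, 200, 0.05
h = entropy(p)

inside = 0
for _ in range(samples):
    bits = [1 if random.random() < p else 0 for _ in range(n)]
    if abs(surprise_per_symbol(bits, p) - h) <= eps:
        inside += 1

print(f"h(p) = {h:.4f} bits; fraction of sampled strings that are "
      f"{eps}-typical: {inside / samples:.2f}")
....

This is the probability-1 statement used above: almost every sampled string is typical, even though the typical strings are an exponentially small fraction of all 2^n strings.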
|
a definition of entropy via the kolmogorov algorithmic complexity is discussed . as examples , we show how the mean - field theory for the ising model and the entropy of a perfect gas can be recovered . the connection with computation is pointed out by paraphrasing the laws of thermodynamics for computers . also discussed is an approach that may be adopted to develop statistical mechanics from the algorithmic point of view .
|
the importance of equilibrium magnetic field reconstruction in tokamaks is well understood throughout fusion science. indeed , it is the geometry of the equilibrium magnetic field that provides a canonical coordinate , via indexing of nested flux surfaces , which is needed for a wide variety of post shot theoretical and diagnostic data analysis. equilibrium reconstruction also gives the outer boundary of the plasma : a key element to many open - circuit , real - time control methodologies. while schemes exist for plasma control using only classical electrostatics to determine the boundary reconstruction , the vast majority of reconstructions of the internal magnetic geometry rely upon solving kinetic force - balance equations with a single solution being chosen as the best fit to available diagnostic data .this approach to internal reconstruction is most famously implemented through the efit code ( or variants thereof ) that uses picard iteration to find solutions of the grad - shafranov ( gs ) force - balance equation , which best fit data observed from equilibrium magnetic diagnostics ( e.g. fluxloops and pickup coils). while this approach of leveraging the gs equation to perform equilibrium reconstruction has been successfully utilised throughout the field , the accuracy of the method is intrinsically linked to how accurately the gs equation accounts for all the equilibrium forces in the plasma .indeed , factors such as flow and isotropy need to be explicitly added into the underpinning force - balance equations to be correctly accounted for in the equilibrium reconstruction. moreover , solutions to equilibrium reconstruction are not generally unique ; and thus , experiment - specific numerical schemes are frequently employed to guarantee that the picard iteration converges to a physical solution . in parallel to the inclusion of more physics in equilibrium solvers ,there has been the improvement in the diversity , accuracy and resolution of plasma diagnostics .interpretation , however , often requires a detailed knowledge of the plasma equilibrium .for example , inference of the toroidal current profile from line of sight measurements of the polarisation angle requires a knowledge of the poloidal flux across the plasma .formally , diagnostic forward functions relate the vector of plasma parameters to the measurement vector . 
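As a schematic illustration of such a forward function, the sketch below maps a vector of plasma parameters to predicted diagnostic signals through a linear response matrix plus a known external contribution, and then adds noise to produce synthetic "measurements". The matrix entries, dimensions and noise level are random, purely illustrative stand-ins and have nothing to do with the actual MAST geometry or diagnostic response.

....
import numpy as np

# schematic forward function for a linear magnetic diagnostic: predicted
# signals are a response matrix acting on the plasma parameter vector plus
# a fixed contribution (e.g. from external coils). the matrix used here is
# random and purely illustrative.
rng = np.random.default_rng(1)

n_params, n_obs = 50, 12                         # hypothetical sizes
response = rng.normal(size=(n_obs, n_params))    # stand-in response matrix
external = rng.normal(size=n_obs)                # stand-in known contribution

def forward(params):
    """map plasma parameters to predicted diagnostic observations."""
    return response @ params + external

true_params = rng.normal(size=n_params)
noise = 0.05 * rng.normal(size=n_obs)
observations = forward(true_params) + noise      # synthetic "measurements"
print("predicted:", np.round(forward(true_params)[:4], 3))
print("observed :", np.round(observations[:4], 3))
....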
for a linear system , such as toroidal current inference in a double null configuration , and normally related through a response matrix with additional contributions , such that .inference , or parameter estimation , involves inverting this relationship to give plasma parameters that are consistent with the data .a widespread technique used is least - square fitting , used for instance in efit , in which prior assumptions are included via a penalty term in the fit .given the large data - sets and complicated models , an arguably more rigorous approach to the integrated data - modelling challenge is the bayesian approach to inference in fusion plasmas .in contrast to least square fitting , the bayesian approach to inference in fusion plasmas , developed by multiple authors , involves the specification of an initial prior probability distribution function ( pdf ) , , which is then updated by taking into account information that the measurements provide through the likelihood pdf .the result is the posterior distribution given by bayes formula the advantage of the bayesian approach over traditional inversion techniques is two - fold : ( i ) prior knowledge , including known parameter inter - dependencies is made explicit , and ( ii ) as the formulation is probabilistic , random errors , systematic uncertainties and instrumental bias are an integral part of the analysis rather than an afterthought .the application of bayesian approach to inference and parameter estimation in complex physics problems is not new , with fields ranging from astronomy to nuclear reaction analysis .a topical illustrative example comes from parameter estimation in the climate science community , in modelling land - surface - atmosphere processes and global carbon dioxide concentrations in the atmosphere. the models for carbon dioxide exchange are complex and span more equations of state than a plasma .the community atmosphere biosphere land exchange model ( cable ) is a land surface model , used to calculate the fluxes of momentum , energy , water and carbon between the land surface and the atmosphere and to model the major biogeochemical cycles of the land ecosystem .it solves radiation , heat and mass flow transport on a global scale , accounting for many different land ecosystems .data is disparate and vast , and comes from an flux towers , carbon stock , carbon in biomass , litter falls , meteorological data , stream flow and satellite imagery. in this community , the challenge of model and data integration , also called model data fusion or model data synthesis , is defined as combining models and observations by varying some properties of the model , to give the optimal combination of both. 
the topic of model data fusion is crucial to give credibility to the calculation of carbon dioxide fluxes and processes in the atmosphere , and thus provide a reliable basis for public policy on climate change .bayesian inference , together with other model - data fusion techniques , is extensively utilised .in contrast to climate science , the systematic inclusion of uncertainties in both data and models has , to - date , not been a strength of the fusion community .several facets are driving change .iter discharges will be extremely expensive , and so it will be crucial to maximise the value of acquired data .the challenging environment of a fusion reactor will mean fusion power plants will operate with a very much reduced set of diagnostics .finally , as more physics is added to force - balance descriptions , there is a need to validate physics models .once validated , such models may be able to be used as a constraint in equilibrium reconstruction to infer additional information about the plasma , and thereby create `` model diagnostics '' .these aspects have motivated the recent development of a bayesian approach to equilibrium reconstruction , with one line of research producing a code called the bayesian equilibrium analysis and simulation tool ( beast ) , which is able to quantify fit degeneracies and infer spatially - localised discrepancies from a force - balance solution .this paper presents further research advancements since the introduction of beast by the authors and that have subsequently been used to advance the code .the paper is structured as follows : [ sec1 ] gives a brief overview of bayesian inference and its application to equilibrium reconstruction .this is followed by a general discussion on the computational challenges surrounding bayesian equilibrium reconstruction and how these have been addressed by recent advancements , coded into beast .state - of - the - art results coming from the use of beast to analyse discharges on the mega - ampere spherical tokamak ( mast ) are then presented , followed up be a concluding remarks encompassing future research endeavours and a summary of the current status of beast . finally , two appendices detail specifics on recent advancements surrounding posterior optimisation and integration .bayesian inference offers an alternate approach to equilibrium modelling in fusion plasmas , and a pathway to validate different equilibrium model descriptions .some understanding can be gleaned by understanding the application of bayes theorem to a single observation with and . in this case ,bayes formula becomes where has been dropped to simplify the notation ; this convention will be maintained throughout the remainder of the paper .as and are given and thus assumed to be constant , so is , which is reflected by the proportionality in eq.([eq2 ] ) . the _ forward model _ , ,is implicitly contained within and is a deterministic mapping from the space of model parameters to the space of associated diagnostic observations .that is , the forward model generates a prediction of what the diagnostic observations would be , given a set of model parameters . in most treatments likelihoodsare assumed to be of the form where is represents a gaussian distribution over pair - wise independent variables .the first argument of the gaussian distribution represents the mean vector , with the second being the entries in a diagonal covariance matrix .the justification for the form of the likelihood is discussed elsewhere . 
using the likelihood in eq.([eq2a ] ) , the following form can be written for the posterior : from eq.([eq3 ] ) , it is clear that the posterior represents a probability distribution over model parameters , if given a set of diagnostic observations and uncertainties .equation ( [ eq3 ] ) is the form which is ultimately integrated to find statistical moments of model parameters and various marginalisations thereof .hole _ et ._ have implemented bayesian inversion on mast using the minerva framework. within this framework , probabilistic graphical models are used to project the dependence of the posterior distribution function on the prior , the data , and the likelihood .an advantage of this approach is that it visualises the complex interdependency between data and model , and thus expedites model development .the techniques of bayesian inference have also been inverted to provide a tool to check data consistency. various authors have developed bayesian inference techniques for fusion plasmas that combine information from a wealth of diagnostics to enable probabilistic calculation of plasma configuration , provide automatic identification of faulty diagnostics, and developed a validation tool for generalised force - balance models .critically , bayesian techniques propagate experimental uncertainty correctly , and enable the relative uncertainty between acceptable physical models to be quantified . in von nessi _ et .al._ , a new method , based on bayesian analysis , is presented which unifies the inference of plasma equilibria parameters in a tokamak with the ability to quantify differences between inferred equilibria and gs solutions . at the heart of this techniqueis the new concept of weak observation , which allows multiple forward models to be associated with a single diagnostic observation .this new idea subsequently provides a means by which the space of gs solutions can be efficiently characterised via a prior distribution . the posterior evidence ( a normalisation constant of the inferred posterior distribution ) is also inferred in the analysis and is used as a proxy for determining how relatively close inferred equilibria are to force - balance for different discharges / times .figure [ fig:22254 ] shows expectation values of the toroidal current density inferred from ( a ) a toroidal current beam model , ( b ) a gs constraint , in which is computed from gs from a surface , together with fits to the pressure and toroidal flux function , and forward models for magnetics , total plasma current and mse predictions , and ( c ) the difference between the two .the difference in can give some indication to physical effects neglected in the gs equation , and/or reflect diagnostic disagreement .in this case the discrepancy is largest at the outboard mid - plane , and of order of 10% . using nested sampling , it is possible to integrate over the evidence , and thus compute of the inferred hyper - parameter , , which is the average current variance between gs and toroidal current beam values . the smaller the value of , the larger the degree of freedom necessary to predict diagnostic observations relative to other cases . 
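A minimal sketch of evaluating a posterior of the form in eqs. (2a) and (3) is given below: a Gaussian likelihood over pairwise-independent observations multiplied by a prior, returning an unnormalised log-posterior for a given parameter vector. The linear forward model, the uncertainties and the broad Gaussian prior are illustrative stand-ins; in particular, the prior here is not the force-balance prior used in BEAST.

....
import numpy as np

# minimal sketch of an unnormalised log-posterior of the form in eq. (3):
# gaussian likelihood over independent observations times a prior. the
# forward model, sigmas and gaussian prior are illustrative stand-ins.
rng = np.random.default_rng(2)
n_params, n_obs = 20, 30

response = rng.normal(size=(n_obs, n_params))
true_params = rng.normal(size=n_params)
sigma = 0.1 * np.ones(n_obs)                         # observation uncertainties
data = response @ true_params + sigma * rng.normal(size=n_obs)

def log_likelihood(params):
    residual = (data - response @ params) / sigma
    return -0.5 * np.sum(residual ** 2)              # gaussian, diagonal covariance

def log_prior(params, width=5.0):
    return -0.5 * np.sum((params / width) ** 2)      # broad gaussian stand-in

def log_posterior(params):
    return log_likelihood(params) + log_prior(params)   # up to a constant

print("log posterior at truth:", round(log_posterior(true_params), 1))
print("log posterior at zero :", round(log_posterior(np.zeros(n_params)), 1))
....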
for 22254 at 350 ms ( ka) and , while ( ka) and for adjacent discharge # 24600 at 265ms .this meant # 22254 was much closer to gs , and/or had fewer diagnostics in conflict , than # 24600 .expectation values of and inferred for mast discharge # 22254 at 350 ms , as calculated from 1800 samples of the posterior , using pickup coils , flux loops , mse and rogowski coil data .the inferred last closed flux surface is indicated in white on each figure .flux loop locations are indicated by stars outside the plasma region ; position and orientation of pickup coils are indicated via heavy bars on the out - board edge of the first wall and as a vertically oriented column line along the solenoid ; and mse observation positions are indicated by the stars across the mid - plane inside the plasma region .panel ( a ) shows current density data , with the current densities in ( b ) reflecting that of . note that the number and size of beams representing and are allowed to differ in beast inferences .( c ) shows the magnitude of the current density difference as averaged across each 2d rectangular step corresponding to .reproduced with permission from fig .2 of von nessi and hole .the equilibria inference described in [ sec1 ] poses a number of unique computational challenges when it comes to analysing the associated , high - dimensional ( i.e. having more than 1000 dimensions ) posterior distribution .this section discusses emergent points and recent research pursuits surrounding the computational aspects of bayesian equilibrium reconstruction , some of which have led to recent advances in the beast code beyond its original introduction in von nessi _ et .al._ the beam model used to represent the toroidal plasma current in the beast code , typically uses 524 model parameters to simulate a mast discharge. this high dimensionality alone constitutes a significant computational challenge in analysing the associated posterior distribution , as no efficient , general means exist to sample from such distributions. however , the plasma beam model obviously imposes no intrinsic spatial correlation between cross - sectional points contained within different beams , i.e. without the presence of an informative prior .alternative , more compact ( i.e. potentially more computationally efficient ) , representations for the beam currents have been trialled .specifically , both a 2d fourier and 2d bessel - fourier representations have been investigated .neither produced a computational procedure that could achieve the levels of accuracy of the beam model .this outcome is not surprising , given the non - linear nature of the force - balance constraint and the fact that there are no strict symmetries in the plasma current .indeed , these two points eliminate many paths by which a more compact representation of the beam currents could be achieved , under a force - balance constraint . 
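The kind of "compact representation" trial described above can be sketched as follows: a smooth synthetic current-density map, defined on a coarse beam grid, is least-squares fitted with a truncated 2D cosine (Fourier-type) basis, and the number of coefficients is compared with the number of beams. The grid size, mode count and synthetic profile are illustrative choices only, and the sketch deliberately omits the force-balance constraint that the text identifies as the real obstacle to such representations.

....
import numpy as np

# sketch of a "compact representation" trial: fit a synthetic 2d current
# density, defined on a coarse beam grid, with a truncated 2d cosine basis
# and compare the number of coefficients with the number of beams.
nr, nz, n_modes = 24, 36, 6
r = np.linspace(0.0, 1.0, nr)
z = np.linspace(0.0, 1.0, nz)
rr, zz = np.meshgrid(r, z, indexing="ij")

# synthetic, smooth toroidal current density (a single off-axis blob)
j_beams = np.exp(-((rr - 0.6) ** 2 + (zz - 0.5) ** 2) / 0.02)

# design matrix of 2d cosine modes evaluated on the beam centres
modes = [(m, k) for m in range(n_modes) for k in range(n_modes)]
basis = np.stack([np.cos(np.pi * m * rr) * np.cos(np.pi * k * zz)
                  for m, k in modes], axis=-1).reshape(-1, len(modes))

coeffs, *_ = np.linalg.lstsq(basis, j_beams.ravel(), rcond=None)
fit = (basis @ coeffs).reshape(nr, nz)

print("beam parameters      :", nr * nz)
print("fourier coefficients :", len(modes))
print("rms fit residual     :",
      round(float(np.sqrt(np.mean((fit - j_beams) ** 2))), 4))
....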
between flux loops , pickup coils and mse , there are about 150 diagnostic observations available for equilibrium reconstruction for a typical mast discharge .thus , fitting the parameters of the beam model above constitute an underdetermined problem , neglecting any priors .this underdetermined nature affords many screening " solutions to exist , where only currents nearest to diagnostic observation points need to be adjusted to compensate for any , otherwise arbitrary , configuration of beam currents .this translates into the posterior having many local maxima , with the global maxima called the maximum of the posterior ( map)generally not corresponding to a physically realistic plasma configuration . while the addition of a force - balance prior serves to greatly reduce the number of these local minima ( in addition to making the global maxima correspond to a physically realistic plasma configuration ) , finding this global maxima is still computationally difficult and constitutes the majority of the computational time in beast inferences. indeed , even with the inclusion of a force - balance constraint , many screening solutions ( i.e. local maxima ) are still present , with many lying in close proximity to the global maxima .a significant increase in accuracy in inferring the map has been achieved through the development of a new non - linear optimisation algorithm , outlined in appendix [ smo3 ] .this optimiser is based on the hookes and jeeves algorithm but has been heavily modified to avoid screening solutions , when exploring the posterior .thus , we call this algorithm the `` screening mitigation optimiser '' ( smo ) .the posterior distribution associated with beast equilibrium reconstruction is high - dimensional and non - gaussian , having the majority of the probability `` mass '' in a highly - localised region of model parameter space. sampling from such distributions is inherently problematic and extremely computationally intensive. indeed , markov - chain monte - carlo ( mcmc ) methods are too inefficient to employ , as there is little chance for the chain to find ( and subsequently stay in ) the region of high probability density . moreover , it is difficult to find bounds on the accuracy of an analytic approximation of the posterior .thus , beast uses a statistical quadrature to build up moments of the posterior directly , rather than approximating these moments through sampling statistics .the method currently employed by beast to integrate the posterior is a generalisation of the modified nested sampling ( ns ) algorithm presented in von nessi _ et .al._ , called the stochastic lebesgue quadrature ( slq ) .slq was developed to alleviate inefficiencies in employing mcmc techniques to generate prior samples under a likelihood constraint , which is intrinsically required by ns .moreover , slq is not affected by possibly ambiguities in classifying a pdf in the inference as part of the likelihood or a prior .slq is general enough to work with inferences that even use uninformative priors . generally , the method works be approximating the set for any given by a collection of pairwise disjoint hypercubes .these hypercubes are generated from an evolving swarm of model parameter vectors , each of which is already guaranteed to satisfy the given posterior constraint . 
extracting a uniform sample from the union of hypercubes is a fast computation that is not only leveraged to evolve the swarm at each step but also provides the statistical basis for the construction of any posterior quadrature .the details of this method are explained in appendix [ slq ] .finally , slq has recently been deployed in beast and has resulted in more thorough exploration of the posterior during quadrature construction , which has ultimately lead to more consistent results coming from the computations ( see [ results ] for more details ) .this has been achieved while maintaing the same to slightly shorter computational times , relative to those reported in von nessi _ et .expectation and standard deviation values of , along with the expectation values of inferred for mast discharge # 22254 at 350ms , as calculated from beast , using pickup coils , flux loops , mse and rogowski coil data . in each subfigurethe lcfs , as inferred from beast , is drawn in black .flux loop locations are indicated by stars outside the plasma region ; position and orientation of pickup coils are indicated via heavy bars on the out - board edge of the first wall and as a vertically oriented column line along the solenoid ; and mse observation positions are marked by the stars across the mid - plane inside the plasma region .( a ) shows the expectation of : the current density data , with ( b ) presenting the magnitude of one standard deviation thereof .( c ) shows the magnitude of the expectation ( introduced in von nessi _ et .al._ ) , which directly correspond to local deviations from force - balance , as dictated by the gs equation , with larger magnitudes reflecting a larger deviation . ] here we present results from two mast discharges , which demonstrate beast s growth in capabilities since being initially introduced .the discharges analysed were # 22254 at 350ms and # 24600 at 280ms .both are dnd plasmas , with the former being in h - mode and the latter in l - mode .discharge # 22254 was part of a hybrid scenario study carried out in mast and is heated with 3.13mw of nbi power .contrasting this is # 24600 that was part of an l - mode study being injected with 3.35mw of nbi power .discharge # 22254 was studied in von nessi _ et .al._ and is revisited here to show how the inference has been improved with recent advancements in beast .we look at # 24600 at 280ms , a time shortly after one of the two nbi beams disrupts , to study the impact of nbi disruption on the equilibrium .the following results are obtained from 76 pickup coils , 24 flux loops and 31 mse observations .finally , additive bias corrections and conducting surface currents are inferred in every beast inference ; however , these are treated as nuisance parameters , as they do not typically impact the physics interpretation of the results and thus , will not be reported here . to interpret the results below, we note that beast outputs a cross - sectional quantity , , which indicates how close an associated configuration is to axisymmetric force - balance , with smaller values indicating configurations being relatively closer to force - balance. 
.qualitatively , reflects the level of discrepancy between the toroidal current density , calculated from the gs equation ( which ultimately uses pressure , poloidal current and toroidal current model parameters ) and that calculated directly from the plasma beam model ; for more details see von nessi _ et .thus , relatively large values of can be viewed as an indicator for missing physics in the force - balance model . in [ sec2 ]the smo algorithm was introduced , which has consistently found diagnostic fits that were closer to force - balance than results coming from other optimisers .this is exemplified in fig.[fig::22254beamdata ] , where a fit for # 22254 is found with a with values times smaller than the initial results presented in von nessi _ et .al._ , which are reproduced in fig.[fig:22254 ] .in particular , a force - balance solution was able to be much better reconciled on the outboard edge of the plasma around the mse measurements .ultimately this has resulted in a retraction of the plasma boundary compared to the efit lcfs ( shown in fig.[fig::22254poloidalflux ] for comparison ) , which is only constrained by flux loops and inboard pickup coils , not mse .the difference in plasma volume accounts largely for the discrepancy of between efit and beast : and , respectively , with confidence intervals on the beast result .this small uncertainty in the beast result coincides with the inference being over - determined ( i.e. a very small degree of degeneracy ) , when a force - balance prior is leveraged against the unbiased space of model parameters .this makes sense , as the gs equation is an elliptic , semi - linear pde having unique solutions , which is paramaterised only by pre - defined representations of the pressure and poloidal current profiles ( polynomials of degree 3 and 5 respectively for these results ) .thus , the space of all configurations is biased toward an eight - dimensional submanifold , on which the problem becomes over determined , when reconciled against over 100 diagnostic observations .one may argue that the boundary paramaterization also needs to be accounted for ; but this can be determined independently of solving the gs equation and does not embody genuine degrees of freedom in the inference .poloidal flux function expectation and standard deviation as calculated by beast for # 22254 at 350ms .positions of magnetics and mse observation points are indicated as they were in fig.[fig::22254beamdata ] . in both subfiguresthe efit lcfs is plotted in white with the beast lcfs overlaid in black . ]figure [ fig::22254beamdata](b ) shows the magnitude of a single standard deviation for the current density distribution , which are noted to be uniformly much smaller compared to the expectation values in fig.[fig::22254beamdata](a ) .this is also consistent with a small uncertainty in the for the beast result .figure [ fig::22254poloidalflux ] shows the poloidal current expectation and first standard deviation magnitude . 
hereagain , the uncertainties are much smaller , as compared to those presented in von nessi _ et .in addition to , the uncertainty being on the order of five times smaller , the area of greatest uncertainty is larger , being spread across the outboard edge of the plasma , as opposed to be consolidated around the pf coils in von nessi _ et .the very small uncertainty in the poloidal flux reflects a very high precision in flux - surface positions for a gs model of force - balance .the expectation value of the poloidal flux function is very similar to the results in von nessi _ et .al._ , with the biggest difference being that the outboard lcfs has slightly migrated toward the core of the plasma . for # 22254 at 350ms , the inferred pressure , poloidal current and q - profiles were all inferred with very similar expectations and uncertainties , compared to previous results . in general , these profiles exhibit expectations that are in good agreement with efit and have extremely small uncertainties .moreover , these profile appear to be close to gaussian marginalisations , showing symmetric uncertainties and having their expectations coincide with their respective maps .expectation and standard deviation values of , along with the expectation values of inferred for mast discharge # 24600 at 280ms , using the same diagnostics as in fig.[fig::22254poloidalflux ] .the lcfs and diagnostic positions are also likewise indicated .( a ) shows the expectation of , with ( b ) again showinging the magnitude of one standard deviation thereof .( c ) shows the magnitude of the expectation . ]reflecting 280ms immediately following a nbi disruption , fig.[fig::24600beamdata ] shows an equilibrium inference that is significantly out of force - balance .the force - balance discrepancy peaks out around at four , spatially separated point , clearly indicated in fig.[fig::24600beamdata](a ) .moreover , the uncertainties on the toroidal current are generally one to two orders magnitude greater than those for # 22254 at 350ms .this relative increase in uncertainty is due to an increase in fit degeneracy , as more degrees of freedom will emerge the farther away from force - balance the inference gets . interpreting between beast and efit is challenging in this context , as the beast inference is not in force - balance .however , comparing the efit and beast values of and , respectively , for this quantity show that both agree that this value be less about the same amount , when compared to the results from # 22254 .moreover , the uncertainties on the beast result are about an order of magnitude greater , which is consistent with the arguments put forth above .poloidal flux function and q - profile expectations calculated by beast for # 24600 at 280ms .positions of magnetics and mse observation points in ( a ) are indicated as they were in fig.[fig::22254beamdata ] . in ( a ), the efit lcfs is plotted in white with the inferred lcfs overlaid in black .( b ) displays the q - profile as calculated by beast ( the expectation ) and efit , represented by the purple and green lines , respectively .uncertainties on the beast q - profile are too small to visually resolve on the scale of the figure and have thus been suppressed . 
]figure [ fig::24600poloidalflux](a ) shows a good agreement between both the efit an beast lcfs .again , this can be explained by the arguments in the preceding paragraph regarding the growth of degeneracy uncertainty and recalling that the boundary can be inferred independently of force - balance constraints .however , fig.[fig::24600poloidalflux](b ) shows a discrepancy of about between the q - profiles of efit and beast .this is mostly due to the fact that beast s q - profile is strongly constrained to mse measurements , while the efit reconstruction is not .this discrepancy in q - profile coincides with the difference between the beast and efit inferences of ( despite both plasmas having similar volumes ) , as depends on magnetic field geometry . inferred poloidal current for # 24600 at 280ms , right after the southwest neutral beam disrupts .the dotted line indicates the map profile , with the thick line being the expectation .the thin lines represent upper / lower confidence intervals of 95% , with the shading posterior probabilities of quadrature points . ] in fig.[fig::24600pandf ] , the profile for the poloidal current is shown , demonstrating the non - gaussian nature of the quantity .indeed , the plot shows the map of the profile lying outside the confidence intervals , surrounding the expectation , implicating the profile as highly non - gaussian in the core region of the plasma .this result demonstrates beast s ability to resolve non - gaussian structures in even high - dimensional marginalisation of the posterior . echoing the discussion put forth in von nessi _ et .al._ , we ascribe no rigorous physical interpretation to the kinetic pressure , as the inference is far from force - balance and there exists no direct constraint on the kinetic pressure in the inference .thus , we do not present the pressure profile for this inference here . beast routinely outputs various information theoretic scalars , such as the evidence and relative entropy between posterior and prior distributions . however , interpreting the meaning of these quantities , outside the realm of model comparison , becomes difficult for the following reasons .first , it is well easily understood that likelihoods are not probability distributions ; and even in the form of eq.([eq2a ] ) , likelihoods still enjoy a gauge freedom corresponding to an arbitrary scalar multiplier , which will directly affect the value of the evidence .moreover , the number of observations itself will also have an obvious impact on the evidence ( c.f .eq.([eq3 ] ) ) . when leveraging implicit techniques to construct priors , like the methods employed in beast to bias toward force - balance , faithfully calculating the relative entropy between prior and posterior distributions is difficult , as the prior is not normalised during the quadrature construction .indeed , we only need to leverage relative probabilities from the prior to construct the posterior quadratures , when using a technique like slq .it is possible to classify the force - balance prior as part of the likelihood in this situation , but this leads to ambiguities as to how to classify distributions as priors or likelihoods . given this , we instead report the relative entropy , , between posterior and the approximating uniform distribution used in the slq calculation ( see appendix [ slq ] ) . 
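For concreteness, the quantity reported here can be sketched for a case where it is known analytically: the relative entropy, in bits, between a distribution P and a uniform distribution on a bounding box B is E_P[log2 P] + log2|B|. The toy posterior below is an isotropic Gaussian and the box is an illustrative choice; BEAST's posterior is of course not analytic, but the bookkeeping is the same.

....
import numpy as np

# sketch of the relative entropy (in bits) between a posterior and a
# uniform distribution on a bounding box, d = E_P[log2 P] + log2 |B|.
# the posterior here is a toy isotropic gaussian.
rng = np.random.default_rng(3)
dim, sigma, half_width = 4, 0.1, 5.0          # illustrative choices

samples = sigma * rng.normal(size=(200_000, dim))
log2_p = (-0.5 * np.sum((samples / sigma) ** 2, axis=1) / np.log(2)
          - dim * np.log2(sigma * np.sqrt(2 * np.pi)))
log2_box_volume = dim * np.log2(2 * half_width)

relative_entropy = float(np.mean(log2_p)) + log2_box_volume
print(f"log2 |B|         = {log2_box_volume:.2f}")
print(f"relative entropy = {relative_entropy:.2f} bits")
....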
to give some context to the meaning of , the volume of the approximating uniform distribution , , is reported via its natural logarithm , , to be consistent with the notation in appendix [ slq ] . for # 22254 at 350ms the relative entropy between the posterior and the initial approximating uniform distribution having was bits , with the uncertainty being the confidence interval .discharge # 24600 at 280ms had bits relative to an initial uniform distribution having . generally speaking ,these values reflect how much information was provided by both diagnostic observations and the force - balance prior in the inference . as uncertainties were generally higher for # 24600 at 280ms, a higher relative entropy means that the observations and prior were more effective at excluding outlier configurations , relative to those for # 22254 .thus , while s posterior had less degeneracy around its expectation , it had relatively heavier `` wings '' , as compared to .this interpretation is reinforced by the fact that # 24600 started out with a more informed uniform distribution , as compared to # 22254 , but still maintained a higher relative entropy despite this .research in the area of bayesian equilibrium reconstruction has rapidly advanced since the work of svesson and werner , which has gone from analytic inversion leveraging very few physical assumptions to the current state - of - the - art where complex force - balance models can be seamlessly folded into a non - analytic , robust inference on over 1000 model parameter dimensions .today , bayesian equilibrium reconstruction compensates for broken diagnostics _ _ in situ__ in addition to being able to marginalise out uncertainties due to conducting surface currents , all while preserving the integrity of the inference results .this paper presents the most recent advancements in the area , which surround the computational aspects of analysing the posterior .the end result being that the equilibrium for a high - performance mast discharge has been shown to be consistent with static gs force - balance , implying that the current selection of diagnostics used in this analysis will need to be expanded , if one wishes to resolve physics not already represented in the gs equation . developing research endeavours in this area include adding in a toroidal flow component into the force - balance relation , along with more diagnostic data , and seeing how this affects the inference on mast discharges . work is also progressing on deploying beast on the kstar experiment , where both 2d mse and diamagnetic loop data can be leveraged to better constrain the equilibrium inference . on the computational end of research ,the possibility of deploying machine learning techniques to generate better initial guesses for posterior optimisation is being explored , as it is now the search for the map which takes up the majority of computational time ( as opposed to the construction of posterior quadratures ) .in this section we briefly outline the directional search algorithm developed for use in beast s optimisation of the posterior .the direction search starts from an initial guess , , a given , scalar increment , , and proceeds as follows . 1 . at the target functionis evaluated , with the value stored ( denoted ) .2 . if is smaller than a pre - defined threshold , the algorithm terminates .otherwise , the procedure continues onto the next step .3 . 
evaluate the target function at , where is the unit vector for the coordinate , for all coordinate directions .sign / direction combinations showing no improvement over ( in the case of the posterior , are less than ) are discarded , with all other combinations being recorded and ranked according to which ones gave the largest improvement over .we label each improving coordinate increment as , with lower indices having greater improvement over ; i.e. will generally have the form , with and uncorrelated . if no direction is found that improves , the value of is scaled down ( in beast is scaled down by a factor of ) and the algorithm returns to step 2 .4 . count the number of s and record this value as .5 . evaluate the target at 6 .if the evaluation at produces a result better than , a line search is performed along from the point .the result of this search replaces the value of and the algorithm returns to step 1 , with being set to it s initial value .otherwise , is decremented by one and the algorithm returns to step 5 . in beast , a golden section line search is used in the above ; but any line search method could be applied .the key point to tho above approach in that it is a `` breadth - first '' algorithm in that it will try to change as many model parameter coordinates as possible in each step , as opposed to accepting possibly better gains by moving along just a few coordinates . indeed , moving along one coordinate at any given step , may indeed produce a better immediate result ; but this has a tendency to drive the optimiser into local maxima presented by the screening solutions discussed in [ smo2 ] .this is the same problem has also been found with both steepest descent and conjugate gradient optimisers , when deployed in beast .this last point is unsurprising , as one would nt expect such algorithms to be effective on functions with many local maxima .the above algorithm is designed specifically to avoid these local maxima and has proven to be extremely robust in beast inferences and has the added advantage that it does not require gradient calculations .one can argue that all of bayesian inference can be reduced to posterior quadrature calculations .indeed , any statistical moments of model parameters or marginalisations thereof can be represented as where being the number of model parameters , with diagnostic observations implicitly held as parameters within . with this , we seek to develop a numerical scheme to integrate for possibly large values of . first , we assume , to guarantee the existence of a bounded set such that for any given . thus , we are able to make the following approximation where denotes the -dimensional volume of and one will note that the co - area formula has been employed in the last step of eq.([slq3 ] ) .it is clear that is normalised to one by definition and thus , constitutes a uniform probability distribution .the motivation for this particular factorisation is embodied in the following definition which hints at a way in which uniform sampling may be employed to obtain the desired integral . 
to fully realise this , we note eq.([slq4 ] ) directly indicates that may be statistically inverted via ordering uniform samples of with respect to their evaluations .indeed , given a collection of uniform samples of , denoted , indexed according to then eq.([slq4 ] ) indicates that to use this insight , we employ the definition in eq.([slq4 ] ) and make the substitution to reduce the expression in eq.([slq3 ] ) : \nonumber\\ & = & |\mathcal{b}|\left[-\inf_\mathcal{{\overline{\lambda}}\in b}\mathcal{q({\overline{\lambda}})}+\int_0 ^ 1\xi^{-1}(v)\,dv\right]\nonumber\\ & \approx & |\mathcal{b}|\int_0 ^ 1\xi^{-1}(v)\,dv,\label{slq7}\end{aligned}\ ] ] where we have used integration by parts and the assumption that can be chosen to make small enough to satisfy the desired level of accuracy for the quadrature .one will note that the assumption of automatically implies that . while the above is very similar to the development presented in von nessi _ et .al._ , it differs from that derivation in that the quadrature transformation has no intrinsic reliance on the demarkations of likelihoods and priors .indeed , the above result is quite general in that need not be related to a probability distribution .moreover , the definition of needed to be altered to accommodate the transform s reliance on uniform distributions , which ultimately leads to the addition of the term in the final expression .the expression in eq.([slq6 ] ) indicates that a graph of can be statistically constructed by taking an ordered set of evaluations coming from uniform samples of with an abscissa constructed of ordered uniform samples taken on ] indexed so that to approximate one can refine the resolution of any part of this graph by adding , say , abscissa values coming from uniform samples on $ ] and values coming from uniform samples taken from and reordering both sets of values according to eq.([slq5])eq.([slq8 ] ) .this is a general prescription for graph refinement , for which skilling s ns is a particular instance of. indeed , this method can be directly leveraged to design both single - threaded and multi - threaded generalisations of ns .however , for the results presented in this paper , we retain the original ns methodology for refining the graph in eq.([slq8 ] ) , which is detailed elsewhere .once a refinement of sufficient accuracy has been achieved , is simply evaluated via eq.([slq7 ] ) , eq.([slq8 ] ) and the application of a trapezoidal quadrature rule .the computational tractability of slq relies on the ability to generate uniform samples from the set for any efficiently . 
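Before turning to how such uniform samples are generated, the quadrature itself can be illustrated with a single-batch stand-in. The symbol names in the displayed equations were lost in this copy, so the sketch below simply uses exp(Q) as the integrand level function on a box B of known volume and pairs its ordered evaluations with ordered uniform abscissae under a trapezoidal rule. It is an interpretation of the idea rather than the exact SLQ procedure, which refines this ordered-sample graph adaptively (in the spirit of nested sampling) instead of using one batch of samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def slq_estimate(Q, lower, upper, n_samples=20000):
    """Estimate the integral of exp(Q(lam)) over the box B = prod_i [lower_i, upper_i]
    by pairing ordered target evaluations with ordered uniform abscissae on [0, 1]."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    vol_B = np.prod(upper - lower)                       # |B|
    dim = lower.size
    # uniform samples on B and their target evaluations
    lam = lower + (upper - lower) * rng.random((n_samples, dim))
    q = np.array([Q(l) for l in lam])
    # ordered evaluations (largest first) paired with ordered "volume fraction" abscissae
    levels = np.sort(np.exp(q))[::-1]
    v = np.sort(rng.random(n_samples))
    # trapezoidal quadrature of the level-vs-volume graph, scaled by |B|
    area = np.sum(0.5 * (levels[1:] + levels[:-1]) * np.diff(v))
    return vol_B * area

if __name__ == "__main__":
    # toy check: a 2-d standard Gaussian integrand on a box wide enough to hold its mass
    Q = lambda lam: -0.5 * np.sum(lam ** 2)
    est = slq_estimate(Q, lower=[-6.0, -6.0], upper=[6.0, 6.0])
    print(est, 2.0 * np.pi)   # the exact value of the untruncated integral is 2*pi
```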
clearly , _ab initio _ uniform sampling of will be rendered unacceptably inefficient as a proxy for sampling on for values of approaching .one approach to dealing with this is to approximate by a collection of dimensional , pair - wise disjoint hypercubes , denoted , each with their coordinate axis corresponding to the collection of model parameters in the problem .these cubes need not be of the same volume or proportion .each cube is then assigned a relative probability based on its volume : to uniformly sample from , one first needs to randomly select a particular hypercube s index according to the probability in eq.([slq9 ] ) , then one can perform uniform gibb s sampling on the selected cube to finally generate the next uniform sample from .indeed , this prescription can be viewed as a gibb s sampling over dimensions , plus one discretised dimension corresponding to the indexing on the hypercube approximation .this sampling over a union of hypercubes can be carried out very quickly , even in high - dimensions with many cubes in the collection ; and thus offers an appealing foundation on which to build a statistical quadrature .the next point to be addressed is how to create and maintain a collection of hypercubes which closely approximates . to this enda collection of points is first created and evolved to directly correspond to the pool of samples used in the ns quadrature construction .in addition to these points , the map and a collection of _ ab initio _ uniform samples from are initially added to the collection of .once any of these points fail to meet the constraint in the ns progression , skilling s multi - state leapfrog algorithm ( see 30.4 in mackay for details ) is employed on the collection of to find new points to replace those that no longer meet the -constraint . if a fixed number of attempts fails to produce points that satisfy the new -constraint , then effort is abandoned and the collection of is reduced accordingly . 1 . establish a minimal hypercube with axes corresponding to model parameters in the inference which contains all .we denote this hypercube .2 . for each create a hypercube , of the same size and orientation as .3 . perform a pairwise comparison between all , going through each dimensions to see if they are disjoint .note that the cubes need only be non - overlapping in one dimension to be disjoint .if a pair of hypercubes , and , are overlapping , establish the coordinate that constitutes their greatest separation .we will label this the coordinate for convenience .2 . if both and have their current bounding hyperplane along the coordinate lying between and , these hyperplanes are adjusted to have their coordinate be the average of the coordinates of both their previous positions .if only one of or has their current bounding hyperplane along the coordinate lying between and , then the other s bounding hyperplane s coordinate is adjusted to coincide with that of the one that separates and .4 . 
if neither nor has their current bounding hyperplane along the coordinate lying between and , then both bounding hyperplanes are adjusted so their coordinate coincides with the coordinate of the average position between and . once the collection of approximating hypercubes is first established , changes , additions and removals ( corresponding to the evolution of ) can be made in accordance with the above pseudo - code in time , where is the current number of cubes . this is achieved primarily by tracking the coordinates along which each pair of cubes is separated and by noting which bounding hyperplanes correspond to those of for each , and subsequently using this information to minimise the number of comparisons made on hypercube insertions and deletions . the above algorithm ensures that contains all with no overlaps between cubes , although this union will not be a cover for , in general . as will typically be a poor approximation of , a global scaling factor , , along with a family of linear mappings on the collection of , having the properties and , is also introduced . ultimately , new uniform samples are drawn from , where is dynamically adjusted on the interval to achieve a desired level of efficiency for uniform samples having -evaluations greater than . this work was jointly funded by the australian government through international science linkages grant cg130047 , the australian research council grant ft0991899 , the australian national university , the united kingdom engineering and physical sciences research council under grant ep / g003955 , and by the european communities under the contract of association between euratom and ccfe . the views and opinions expressed herein do not necessarily reflect those of the european commission . d. mazon , j. blum , c. boulbe , b. faugeras , a. boboc , m. brix , p. de vries , s. e. sharapov , and l. zabeo . 4th international conference on physics and control , catania , italy , 2010 . world scientific .
|
we present recent results and technical breakthroughs for the bayesian inference of tokamak equilibria using force - balance as a prior constraint . issues surrounding model parameter representation and posterior analysis are discussed and addressed . these points motivate the recent advancements embodied in the bayesian equilibrium analysis and simulation tool ( beast ) software being presently utilised to study equilibria on the mega - ampere spherical tokamak ( mast ) experiment in the uk ( von nessi _et al._ 2012 _ j. phys . a _ * 46 * 185501 ) . state - of - the - art results of using beast to study mast equilibria are reviewed , with recent code advancements being systematically presented throughout the manuscript .
|
one of the simplest ways of creating fractals is by means of _ iterated function systems _ ( ifss ) .generally , the ifs can be defined on any complete metric space , but for the purposes of this paper we will restrict our discussion to the two - dimensional real space . in this sectionwe will go through the basic definitions , given in .the transformation defined on of the form where is real matrix and is a two - dimensional real vector , is called a ( two - dimensional ) _ affine transformation_. the finite set of affine contractive transformations , with respective contractivity factors together with the euclidean space is called _ ( hyperbolyc ) iterated function system ( ifs)_. its notation is and its _ contractivity factor _ is .since is a complete metric space , such is , where is the space of nonempty compact subsets of , with the hausdorff metric derived from the euclidean metric ( ) .it can be shown that if the hyperbolic ifs with the contractivity factor is given , then the transformation defined on , by is a contraction mapping with the contractivity factor . according to the fixed - point theorem, there exist a unique fixed point of , , called the _attractor _ of the ifs , that obeys , for any .( note that there are weaker conditions under which the attractor of an ifs can be obtained - the ifs need not to be contractive , see . )two algorithms are used for the visualization of an ifs attractor , deterministic and random ( the last is also called chaos game algorithm ) . in this paperwe construct the images of the attractors with the second , more efficient algorithm. our primary target in this paper will be fractal attractors , since there is no interactive , real - time tool for their modeling , to the best of our knowledge .. the rest of the paper is organized in four sections : related articles , theoretical grounds of the tool , the tool and its application , conclusions and future work .barnsley et al . , in define the fractal transformation as mapping from one attractor to another , using the top " addresses of the points of both attractors and illustrate the application in digital imaging . although not user friendly , not real - time and not continuous ( they are continuous under certain conditions ) , these transformations , have a lot of potential for diverse applications . in the coordinates of the points that compose the fractal image are determined from the ifs code and then the ifs code is modified to obtain translated , rotated , scaled or sheared fractal attractor , or attractor transformed by any affine transformation as the composition of aforementioned transformations . to make a new transformation, another ifs code has to be constructed . in ,darmanto et al .show a method for making weaving effects on tree - like fractals .they make a local control of the branches " of the tree , by changing a part of the ifs code . the control over the change of the attractor depends on the experience of the user for predicting how the modification of the coefficients in the ifs code will change the tree - like fractal .compared to the methods for transforming fractals in the cited papers , our tool is user - friendly , real time , relatively fast and relatively low memory consuming .it enables a continuous affine transformation of an arbitrary ifs attractor .the idea for modeling fractals , i.e. 
their predictive , continuous transformation , where barycentric coordinates are involved , is exposed in , , , by means of , so - called _ affine invariant iterated function systems - aifs_. the ifs code is transformed into aifs code which involves barycentric coordinates .the method that we propose is also based on barycentric coordinates , but no transformation of the ifs code is needed . also , kocic et al . in the use of , so called _ minimal canonical simplex _ , for better control of the attractor ( for 2d case , minimal canonical simplex is the isosceles right - angled triangle with the minimal area that contains the attractor and whose catheti are parallel to the _ x _ and _ y _ axes ) . in authors prove a theorem for existence and uniqueness of such minimal canonical simplex .it is the limit of the cauchy sequence of the minimal simplexes of the -th preattractor when approaches to infinity , in the complete metric space .( the -th preattractor in this paper is defined as the set obtained after iterations of the chaos game algorithm . )barycentric coordinates of points ( and vectors ) promote global geometrical form and position rather than exact coordinates in relation to the origin / axes .thus , the geometric design done by the use of barycentric coordinates can be called _ coordinate - free geometric design_. the set of the three points is said to be _ affine basis _ of the affine space of points if the set is a vector base of , observed as a vector space ( ) .we say that the point has _ barycentric coordinates _ _ relative to the basis _ , where , and we write if and only if one of the following three equivalent conditions holds : we will give the relations between the barycentric coordinates relative to the given basis and the rectangular coordinates of an arbitrary point from .suppose that the arbitrary point and the three basis points and have rectangular coordinates , , and , respectively .then the relation ( [ osnovna ] ) can be rewritten in the following form = a \left[\begin{array}{c}a_1\\a_2\end{array}\right ] + b \left[\begin{array}{c}b_1\\b_2\end{array}\right ] + c \left[\begin{array}{c}c_1\\c_2\end{array}\right],\ ] ] or , in matrix form = \left[\begin{array}{ccc}a_1&b_1&c_1\\a_2&b_2&c_2\end{array}\right]\left[\begin{array}{c}a\\b\\c\end{array}\right].\ ] ] by rearranging the last matrix equation , we obtain more suitable form for further manipulations : = \left[\begin{array}{ccc}a_1&b_1&c_1\\a_2&b_2&c_2\\1&1&1\end{array}\right]\left[\begin{array}{c}a\\b\\c\end{array}\right].\ ] ] the relation ( [ bar - vo - dek ] ) defines the _ conversion of the barycentric coordinates relative to the affine basis into the rectangular coordinates_. 
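Both this conversion and its inverse (derived next) amount to a single matrix multiplication, which is essentially all the tool needs to re-render an attractor after the control triangle is dragged. The following Python sketch implements the two conversions and applies them to chaos-game points of an illustrative IFS; the Sierpinski-type maps, the triangle coordinates, and the function names are arbitrary choices for demonstration, not the paper's examples or the authors' C code.

```python
import numpy as np

rng = np.random.default_rng(1)

def chaos_game(maps, n_points=20000, n_skip=15):
    """Random-iteration (chaos game) rendering of an IFS attractor.
    `maps` is a list of (A, b) pairs representing x -> A @ x + b."""
    x = np.zeros(2)
    pts = []
    for i in range(n_points + n_skip):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        if i >= n_skip:                      # discard the first few iterates
            pts.append(x.copy())
    return np.array(pts)

def triangle_matrix(A, B, C):
    """3x3 matrix with columns (A_x, A_y, 1), (B_x, B_y, 1), (C_x, C_y, 1)."""
    return np.array([[A[0], B[0], C[0]],
                     [A[1], B[1], C[1]],
                     [1.0,  1.0,  1.0]])

def to_barycentric(points, A, B, C):
    """Rectangular -> barycentric coordinates relative to the basis {A, B, C}."""
    T_inv = np.linalg.inv(triangle_matrix(A, B, C))
    homog = np.column_stack([points, np.ones(len(points))])
    return homog @ T_inv.T                   # rows (a, b, c), each summing to 1

def from_barycentric(bary, A, B, C):
    """Barycentric -> rectangular coordinates relative to the (possibly moved) basis."""
    T = triangle_matrix(A, B, C)
    return (bary @ T.T)[:, :2]

if __name__ == "__main__":
    # an illustrative Sierpinski-type IFS, not one of the paper's examples
    half = 0.5 * np.eye(2)
    ifs = [(half, np.array([0.0, 0.0])),
           (half, np.array([0.5, 0.0])),
           (half, np.array([0.25, 0.5]))]
    pts = chaos_game(ifs)

    # original control triangle and the same triangle after a user "drag"
    A0, B0, C0 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])
    A1, B1, C1 = np.array([0.2, 0.1]), np.array([1.4, -0.2]), np.array([0.6, 1.3])

    bary = to_barycentric(pts, A0, B0, C0)       # computed once, when the triangle is defined
    moved = from_barycentric(bary, A1, B1, C1)   # re-rendered after every drag
    print(moved.shape, bary.sum(axis=1)[:3])     # barycentric rows sum to 1
```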
let us denote the matrix by to indicate triangle " .the inverse conversion can be easily get , by simple multiplication of ( [ bar - vo - dek ] ) by the inverse of ( which exists , since is an affine basis ) .that is , the _ conversion of the rectangular coordinates into the barycentric coordinates relative to the basis _ is defined by the relation = \mathbf{t}^{-1 } \cdot \left[\begin{array}{c}x\\y\\1\end{array}\right],\ ] ] which after calculating the inverse of , can be expressed in its explicit form , = \frac{1}{\det { \mathbf t}}\left[\begin{array}{ccc}b_2-c_2&c_1-b_1&b_1c_2-b_2c_1\\c_2-a_2&a_1-c_1&a_2c_1-a_1c_2\\a_2-b_2&b_1-a_1&a_1b_2-a_2b_1\end{array}\right ] \cdot \left[\begin{array}{c}x\\y\\1\end{array}\right],\ ] ] where .barycentric coordinates have two other names : _ affine _ and _ relative _ coordinates .affine , since they are related to an affine basis , and relative , since they define relative positions in the plane / space .namely , if we consider a set of points with given barycentric coordinates relative to the basis and we affinly transform the basis into the basis , then the set of points having the same barycentric coordinates , but now relative to the basis , will keep the relative geometry , i.e. will be transformed by the same affine transformation . note that any change of one affine basis to another , defines a unique affine transformation .since the image of the attractor , generated by the random algorithm is a finite set of points , we will use the aforementioned property of the barycentric coordinates to affinly transform the ifs fractal .our real time , user - friendly tool is written in c programming language using the visual studio 2013 .it allows both on - click definition of the triangle and definition by specifying the triangle s vertices directly in the code , with their rectangular coordinates .when the affine base is defined , the image of the fractal is created ( relative to the triangle ) and ready for transformation .triangle s vertices are moved by drag and drop option of the cursor . when the cursor is placed over the vertices, it has a `` hand '' shape , otherwise its shape is `` cross '' .the affine transformations of the triangle , immediately followed by the same affine transformation of the fractal image , are visible in real time , in the coordinate system of the window .we will show the superiority of our tool on two examples of fractal attractors , the so called flower " and maple " . in order to get clearer image of the attractor, we neglect the first 14 points and start to plot from the 15-th point .note that , by default , the origin of the coordinate system of the window is located at the top - left corner of the window , with positive directions : to the right for the -axis and down for the -axis .example 1 .the ifs , whose attractor is the flower " fractal , is defined by the following contractive mappings , : , \,\,\ , \mathbf{b}_1= \left [ \begin{array}{c } 0.37\\ 1.74 \end{array}\right];\ ] ] ,\,\,\ , \mathbf{b}_2= \left [ \begin{array}{c}-0.34\\ 1.75 \end{array}\right].\ ] ] figure [ flower ] ( a ) , b ) , c ) ) depicts that the arbitrary control triangle is defined on - click . 
when moving its vertexes by click - and - drag , the fractal attractor appropriately responds to the transformations done over the triangle .example 2 .the attractor of the ifs , where the mappings , are defined by , \,\,\ , \mathbf{b}_1= \left [ \begin{array}{c } -0.08\\ 0.26 \end{array}\right];\ ] ] ,\,\,\ , \mathbf{b}_2= \left [ \begin{array}{c}0.07\\ 3.5 \end{array}\right];\ ] ] ,\,\,\ , \mathbf{b}_3= \left [ \begin{array}{c}0.74\\ 0.39 \end{array}\right];\ ] ] ,\,\,\ , \mathbf{b}_4= \left [ \begin{array}{c}-0.56\\ 0.60 \end{array}\right];\ ] ] is the fractal called maple " .we computed the coordinates of the vertices of the minimal canonical simplex for this fractal and used the vertices as an affine basis .such affine basis ensures better control over the attractor ( , ) , see figure [ maple ] .we examined creation of the shadow or wind effect for the maple tree , throughout d ) to f ) on this figure .we succeeded in creating a real - time and user - friendly tool for interactive modeling of ifs attractors , with focus on fractal attractors . to the best of our knowledge ,this is the first real - time interactive tool for modeling fractals .the tool is very efficient because it is relatively low memory and time consuming .we foresee a great application of our tool , since all images coded by an ifs code can be affinly transformed according the user needs , only by click - and - drag .these are the first steps in fractal modeling with our tool .there is much more to be done to enhance the tool s performances .for instance , including a control polygon of more than three points . moreover , it would be very efficient to use the convex hull of the attractor as a control polygon . in the last casethe control over the global changes of the attractor will be maximal . including local controlwould be also useful for satisfying the sophisticated needs of the users .finally , expanding the tool over 3d ifs attractors will bring higher practical importance to the tool .e. babace : iterated function system with affine invariance property ( in macedonian ) , phd thesis , faculty of natural sciences and mathematics , university `` ss .cyril and methodius '' , skopje , r. macedonia , 2009 e. babace , lj .m. koci ' c : minimal simplex for ifs fractal sets , naa 2008 , lecture notes in computer science 5434 , pp.168 - 175 , eds . : s. margenov , l. g. vulkov , j. wasniewski , springer - verlag berlin heidelberg , 2009 t. darmanto , i. s. suwardi , r. munir : weaving effects in metamorphic animation of tree- like fractal based on a family of multi - transitional iterated function system code , computer , doi : 10.1109/ic3ina.2013.6819150 conference : the international conference on computer , control , informatics and its applications 2013 ( ic3ina 2013 ) , at jakarta , indonesia , volume : i lj .m. koci ' c , a. c. simoncelli : towards free - form fractal modelling , in : mathematical methods for curves and surfaces ii , m. daehlen , t. lyche and l. l. schumaker ( eds . ) , pp .287 - 294 , vanderbilt university press , nashville ( tn . ) , 1998 lj . m. koci ' c , l. stefanovska , e. babace : aifs and the minimal simplex problem , proceedings of the international conference of differential geometry and dynamical systems , 5 - 7.10.2007 , bucharest , romania , pp.119 - 128 , 2008
|
this work introduces a novel tool for interactive , real - time transformations of two - dimensional ifs fractals . we assign barycentric coordinates ( relative to an arbitrary affine basis of ) to the points that constitute the image of a fractal . the tool exploits properties of the barycentric coordinates , enabling any affine transformation of the basis , done by click - and - drag , to be immediately followed by the same affine transformation of the ifs fractal attractor . in order to have better control over the fractal , we use as affine basis a kind of minimal simplex that contains the attractor . we give the theoretical grounds of the tool and then present the software application .
|
in this study we have dated two human fossil remains found in romania by the method of radiocarbon dating , using the accelerator mass spectrometry ( ams ) technique , performed at the pelletron accelerator at lund university , sweden . these are the most ancient dated human fossil remains from romania , attributed by some archaeologists to the upper paleolithic , the aurignacian period . the first skull , together with scapula and tibia remains , was found in 1952 in baia de fier , in the woman s cave , in hateg , gorj county in the province of oltenia , by constantin nicolaescu - plopsor . another skull was found in 1941 in the cioclovina cave , near the commune of bosorod , hunedoara county in transylvania , by a worker at the exploitation of phosphate deposits . the skull arrived at francisc rainer , anthropologist , and ioan simionescu , geologist , who published a study of it . the absence of stratigraphical observation information made the cultural and chronological attribution of these skulls very difficult , and a number of archaeologists questioned the paleolithic character of these fossil remains . for this reason , the dating of the two skulls by a physical analysis is decisive . samples of bone were taken from the scapula and tibia remains from the woman s cave , from baia de fier , and from the skull from the cioclovina cave . the radiocarbon content was determined in the two samples by using the ams system at lund university . normally , sufficient collagen for ams measurements can be extracted from bone fragments with masses of 1 g or more , provided that at least 5 to 10% of the original collagen content is present . but for the presently studied bone remains , because of the small available quantity of very old bone samples , the determination of the radiocarbon content in the bones was difficult . we have essentially applied the longin method for the extraction of _ collagen _ from the bone structure . we use the term _ collagen _ to refer to collagen that has undergone a degree of diagenesis . the next step is the transformation of the _ collagen _ into pure carbon in an experimental set - up for the preparation of samples for the ams technique . the pure carbon , placed in a copper holder , is arranged in a wheel , together with two standards of oxalic acid and one anthracite background sample . the wheel with the samples and standards is put into the ion source of the accelerator . the central part of the lund ams system is a pelletron tandem accelerator . the accelerator is run at a terminal voltage of 2.4 mv during ams experiments . the particle identification and measuring system consists of a silicon surface barrier detector of diameter of 25 mm . the computer system alternately analyses the data of the current received from a current integrator and the counts arriving from the particle detector , to obtain , finally , the ratio / for each sample . each sample is measured 7 times . the precision of the measurements for samples close to modern is around 1 % . dating of the two samples by the ams technique gave the following results : the bone remains from the woman s cave , baia de fier , were dated to 30150 ± 800 years bp , and the skull from the cioclovina cave to 29000 ± 700 years bp .
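As a brief aside on how such ages follow from the measured ratios, the conventional radiocarbon age is obtained from the fraction of modern carbon via the Libby half-life of 5568 years (a mean life of 8033 years). The sketch below shows this standard textbook conversion with hypothetical input values; it is not the laboratory's actual data-reduction code.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, from the Libby half-life of 5568 yr

def radiocarbon_age(fraction_modern, sigma_fraction):
    """Conventional 14C age t = -8033 * ln(F) with its propagated 1-sigma uncertainty."""
    age = -LIBBY_MEAN_LIFE * math.log(fraction_modern)
    sigma_age = LIBBY_MEAN_LIFE * sigma_fraction / fraction_modern
    return age, sigma_age

if __name__ == "__main__":
    # hypothetical normalised 14C ratio (fraction modern) for a sample of roughly 30 ka age
    F, sF = 0.0235, 0.0023
    t, st = radiocarbon_age(F, sF)
    print(f"age = {t:.0f} +/- {st:.0f} yr BP")
```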
|
in this study we have dated two human fossil remains found in romania by the method of radiocarbon dating , using the accelerator mass spectrometry technique . the human fossil remains from the woman s cave , baia de fier , have been dated to an age of 30150 ± 800 years bp , and the skull from the cioclovina cave has been dated to an age of 29000 ± 700 years bp . these are the most ancient human fossil remains from romania dated until now , possibly belonging to the upper paleolithic , the aurignacian period .
|
bayesian nonparametrics is the area of bayesian analysis in which the finite - dimensional prior distributions of classical bayesian analysis are replaced with stochastic processes .while the rationale for allowing infinite collections of random variables into bayesian inference is often taken to be that of diminishing the role of prior assumptions , it is also possible to view the move to nonparametrics as supplying the bayesian paradigm with a richer collection of distributions with which to express prior belief , thus in some sense emphasizing the role of the prior . in practice , however , the field has been dominated by two stochastic processes the gaussian process and the dirichlet process and thus the flexibility promised by the nonparametric approach has arguably not yet been delivered . in the current paperwe aim to provide a broader perspective on the kinds of stochastic processes that can provide a useful toolbox for bayesian nonparametric analysis . specifically , we focus on _ combinatorial stochastic processes _ as embodying mathematical structure that is useful for both model specification and inference .the phrase `` combinatorial stochastic process''comes from probability theory , where it refers to connections between stochastic processes and the mathematical field of combinatorics .indeed , the focus in this area of probability theory is on random versions of classical combinatorial objects such as partitions , trees and graphs and on the role of combinatorial analysis in establishing properties of these processes .as we wish to argue , this connection is also fruitful in a statistical setting .roughly speaking , in statistics it is often natural to model observed data as arising from a combination of underlying factors . in the bayesian setting , such models are often embodied as latent variable models in which the latent variable has a compositional structure . making explicit use of ideas from combinatorics in latent variable modelingcan not only suggest new modeling ideas but can also provide essential help with calculations of marginal and conditional probability distributions .the dirichlet process already serves as one interesting exhibit of the connections between bayesian nonparametrics and combinatorial stochastic processes . on the one hand ,the dirichlet process is classically defined in terms of a partition of a probability space , and there are many well - known connections between the dirichlet process and urn models ( ; ) . in the current paper , we will review and expand upon some of these connections , beginning our treatment ( nontraditionally ) with the notion of an _ exchangeable partition probability function _ ( eppf ) and , from there , related urn models , stick - breaking representations , subordinators and random measures . on the other hand , the dirichlet process is limited in terms of the statistical notion of a `` combination of underlying factors '' that we referred to above .indeed , the dirichlet process is generally used in a statistical setting to express the idea that each data point is associated with one and only one underlying factor .in contrast to such _ clustering models _ , we wish to also _ featural models _ , where each data point is associated with a set of underlying features and it is the interaction among these features that gives rise to an observed data point . 
focusing on the case in which these features are binary , we develop some of the combinatorial stochastic process machinery needed to specify featural priors .specifically , we develop a counterpart to the eppf , which we refer to as the _ exchangeable feature probability function _ ( efpf ) , that characterizes the combinatorial structure of certain featural models .we again develop connections between this combinatorial function and suite of related stochastic processes , including urn models , stick - breaking representations , subordinators and random measures .as we will discuss , a particular underlying random measure in this case is the _ beta process _, originally studied by as a model of random hazard functions in survival analysis , but adapted by for applications in featural modeling . for statistical applicationsit is not enough to develop expressive prior specifications , but it is also essential that inferential computations involving the posterior distribution are tractable .one of the reasons for the popularity of the dirichlet process is that the associated urn models and stick - breaking representations yield a variety of useful inference algorithms .as we will see , analogous algorithms are available for featural models .thus , as we discuss each of the various representations associated with both the dirichlet process and the beta process , we will also ( briefly ) discuss some of the consequences of each for posterior inference . the remainder of the paper is organized as follows .we start by reviewing partitions and introducing feature allocations in section [ sec : partition_feature ] in order to define distributions over these models ( section [ sec : epf ] ) via the eppf in the partition case ( section [ sec : epf_eppf ] ) and the efpf in the feature allocation case ( section [ sec : epf_efpf ] ) . illustrating these exchangeable probability functions with examples, we will see that the well - known _ chinese restaurant process _( crp ) corresponds to a particular eppf choice ( example [ ex : epf_crp ] ) and the _ indian buffet process _( ibp ) ( griffiths and ghahramani , ) corresponds to a particular choice of efpf ( example [ ex : epf_ibp ] ) . 
from here , we progressively build up richer models by first reviewing stick lengths ( section [ sec : stick ] ) , which we will see represent limiting frequencies of certain clusters or features , and then subordinators ( section [ sec : sub ] ) , which further associate a random label with each cluster or feature .we illustrate these progressive augmentations for both the crp ( examples [ ex : epf_crp ] , [ ex : cond_crp ] , [ ex : stick_crp ] , [ ex : sub_crp ] and [ ex : sub_crp_sticks ] ) and ibp examples ( examples [ ex : epf_ibp ] , [ ex : cond_ibp ] , [ ex : stick_ibp ] and [ ex : sub_ibp ] ) .we augment the model once more to obtain a random measure on a general space of cluster or feature parameters in section [ sec : crm ] , and discuss how marginalization of this random measure yields the crp in the case of the dirichlet process ( example [ ex : crm_dp ] ) and the ibp in the case of the beta process ( example [ ex : crm_bp ] ) .finally , in section [ sec : conclusion ] , we mention some of the other combinatorial stochastic processes , beyond the dirichlet process and the beta process , that have begun to be studied in the bayesian nonparametrics literature , and we provide suggestions for further developments .while we have some intuitive ideas about what constitutes a cluster or feature model , we want to formalize these ideas before proceeding .we begin with the underlying combinatorial structure on the data indices .we think of : = \{1,\ldots , n\} ] . in particular ,a partition of ] called _ blocks _ ; that is , for some number of partition blocks . an example partition of ] is defined to be a multiset of nonempty subsets of ] is . just as the blocks of a partitionare sometimes called _ clusters _ , so are the blocks of a feature allocation sometimes called _we note that a partition is always a feature allocation , but the converse statement does not hold in general ; for instance , given above is not a partition . in the remainder of this sectionwe continue our development in terms of feature allocations since partitions are a special case of the former object .we note that we can extend the idea of random partitions to consider _ random feature allocations_. if is the space of all feature allocations of ] is a random element of this space .we next introduce a few useful assumptions on our random feature allocation . just as exchangeability of observationsis often a central assumption in statistical modeling , so will we make use of _ exchangeable feature allocations_. to rigorously define such feature allocations , we introduce the following notation .let be a finite permutation .that is , for some finite value , we have for all . further , for any block , denote the permutation applied to the block as follows : . for any feature allocation , denote the permutation applied to the feature allocation as follows : .finally , let be a random feature allocation of ] is a _ restriction _ of a feature allocation of ] whose restriction to ] : \dvtx a \in{f}_{\infty}\} ] . a characterization of distributions for is provided by , where a similar treatment of the introductory ideas of this section also appears . 
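Before turning to distributions, the combinatorial objects themselves are easy to encode. The following Python sketch represents a partition and a feature allocation of [n] as lists of index sets and checks the restriction (consistency) property described above; the example allocations and function names are arbitrary illustrations, not anything prescribed by the text.

```python
def restrict(blocks, m):
    """Restrict a partition or feature allocation of [n] to [m]:
    intersect every block with {1,...,m} and drop the empties."""
    window = set(range(1, m + 1))
    return [b & window for b in blocks if b & window]

def is_partition(blocks, n):
    """Blocks are nonempty, disjoint, and cover {1,...,n}."""
    seen = set()
    for b in blocks:
        if not b or (b & seen):
            return False
        seen |= b
    return seen == set(range(1, n + 1))

def is_feature_allocation(blocks, n):
    """Blocks are nonempty subsets of {1,...,n}; overlaps and unassigned indices are allowed."""
    return all(b and b <= set(range(1, n + 1)) for b in blocks)

if __name__ == "__main__":
    partition = [{1, 3}, {2, 5}, {4}]        # a partition of [5]
    features = [{1, 3}, {1, 2, 5}, {3}]      # a feature allocation of [5]; index 4 has no features
    print(is_partition(partition, 5), is_feature_allocation(features, 5))
    print(restrict(features, 3))             # [{1, 3}, {1, 2}, {3}] : consistent with [3]
```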
in what follows ,we consider particular useful ways of representing distributions for exchangeable , consistent random feature allocations with emphasis on partitions as a special case .once we know that we can construct ( exchangeable and consistent ) random partitions and feature allocations , it remains to find useful representations of distributions over these objects .consider first an exchangeable , consistent , random partition . by the exchangeability assumption, the distribution of the partition should depend only on the ( unordered ) sizes of the blocks .therefore , there exists a function that is symmetric in its arguments such that , for any specific partition assignment , we have the function is called the _ exchangeable partition probability function _ ( eppf ) .[ ex : epf_crp ] the chinese restaurant process ( crp ) ( blackwell and macqueen , ) is an iterative description of a partition via the conditional distributions of the partition blocks to which increasing data indices belong .the chinese restaurant metaphor forms an equivalence between customers entering a chinese restaurant and data indices ; customers who share a table at the restaurant represent indices belonging to the same partition block . to generate the label for the first index , the first customer entersthe restaurant and sits down at some table , necessarily unoccupied since no one else is in the restaurant .a `` dish '' is set out at the new table ; call the dish `` 1 '' since it is the first dish .the customer is assigned the label of the dish at her table : .recursively , for a restaurant with _ concentration parameter _ , the customer sits at an occupied table with probability in proportion to the number of people at the table and at a new table with probability proportional to . in the former case, takes the value of the existing dish at the table , and , in the latter case , the next available dish ( equal to the number of existing tables plus one ) appears at the new table , and . by summing over all possibilitieswhen the customer arrives , one obtains the normalizing constant for the distribution across potential occupied tables : .an example of the distribution over tables for the customer is shown in figure [ fig : crp ] . to summarize ,if we let , then the distribution of table assignments for the customer is \\[-8pt ] \nonumber & & \quad= ( n-1+{\theta})^{-1 } \cases{\#\{m\dvtx m < n , { z}_{m } = j\},\vspace*{2pt}\cr \quad\hspace*{11pt } \mbox{for } j \le k_{n-1 } , \vspace*{2pt}\cr \theta,\quad\mbox{for } k = k_{n-1}+1.}\end{aligned}\ ] ] we note that an equivalent generative description follows a plya urn style in specifying that each incoming customer sits next to an existing customer with probability proportional to 1 and forms a new table with probability proportional to .next , we find the probability of the partition induced by considering the collection of indices sitting at each table as a block in the partition .suppose that individuals sit at table so that the set of cardinalities of nonzero table occupancies is with .that is , we are considering the case when customers have entered the restaurant and sat at different tables in the specified configuration .we can see from equation ( [ eq : crp ] ) that when the customer enters ( ) , we obtain a factor of in the denominator . 
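Before completing the derivation of the induced partition's probability, the CRP generative description just given can be simulated in a few lines: each customer joins an existing table with probability proportional to its occupancy and opens a new table with probability proportional to the concentration parameter. The Python sketch below is a generic illustration of this construction, not code from any particular package.

```python
import numpy as np

rng = np.random.default_rng(0)

def crp(n, theta):
    """Order-of-appearance table labels z_1,...,z_n from a CRP with concentration theta."""
    z = [1]                                   # the first customer opens table 1
    counts = [1]                              # occupancy of each table so far
    for _ in range(2, n + 1):
        weights = np.array(counts + [theta], dtype=float)
        table = rng.choice(len(weights), p=weights / weights.sum())
        if table == len(counts):              # a new table is opened
            counts.append(1)
        else:
            counts[table] += 1
        z.append(table + 1)
    return z

if __name__ == "__main__":
    z = crp(20, theta=1.0)
    print(z)                                  # order-of-appearance table labels
    blocks = {}
    for i, zi in enumerate(z, start=1):
        blocks.setdefault(zi, set()).add(i)
    print(list(blocks.values()))              # the induced partition of [20]
```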
using the following notation for therising and falling factorial we find a factor of must occur in the denominator of the probability of the partition of ] for any by construction , we have that equation ( [ eq : crp_eppf ] ) satisfies the consistency condition .it follows that equation ( [ eq : crp_eppf ] ) is , in fact , an eppf . just as we considered an exchangeable , consistent , random partition above, so we now turn to an exchangeable , consistent , random feature allocation .let be any particular feature allocation .in calculating , we start by demonstrating in the next example that this probability in some sense undercounts features when they contain exactly the same indices : for example , for some .for instance , consider the following example .[ ex : two_bern ] let represent the frequencies of features and . draw and , independently . construct the random feature allocation by collecting those indices with successful draws : then the probability of the feature allocation is but the probability of the feature allocation is the difference is that in the latter case the features can be distinguished , and so we must account for the two possible pairings of features to frequencies .now , instead , let be with a uniform random ordering on the features .there is just a single possible ordering of , so the probability of is again however , there are two orderings of , so the probability of is and the same holds for the other ordering . for reasons suggested by the previous example , we will find it useful to work with the random feature allocation after uniform random ordering , .one way to achieve such an ordering and maintain consistency across different is to associate some independent , continuous random variable with each feature ; for example , assign a uniform random variable on ] .next , customer chooses new dishes to try . if , then the dishes receive unique labels . here, represents the number of sampled dishes after customers : .an example of the first few steps in the indian buffet process is shown in figure [ fig : ibp ]. indicates customer has sampled dish , and a white box indicates the customer has not sampled the dish . in the example, the second customer has sampled exactly those dishes indexed by 2 , 4 and 5 : . ] with this generative model in hand , we can find the probability of a particular feature allocation .we discover its form by enumeration as for the crp eppf in example [ ex : epf_crp ] . at each round , we have a poisson number of new features , , represented . the probability factor associated with these choicesis a product of poisson densities : let be the round on which the dish , in order of appearance , is first chosen. then the denominators for future dish choice probabilities are the factors in the product .the numerators for the times when the dish is chosen are the factors in the product .the numerators for the times when the dish is not chosen yield .let represent the collection of indices in the feature with label after customers have entered the restaurant. then .finally , let be the multiplicities of unique feature blocks formed by this model .we note that there are \bigg/ \biggl [ \prod _ { h=1}^{h } { \tilde{k}}_{h } !\biggr]\ ] ] rearrangements of the features generated by this process that all yield the same feature allocation .since they all have the same generating probability , we simply multiply by this factor to find the feature allocation probability . 
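Before assembling these factors into the feature allocation probability (done next), the IBP generative process itself is easy to simulate: customer n samples each previously seen dish independently with probability equal to its popularity divided by the concentration-adjusted count, and then takes a Poisson number of new dishes. The sketch below uses the two-parameter (mass, concentration) form implicit in this example and is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibp(n, theta=1.0, gamma=1.0):
    """Sample a feature allocation of [n] from the two-parameter Indian buffet process.
    Returns a list of index sets, one per sampled dish, in order of appearance."""
    dishes = []                                  # dishes[k] = set of customers who took dish k
    for cust in range(1, n + 1):
        denom = theta + cust - 1.0
        # previously sampled dishes: take dish k with probability m_k / (theta + cust - 1)
        for block in dishes:
            if rng.random() < len(block) / denom:
                block.add(cust)
        # new dishes: a Poisson(theta * gamma / (theta + cust - 1)) number of them
        for _ in range(rng.poisson(theta * gamma / denom)):
            dishes.append({cust})
    return dishes

if __name__ == "__main__":
    alloc = ibp(10, theta=1.0, gamma=2.0)
    for k, block in enumerate(alloc, start=1):
        print(f"dish {k}: customers {sorted(block)}")
```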
multiplying all factors together and taking yields \\ & & \qquad { } \cdot \biggl [ \prod_{k=1}^{k_{n } } \frac{\gamma({{\theta}}+ m_{k})}{\gamma({{\theta}}+ n ) } \gamma(n_{n , k } ) \frac{\gamma({{\theta}}+ n - n_{n , k})}{\gamma({{\theta}}+m_{k}-1 ) } \biggr ] \\ & & \quad= \biggl ( \prod_{h=1}^{h } { \tilde{k}}_{h } !\biggr)^{-1 } \biggl [ \prod _ { n=1}^{n } ( { { \theta}}{\gamma})^{k^{+}_{n } } \exp \biggl ( -\frac { { { \theta}}{\gamma}}{{{\theta}}+ n -1 } \biggr ) \biggr ] \\ & & \qquad{}\cdot \biggl [ \frac{\prod_{k=1}^{k_{n } } ( \theta+ m_{k } - 1)}{\prod_{n=1}^{n } ( \theta+ n - 1)^{k_{n}^{+ } } } \biggr ] \\ & & \qquad { } \cdot \biggl [ \prod_{k=1}^{k_{n } } \frac{\gamma(n_{n , k } ) \gamma({{\theta}}+ n - n_{n , k})}{\gamma({{\theta}}+ n ) } \biggr ] \\ & & \quad= \biggl ( \prod_{h=1}^{h } { \tilde{k}}_{h } ! \biggr)^{-1 } ( { { \theta}}{\gamma})^{k_{n } } \\ & & \qquad{}\cdot\exp \biggl ( -{{\theta}}{\gamma}\sum_{n=1}^{n } ( { { \theta}}+ n - 1)^{-1 } \biggr)\\ & & \qquad{}\cdot \prod_{k=1}^{k_{n } } \frac{\gamma(n_{n , k } ) \gamma ( n - n_{n , k}+{{\theta}})}{\gamma(n+{{\theta}})}.\end{aligned}\ ] ] it follows from equation ( [ eq : order_mult ] ) that the probability of a uniform random ordering of the feature allocation is the distribution of has no dependence on the ordering of the indices in ] yields an exchangeable probability function , for example , the eppf in the crp case ( example [ ex : epf_crp ] ) and the efpf in the ibp case ( example [ ex : epf_ibp ] ) .this conditional distribution is often called a _ prediction rule _ , and study of the prediction rule in the clustering case may be referred to as _ species sampling _ ( ; ; ) .we will see next that the prediction rule can conversely be recovered from the exchangeable probability function specification and , therefore , the two are equivalent . in examples [ ex :epf_crp ] and [ ex : epf_ibp ] above , we formed partitions and feature allocations in the following way . for partitions , we assigned labels to each index .then we generated a partition of ] from the sequence by first letting be the set of unique values in . then the features are the collections of indices with shared labels : . the resulting feature allocation is called the _ induced feature allocation _ given the labels .similarly , given label collections , where each has finite cardinality , we can form an induced feature allocation of .as in the partition case , given a sequence , we can see that the induced feature allocations of the subsequences will be consistent . in reducing to a partition or feature allocation from a set of labels , we shed the information concerning the labels for each partition block or feature .conversely , we introduce _ order - of - appearance _ labeling schemes to give partition blocks or features labels when we have , respectively , a partition or feature allocation . in the partition case, the order - of - appearance labeling scheme assigns the label 1 to the partition block containing index 1 .recursively , suppose we have seen indices in different blocks with labels . and suppose the index does not belong to an existing block .then we assign its block the label .in the feature allocation case , we note that index 1 belongs to features .if , there are no features to label yet . if , we assign these features labels in . 
unless otherwise specified , we suppose that the labels are chosen uniformly at randomlet .recursively , suppose we have seen indices and different features with labels .suppose the index belongs to features that have not yet been labeled .let .if , there are no new features to label .if , assign these features labels in , for example , uniformly at random .we can use these labeling schemes to find the prediction rule , which makes use of partition block and feature labels , from the eppf or efpf as appropriate .first , consider a partition with eppf .then , given labels with , we wish to find the distribution of the label . using an order - of - appearance labeling , we know that either or . let be the partition induced by .let .let be the indicator of event ; that is , equals 1 if holds and 0 otherwise .let for , and set for completeness . is the number of partition blocks in the partition of ] .using an order - of - appearance labeling , we know that , if , the new features have labels . let be the feature allocation induced by .let be the size of the feature .so , where we let for all of the features that are first exhibited by index : .further , let the number of features , including new ones , be written .then the conditional distribution satisfies as we assume that the labels are consistentacross , the probability of a certain labeling is just the probability of the underlying ordered feature allocation times a combinatorial term .the combinatorial term accounts first for the uniform ordering of the new features among themselves for labeling and then for the uniform ordering of the new features among the old features in the overall uniform random ordering : \nonumber\\ & & \qquad { } \cdot\frac { { { p}}(n , n_{n+1,1},\ldots , n_{n+1,k_{n+1 } } ) } { { { p}}(n , n_{n,1},\ldots , n_{n , k_{n } } ) } \nonumber\\ & & \quad= \frac{1}{k_{n+1}^{+ } ! }\cdot \frac{k_{n+1}!}{k_{n}!}\nonumber\\ & & \qquad { } \cdot\frac { { { p}}(n , n_{n+1,1},\ldots , n_{n+1,k_{n+1 } } ) } { { { p}}(n , n_{n,1},\ldots , n_{n , k_{n } } ) } .\end{aligned}\ ] ] [ ex : cond_ibp ] just as we derived the chinese restaurant process prediction rule [ equation ( [ eq : pred_crp_derived ] ) ] from its eppf [ equation ( [ eq : crp_eppf ] ) ] in example [ ex : cond_crp ] , so can we derive the indian buffet process prediction rule from its efpf [ equation ( [ eq : ibp_efpf ] ) ] by using equation ( [ eq : pred_from_efpf ] ) . substituting the ibp efpf into equation ( [ eq : pred_from_efpf ] ), we find \\ & & \qquad\bigg/\biggl\{\biggl(\frac{1}{k_{n}!}\biggr ) ( { { \theta}}{\gamma})^{k_{n } } \\ & & \hspace*{30pt}\quad{}\cdot\exp \biggl ( -{{\theta}}{\gamma}\sum_{n=1}^{n } ( { { \theta}}+ n - 1)^{-1 } \biggr)\\ & & \hspace*{32pt}\quad{}\cdot \biggl[\prod_{k=1}^{k_{n } } { \gamma ( n_{n , k } ) \gamma(n - n_{n , k}+{{\theta}})}\\ & & \hspace*{112pt}\qquad{}/{\bigl(\gamma(n+{{\theta}})\bigr)}\biggr]\biggr\ } \\ & & \quad= \biggl [ \frac{1}{k_{n+1}^{+ } ! 
} \exp \biggl(- \frac{{{\theta}}{\gamma } } { \theta+ ( n+1 ) - 1 } \biggr ) \\ & & \hspace*{57pt}{}\cdot\biggl(\frac{{{\theta}}{\gamma } } { \theta+ ( n+1 ) - 1 } \biggr)^{k_{n+1}^{+ } } \biggr ] \\ & & \qquad { } \cdot\bigl({{\theta}}+ ( n+1 ) - 1\bigr)^{k_{n+1}^{+}}\\ & & \qquad { } \cdot \biggl [ \prod _ { k = k_{n}+1}^{k_{n+1 } } \bigl({{\theta}}+ ( n+1 ) - 1 \bigr)^{-1 } \biggr ] \\ & & \qquad { } \cdot\prod_{k=1}^{k_{n } } \frac{n_{k}^{{\mathbh{1}}\{k \in z\ } } ( n - n_{n , k } + { { \theta}})^{{\mathbh{1}}\{k \notin z\ } } } { n+{{\theta } } } \\ & & \quad= { \operatorname{pois}}\biggl ( k_{n+1}^{+ } \big| \frac{{{\theta}}{\gamma } } { \theta+ ( n+1 ) - 1 } \biggr)\\ & & \qquad { } \cdot\prod_{k=1}^{k_{n } } { \operatorname{bern}}\biggl ( { \mathbh{1}}\{k \in z\}\big | \frac { n_{n , k}}{n + { { \theta } } } \biggr).\end{aligned}\ ] ] the final line is exactly the poisson distribution for the number of new features times the bernoulli distributions for the draws of existing features , as described in example [ ex : epf_ibp ] .the prediction rule formulation of the eppf or efpf is particularly useful in providing a means of inferring partitions and feature allocations from a data set . in particular, we assume that we have data points generated in the following manner . in the partition case, we generate an exchangeable , consistent , random partition according to the distribution specified by some eppf .next , we assign each partition block a random parameter that characterizes that block . to be precise , for the partition block to appear according to an order - of - appearance labeling scheme , give this block a new _ random _label , for some continuous distribution . for each , let where is the order - of - appearance label of index .finally , let for some distribution with parameter .the choices of both and are specific to the problem domain . without attempting to survey the vast literature on clustering ,we describe a stylized example to provide intuition for the preceding generative model . in this example , let index an animal observed in the wild ; indicates that animals and belong to the same ( latent , unobserved ) species ; is a vector describing the ( latent , unobserved ) height and weight for that species ; and is the observed height and weight of the animal . need not even be directly observed , but equation ( [ eq : likelihood ] ) together with an eppf might be part of a larger generative model . in a generalization of the previous stylized example , indicates the dominant species in the geographical region ; indicates some overall species height and weight parameters ( for the species ) ; indicates the height and weight parameters for species in the region .that is , the height and weight for the species may vary by region .we measure and observe the height and weight of some animals in the region , believed to be i.i.d. draws from a distribution depending on . 
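The generative model just described, an exchangeable partition, one parameter per block, and a likelihood for each data point given its block's parameter, can be written compactly in code. In the sketch below the CRP plays the role of the EPPF, the block parameters are Gaussian means, and observations are Gaussian around their block's parameter; all of these concrete choices, and the small crp helper, are illustrative assumptions rather than anything fixed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def crp(n, theta):
    """Order-of-appearance block labels from a CRP(theta), as in the earlier sketch."""
    z, counts = [1], [1]
    for _ in range(2, n + 1):
        w = np.array(counts + [theta], float)
        k = rng.choice(len(w), p=w / w.sum())
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
        z.append(k + 1)
    return z

def crp_mixture(n, theta=1.0, prior_scale=5.0, noise_scale=0.5):
    """Draw (z, phi, x): a partition via the CRP, a parameter phi_k ~ N(0, prior_scale^2)
    for each block, and observations x_i ~ N(phi_{z_i}, noise_scale^2)."""
    z = crp(n, theta)
    n_blocks = max(z)
    phi = rng.normal(0.0, prior_scale, size=n_blocks)    # one parameter per block
    x = rng.normal(phi[np.array(z) - 1], noise_scale)    # data given the block parameters
    return z, phi, x

if __name__ == "__main__":
    z, phi, x = crp_mixture(15)
    for zi, xi in zip(z, x):
        print(f"block {zi}: parameter {phi[zi - 1]: .2f}, observation {xi: .2f}")
```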
note that the sequence is sufficient to describe the partition is the collection of blocks of ] , such that each random label is drawn independently from the rest .this construction is the same as the one used for parameter generation in section [ sec : epf_infer ] , and is exchangeable by the same arguments used there .let equal exactly when belongs to the partition with this label .if we apply de finetti s theorem to the sequence and note that has at most countably many different values , we see that there exists some random sequence such that ] as follows .let represent a sequence of values in ] such that .we generate feature collections independently for each index as follows .start with . for each feature , add to the set , independently from all other features , with probability .let be the induced feature allocation given .exchangeability of follows from the i.i.d .draws of , and consistency follows from the induced feature allocation construction .the finite sum constraint ensures each index belongs to a finite number of features a.s .it remains to specify a distribution on the partition or feature frequencies .the frequencies can not be i.i.d .due to the finite summation constraint in both cases . in the partition case ,any infinite set of frequencies can not even be independent since the summation is fixed to one .one scheme to ensure summation to unity is called _ stick - breaking _ ( ; ; ; ) . in stick - breaking , the stick lengths are obtained by recursively breaking off parts of the unit interval to return as the atoms ( cf . figure [ fig : stick_illus ] ) .in particular , we generate stick - breaking proportions as ] for each and .if the do not decay too rapidly , we will have .in particular , the partition block proportions sum to unity a.s .iff there is no remaining stick mass : .we often make the additional , convenient assumption that the are independent . in this case, a necessary and sufficient condition for is = -\infty ] .finally , with the vector of table frequencies , each customer sits independently and identically at the corresponding vector of tables according to these frequencies .this process is summarized here : to see that this process is well - defined , first note that ] , so by the discussion before this example , we must have .the feature case is easier .since it does not require the frequencies to sum to one , the random frequencies can be independent so long as they have an a.s .finite sum .[ ex : stick_ibp ] as in the case of the crp , we can recover the stick lengths for the indian buffet process using an argument based on an urn model .recall that on the first round of the indian buffet process , features are chosen to contain index .consider one of the features , labeled . by construction, each future data point belongs to this feature with probability .thus , we can model the sequence after the first data point as a plya urn of the sort encountered in example [ ex : stick_crp ] with initially gray balls , white balls and replacement balls .as we have seen , there exists a random variable such that representation of this feature by data point is chosen , i.i.d . across all , as .since the bernoulli draws conditional on previous draws are independent across all , the are likewise independent of each other ; this fact is also true for in future rounds . 
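Before continuing the urn argument for the IBP, the stick-breaking construction for the CRP block frequencies described above (independent Beta(1, theta) proportions, often called the GEM(theta) distribution) can be simulated with a truncation. The truncation level and the catch-all block for the residual mass in the sketch below are assumptions of the illustration, not part of the construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def gem_sticks(theta, truncation=1000):
    """Truncated GEM(theta) stick lengths: V_k ~ Beta(1, theta),
    rho_k = V_k * prod_{j<k} (1 - V_j). The leftover mass is negligible
    for a sufficiently large truncation."""
    v = rng.beta(1.0, theta, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

def sample_partition(rho, n):
    """Assign n indices to blocks i.i.d. according to the stick lengths rho,
    with a catch-all block for the truncated tail mass."""
    p = np.append(rho, max(1.0 - rho.sum(), 0.0))
    labels = rng.choice(len(p), size=n, p=p / p.sum())
    blocks = {}
    for i, lab in enumerate(labels, start=1):
        blocks.setdefault(int(lab), set()).add(i)
    return list(blocks.values())

if __name__ == "__main__":
    rho = gem_sticks(theta=1.0)
    print(rho[:5], rho.sum())            # frequencies decay; total mass is close to 1
    print(sample_partition(rho, 20))     # an induced partition of [20]
```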
draws according to such an urnare illustrated in each of the first four columns of the matrix in figure [ fig : polya_ibp ] .now consider any round .according to the ibp construction , new features are chosen to include index .each future data point ( with ) represents feature among these features with probability . in this case, we can model the sequence after the data point as a plya urn with initial gray balls , initial white balls and replacement balls .so there exists a random variable such that representation of feature by data point is chosen , i.i.d . across all , as . finally , then , we have the following generative model for the feature allocation by iterating across : \\[-8pt ] \eqntext { k = k_{n-1 } + 1,\ldots , k_{n } , } \\ \nonumber { i}_{n , k } & \stackrel{\mathrm{indep } } { \sim } & { \operatorname{bern } } ( { v}_{k}),\quad k = 1 , \ldots , k_{n}.\end{aligned}\ ] ] is an indicator random variable for whether feature contains index .the collection of features to which index belongs , , is the collection of features with . as we have seen above ,the exchangeable probability functions of section [ sec : epf ] are the marginal distributions of the partitions or feature allocations generated according to stick - length models with the stick lengths integrated out .it has been proposed that including the stick lengths in mcmc samplers of these models will improve mixing ( ishwaran and zarepour , ) . while it is impossible to sample the countably infinite set of partition block or feature frequencies in these models ( cf .examples [ ex : stick_crp ] and [ ex : stick_ibp ] ) , a number of ways of getting around this difficulty have been investigated . examine two separate finite approximations to the full crp stick - length model : one uses a parametric approximation to the full infinite model , and the other creates a truncation by setting the stick break at some fixed size to be 1 : .there also exist techniques that avoid any approximations and deal instead directly with the full model , in particular , retrospective sampling ( papaspiliopoulos and roberts , ) and slice sampling .while our discussion thus far has focused onmcmc sampling as a means of approximating the posterior distribution of either the block assignments or both the block assignments and stick lengths , including the stick lengths in a posterior analysis facilitates a different posterior approximation ; in particular , _ variational methods _ can also be used to approximate the posterior .these methods minimize some notion of distance to the posterior over a family of potential approximating distributions . the practicality and , indeed , speed of these methods in the case of stick - breaking for the crp ( example [ ex : stick_crp ] )have been demonstrated by .a number of different models for the stick lengths corresponding to the features of an ibp ( example [ ex : stick_ibp ] ) have been discovered .the distributions described in example [ ex : stick_ibp ] are covered by , who build on work from , .a special case of the ibp is examined by , who detail a slice sampling algorithm for sampling from the posterior of the stick lengths and feature assignments . 
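To make the preceding construction concrete before surveying further stick-length models, the round-by-round IBP stick lengths can be simulated directly: round n contributes a Poisson(theta*gamma/(theta+n-1)) number of new features, each with an independent Beta(1, theta+n-1) frequency, and every later index joins each existing feature independently with probability equal to its stick length. The Python sketch below is illustrative only, with arbitrary parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibp_sticks(n, theta=1.0, gamma=1.0):
    """Stick lengths and the induced feature allocation of [n], generated round by round:
    new sticks on round m are Beta(1, theta + m - 1), and there are
    Poisson(theta*gamma/(theta + m - 1)) of them."""
    sticks, features = [], []
    for m in range(1, n + 1):
        # features created on earlier rounds: index m joins each with prob. = stick length
        for v, block in zip(sticks, features):
            if rng.random() < v:
                block.add(m)
        # new features first exhibited by index m
        k_new = rng.poisson(theta * gamma / (theta + m - 1.0))
        for _ in range(k_new):
            sticks.append(rng.beta(1.0, theta + m - 1.0))
            features.append({m})
    return np.array(sticks), features

if __name__ == "__main__":
    v, feats = ibp_sticks(10, theta=1.0, gamma=2.0)
    for vk, block in zip(v, feats):
        print(f"frequency {vk:.3f} -> indices {sorted(block)}")
```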
yet another stick - length model for the ibpis explored by , who show how to apply variational methods to approximate the posterior of their model .stick - length modeling has the further advantage of allowing inference in cases where it is not straightforward to integrate out the underlying stick lengths to obtain a tractable exchangeable probability function .an important point to reiterate about the labels and label collections is that when we use the order - of - appearance labeling scheme for partition or feature blocks described above , the random sequences and are not exchangeable . often , however , we would like to make use of special properties of exchangeability when dealing with these sequences .for instance , if we use markov chain monte carlo to sample from the posterior distribution of a partition ( cf .section [ sec : epf_infer ] ) , we might want to gibbs sample the cluster assignment of data point given the assignments of the remaining data points : given .this sampling is particularly easy in some cases if we can treat as the last random variable in the sequence , but this treatment requires exchangeability . a way to get around this dilemmawas suggested by and appeared above in our motivation for using stick lengths .namely , we assign to the partition block a uniform random label ) ] .we can see that in both cases , all of the labels are a.s .now , in the partition case , let be the uniform random label of the partition block to which belongs . andin the feature case , let be the ( finite ) set of uniform random feature labels for the features to which belongs .we can recover the partition or feature allocation as the induced partition or feature allocation by grouping indices assigned to the same label .moreover , as discussed above , we now have that each of and is an exchangeable sequence .if we form partitions or features according to the stick - length constructions detailed in section [ sec : stick ] , we know that each unique partition or feature label is associated with a frequency . we can use this association to form a random measure : where is a unit point mass located at . in the partition case , , so the random measure is a random probability measure , and we may draw . in the feature case ,the weights have a finite sum but do not necessarily sum to one . in the feature case ,we draw by including each for which yields a draw of 1 .another way to codify the random measure in equation ( [ eq : rand_meas ] ) is as a monotone increasing stochastic process on ] . or , in general , the normalized jumps may be used as partition block frequencies .we can see from the right - hand side of figure [ fig : subordinator ] that the jumps of a subordinator partition intervals of the form , as long as the subordinator has no drift component . in either the feature or cluster case ,we have substituted the condition of independent and identical distribution for the partition or feature frequencies ( i.e. 
, the jumps ) with a more natural continuous - time analogue : independent , stationary intervals .just as the laplace transform of a positive random variable characterizes the distribution of that random variable , so does the laplace transform of the subordinator which is a positive random variable at any fixed time point describe this stochastic process ( ) .[ thm : lk ] if is a subordinator , then for we have with where is called the drift constant and is a nonnegative , lvy measure on .the function is called the _laplace exponent _ in this context .we note that a subordinator is characterized by its drift constant and lvy measure .using subordinators for feature allocation modeling is particularly easy ; since the jumps of the subordinators are formed by a poisson point process , we can use poisson process methodology to find the stick lengths and efpf . to set up this derivation ,suppose we generate feature membership from a subordinator by taking bernoulli draws at each of its jumps with success probability equal to the jump size .since every jump has strictly positive size , the feature associated with each jump will eventually score a bernoulli success for some index with probability one .therefore , we can enumerate all jumps of the process in order of appearance ; that is , we first enumerate all features in which index appears , then all features in which index appears but not index , and so on . at the iteration , we enumerate all features in which index appears but not previous indices .let represent the number of indices so chosen on the round .let so that recursively is the number of subordinator jumps seen by round , inclusive .let for be the distribution of a particular subordinator jump seen on round .we now turn to connecting the subordinator perspective to the earlier derivation of stick lengths in section [ sec : stick ] .[ ex : sub_ibp ] in our earlier discussion , we found a collection of stick lengths to represent the featural frequencies for the ibp [ equation ( [ eq : beta_proc_stick_lengths ] ) of example [ ex : stick_ibp ] in section [ sec : stick ] ] . to see the connection to subordinators , we start from the _ beta process subordinator _ with zero drift ( ) and lvy measure we will see that the mass parameter and concentration parameter are the same as those introduced in example [ ex : epf_ibp ] and continued in example [ ex : stick_ibp ] .[ thm : sub_sticks ] generate a feature allocation from a beta process subordinator with lvy measure given by equation ( [ eq : beta_levy ] ) .then the sequence of subordinator jumps , indexed in order of appearance , has the same distribution as the sequence of ibp stick lengths described by equations ( [ eq : beta_proc_num_sticks ] ) and ( [ eq : beta_proc_stick_lengths ] ) .-axis values of the filled black circles , emphasized by dotted lines , are generated according to a poisson process .the ] .the `` thinned '' points are the collection of -axis values corresponding to vertical axis values below and are denoted with a symbol . ]recall the following fact about poisson thinning , illustrated in figure [ fig : thinning ] .suppose that a poisson point process with rate measure generates points with values .then suppose that , for each such point , we keep it with probability ] ; a random measure with finite total mass is not sufficient in the partition case .hence , we must compute the stick lengths and eppf using partition block frequencies from these normalized jumps instead of directly from the subordinator jumps . 
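the bernoulli-draws-at-the-jumps recipe can be implemented directly once the jumps are available. the sketch below specializes to concentration theta = 1, where the lévy measure above reduces to gamma * w^{-1} dw on (0, 1], discards the countably many jumps below a small cutoff eps (an approximation of this sketch, not of the paper), samples the remaining jumps from their poisson process, and then forms a feature allocation by bernoulli draws at the jump sizes; the jump locations, drawn from an illustrative gaussian base measure, play the role of per-feature labels.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, eps, N = 3.0, 1e-4, 10       # mass parameter, jump-size cutoff, data points

# approximate beta process subordinator with concentration theta = 1: the jumps
# above eps form a poisson process with rate measure gamma * w^{-1} dw on (eps, 1]
total_mass = gamma * np.log(1.0 / eps)
n_atoms = rng.poisson(total_mass)
w = eps ** rng.uniform(size=n_atoms)          # inverse-cdf sample from the w^{-1} density
loc = rng.normal(size=n_atoms)                # jump locations ~ base measure N(0, 1)

# feature membership: each data point owns each jump independently with prob. w_k
Z = rng.uniform(size=(N, n_atoms)) < w
print("features per data point (fluctuates around gamma =", gamma, "):", Z.sum(axis=1))
print("labels of the features owned by data point 1:", np.round(loc[Z[0]], 2))
```

the printed feature counts per data point fluctuate around gamma, the marginal behaviour expected of the indian buffet process, and are insensitive to the cutoff because the discarded small jumps are almost never selected.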
in the eppf case, we make use of a result that gives us the exchangeable probability function as a function of the laplace exponent .though we do not derive this formula here , its derivation can be found in ; the proof relies on , first , calculating the joint distribution of the subordinator jumps and partition generated from the normalized jumps and , second , integrating out the subordinator jumps to find the partition marginal .[ thm : sub_eppf ] form a probability measure by normalizing jumps of the subordinator with laplace exponent . let be a consistent set of exchangeable partitions induced by i.i.d. draws from . for each exchangeable partition of ] belong , and suppose there are clusters among ] and the uniform distribution on ] and the uniform distribution on ]-valued coordinate ( leftmost axis ) as the atom weights , we find the measure ( a beta process ) on from equation ( [ eq : beta_ppp_crm ] ) in the bottom plane . ] now consider sampling a collection of atom locations according to bernoulli draws from the atom weights of a beta process and forming the induced feature allocation of the data indices .theorem [ thm : sub_sticks ] shows us that the distribution of the induced feature allocation is given by the indian buffet process efpf . in this section we finally study the full model first outlined in the context of inference of partition and feature structures in section [ sec : epf_infer ] .the partition or feature labels described in this section are the same as the block - specific parameters first described in section [ sec : epf_infer ] .since this section focuses on a generalization of the partition or feature labeling scheme beyond the uniform distribution option encoded in subordinators , inference for the atom weights remains unchanged from sections [ sec : epf_infer ] , [ sec : stick_infer ] and [ sec : sub_infer ] .however , we note that , in the course of inferring underlying partition or feature structures , we are often also interested in inferring the parameters of the generative model of the data given the partition block or the feature labels .conditional on the partition or feature structure , such inference is handled as in a normal hierarchical model with fixed dependencies .namely , the parameter within a particular block may be inferred from the data points that depend on this block as well as the prior distribution for the parameters .details for the dirichlet process example inferred via mcmc sampling are provided by , , ; work out details for the dirichlet process using variational methods . in the beta process case , , , describe mcmc sampling , and describe a variational approach .in the discussion above we have pursued a progressive augmentation from ( 1 ) simple distributions over partitions and feature allocations in the form of exchangeable probability functions to ( 2 ) the representation of stick lengths encoding frequencies of the partition block and feature occurrences to ( 3 ) subordinators , which associate random -valued labels with each partition block or feature , and finally to ( 4 ) completely random measures , which associate a general class of labels with the stick lengths and whose labels we generally use as parameters in likelihood models built from the partition or feature allocation representation . along the way, we have focused primarily on two vignettes .we have shown , via these successive augmentations , that the chinese restaurant process specifies the marginal distribution of the induced partition formed from i.i.d. 
draws from a dirichlet process, which is in turn a normalized completely random measure. and we have shown that the indian buffet process specifies the marginal distribution of the induced feature allocation formed by i.i.d. bernoulli draws across the weights of a beta process. there are many extensions of these ideas that lie beyond the scope of this paper. a number of extensions of the crp and dirichlet process exist in the eppf form, the stick-length form (dunson and park), or the random measure form (pitman and yor). likewise, extensions of the ibp and beta process have been explored. more generally, the framework above demonstrates how alternative partition and feature allocation models may be constructed by introducing different eppfs or efpfs, different stick-length distributions, or different random measures. finally, we note that expanding the set of combinatorial structures with useful bayesian priors from partitions to the superset of feature allocations suggests that further such structures might usefully be treated in the same way. for instance, the _ beta negative binomial process _ provides a prior on a generalization of a feature allocation in which the features themselves are multisets; that is, each index may have nonnegative integer multiplicities of features. models on trees, graphs, and permutations provide avenues for future exploration, and there likely remain further structures to be fitted out with useful bayesian priors. t. broderick's research was funded by a national science foundation graduate research fellowship. this material is supported in part by the national science foundation award 0806118 combinatorial stochastic processes and is based upon work supported in part by the office of naval research under contract/grant number n00014-11-1-0688.
|
one of the focal points of the modern literature on bayesian nonparametrics has been the problem of _ clustering _, or _ partitioning _, where each data point is modeled as being associated with one and only one of some collection of groups called clusters or partition blocks. underlying these bayesian nonparametric models is a set of interrelated stochastic processes, most notably the dirichlet process and the chinese restaurant process. in this paper we provide a formal development of an analogous problem, called _ feature modeling _, for associating data points with arbitrary nonnegative integer numbers of groups, now called features or topics. we review the existing combinatorial stochastic process representations for the clustering problem and develop analogous representations for the feature modeling problem. these representations include the beta process and the indian buffet process as well as new representations that provide insight into the connections between these processes. we thereby bring the same level of completeness to the treatment of bayesian nonparametric feature modeling that has previously been achieved for bayesian nonparametric clustering.
|
quantum state estimation ( qse ) the methods , procedures , and algorithms by which one converts tomographic experimental data into an educated guess about the state of the quantum system under investigation provides just that : an estimate of the _state_. for high - dimensional systems , such a state estimate can be hard to come by .but one is often not even interested in all the details the state conveys and rather cares only about the values of a few functions of the state .for example , when a source is supposed to emit quantum systems in a specified target state , the fidelity between the actual state and this target could be the one figure of merit we want to know .then , a direct estimate of the few properties of interest , without first estimating the quantum state , is more practical and more immediately useful .the full state estimate may not even be available in the first place , if only measurements pertinent to the quantities of interest are made instead of a tomographically complete set , the latter involving a forbidding number of measurement settings in high dimensions .furthermore , even if we have a good estimate for the quantum state , the values of the few properties of interest computed from this state may not be , and often are not , the best guess for those properties ( see an illustration of this point in sec .[ sec : scpr ] ) .therefore , we need to supplement qse with spe state - property estimation , that is : methods , procedures , and algorithms by which one directly arrives at an educated guess for the few properties of interest .several schemes have been proposed for determining particular properties of the quantum state .these are prescriptions for the measurement scheme , and/or estimation procedure from the collected data .for example , there are schemes for measuring the traces of powers of the statistical operator , and then perform separability tests with the numbers thus found .alternatively , one could use likelihood ratios for an educated guess whether the state is separable or entangled .other schemes are tailored for measuring the fidelity with particular target states , yet another can be used for estimating the concurrence .schemes for measuring other properties of the quantum state can be found by paris s method .many of these schemes are property specific , involving sometimes ad - hoc estimation procedures well - suited for only those properties of interest . here , in full analogy to the _ state _ error regions of ref . for qse , we describe general - purpose optimal error intervals for spe , from measurement data obtained from generic tomographic measurements or property - specific schemes like those mentioned above . following the maximum - likelihood philosophy for statistical inference , these error intervals give precise `` error bars '' around the maximum - likelihood ( point ) estimator for the properties in question consistent with the data .according to the bayesian philosophy , they are intervals with a precise probability ( credibility ) of containing the true property values . as is the case for qse error regions , these spe error intervals are optimal in two ways .first , they have the largest likelihood for the data among all the intervals of the same size .second , they are smallest among all regions of the same credibility . here, the natural notion of the size of an interval is its prior content , i.e. 
, our belief in the interval s importance before any data are taken ; the credibility of an interval is its posterior after taking the data into account content .we will focus on the situation in which a single property of the state is of interest .this is already sufficient for illustration , but is not a restriction of our methods .( note : if there are several properties of interest and a consistent set of values is needed , they should be estimated jointly , not one - by - one , to ensure that constraints are correctly taken into account . )the optimal error interval is a range of values for this property that answers the question : given the observed data , how well do we know the value of the property ?this question is well answered by the above - mentioned generalization of the maximum - likelihood point estimator to an interval of most - likely values , as well as the dual bayesian picture of intervals of specified credibility .our error interval is in contrast to other work based on the frequentists concept of confidence regions / intervals , which answer a different question pertaining to all possible data that could have been observed but is not the right concept for drawing inference from the actual data acquired in a single run ( see appendix [ sec : appcc ] ) .as we will see below , the concepts and strategies of the optimal error regions for qse carry over naturally to this spe situation .however , additional methods are needed for the specific computational tasks of spe .in particular , there is the technical challenge of computing the property - specific likelihood : in qse , the likelihood for the data as a function over the state space is straightforward to compute ; in spe , the relevant likelihood is the property - specific _ marginal likelihood _ , which requires an integration of the usual ( state ) likelihood over the `` nuisance parameters '' that are not of interest .this can be difficult to compute even in classical statistics . here, we offer an iterative algorithm that allows for reliable estimation of this marginal likelihood .in addition , we point out the connection between our optimal error intervals and _ plausible intervals _ , an elegant notion of evidence for property values supported by the observed data .plausible intervals offer a complementary understanding of our error intervals : plausibility identifies a unique error interval that contains all values for which the data are in favor of , with an associated critical credibility value .here is a brief outline of the paper .we set the stage in sec .[ sec : stage ] where we introduce the reconstruction space and review the notion of size and credibility of a region in the reconstruction space .analogously , we identify the size and credibility of a range of property values in sec .[ sec : scpr ] .then , the flexibility of choosing priors in the property - value space is discussed in sec .[ sec : prior ] . with these tools at hand , we formulate in sec . 
[ sec : pe - oei ] the point estimators as well as the optimal error intervals for spe .section [ sec : evidence ] explains the connection to plausible regions and intervals .section [ sec : mcint ] gives an efficient numerical algorithm that solves the high - dimensional integrals for the size and credibility .we illustrate the matter by simulated single - qubit and two - qubit experiments in secs .[ sec:1qubit ] and [ sec:2qubit ] , and close with a summary .additional material is contained in several appendixes : the fundamental differences between bayesian credible intervals and the confidence intervals of frequentism are the subject matter of appendix [ sec : appcc ] .appendixes [ sec : appa ] and [ sec : appb ] deals with the limiting power laws of the prior - content functions that are studied numerically in sec .[ sec:2qubit ] . for ease of reference ,a list of the various prior densities is given in appendix [ sec : appc ] and a list of the acronyms in appendix [ sec : appd ] .as in refs . , we regard the probabilities of a measurement with outcomes as the basic parameters of the quantum state . the born rule states that the probability is the expectation value of the probability operator in state .together , the probability operators constitute a probability - operator measurement ( pom ) , where is the identity operator .the pom is fully tomographic if we can infer a unique state when the values of all are known .if the measurement provides partial rather than full tomography , we choose a suitable set of statistical operators from the state space , such that the mapping is one - to - one ; this set is the reconstruction space . while there is no unique or best choice for the `` suitable set '' that makes up , the intended use for the state , once estimated , may provide additional criteria for choosing the reconstruction space . as far as qse and spe are concerned , however , the particulars of the mapping do not matter at all . yet , that there is such a mapping , permits viewing a region in also as a region in the probability space , and we use the same symbols in both cases whenever the context is clear .note , however , that while the probability space in which the numerical work is done is always convex , the reconstruction space of states may or may not be .examples for that can be found in where various aspects of the mapping are discussed in the context of measuring pairwise complementary observables . the parameterization of the reconstruction space in terms of the probabilities gives us for the volume element prior element in , where is the volume element in the probability space .the factor accounts for all the constraints that the probabilities must obey , among them the constraints that follow from the positivity of in conjunction with the quantum - mechanical born rule .other than the mapping , this is the _ only _ place where quantum physics is present in the formalism of qse and spe . yet , the quantum constraints in are the defining feature that distinguishes quantum state estimation from non - quantum state estimation. probabilities that obey the constraints are called `` physical '' or `` permissible '' . 
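as a minimal illustration of the born-rule mapping, the sketch below evaluates p_k = tr(rho Pi_k) for a single-qubit pom; the tetrahedron pom and the chosen state are illustrative and are not the measurements analyzed later in the paper. for a valid state the resulting probabilities are automatically permissible: they are nonnegative and sum to one because the probability operators sum to the identity.

```python
import numpy as np

# pauli matrices and a single-qubit "tetrahedron" pom (an illustrative choice)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

a = 1.0 / np.sqrt(3.0)
bloch_axes = np.array([[a, a, a], [a, -a, -a], [-a, a, -a], [-a, -a, a]])
pom = [0.25 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz) for v in bloch_axes]
assert np.allclose(sum(pom), I2)       # the probability operators sum to the identity

def born_probabilities(rho, pom):
    """p_k = tr(rho Pi_k) for every outcome of the pom."""
    return np.array([np.real(np.trace(rho @ Pi)) for Pi in pom])

rho = 0.5 * (I2 + sz)                  # pure state with bloch vector (0, 0, 1)
p = born_probabilities(rho, pom)
print("probabilities:", np.round(p, 4), " sum:", p.sum())
```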
vanishes on the unphysical and is generally a product of step functions and delta functions .the factor in eq .( [ eq:2 - 1 ] ) is the prior density of our choice ; it reflects what we know about the quantum system before the data are taken .usually , the prior density gives positive weight to the finite neighborhoods of all states in ; criteria for choosing the prior are reviewed in appendix a of ref . `` use common sense '' is a guiding principle .although not really necessary , we shall assume that and are normalized , so that we do not need to exhibit normalizing factors in what follows . then , the size of a region , that is : its prior content , is with equality only for .this identification of size and prior content is natural in the context of state estimation ; see for a discussion of this issue . while other contexts may very well have their own natural notions of size , such other contextsdo not concern us here .after measuring a total number of copies of the quantum system and observing the outcome times , the data are the recorded sequence of outcomes ( `` detector clicks '' ) .the probability of obtaining is the point likelihood in accordance with sec . 2.3 in ref . , then , the joint probability of finding in the region and obtaining data is with ( i ) the region likelihood , ( ii ) the credibility the posterior content of the region , and ( iii ) the prior likelihood for the data wish to estimate a particular property , specified as a function of the probabilities , with values between and , the restriction to this convenient range can be easily lifted , of course .usually , there is at first a function of the state , and is the implied function of .we take for granted that the value of can be found without requiring information that is not contained in the probabilities . otherwise , we need to restrict to in . by convention , we use lower - case letters for the functions on the probability space and upper - case letters for the function values .the generic pair is here ; we will meet the pairs and in sec .[ sec:1qubit ] , and the pairs and in sec .[ sec:2qubit ] .a given value , say identifies hypersurfaces in the probability space and the reconstruction space , and an interval corresponds to a region ; see fig .[ fig : regions ] .such a region has size \nonumber\\&= & \int({\mathrm{d}}p)\,w_0(p)\bigl[\eta\bigl(f_2-f(p)\bigr ) -\eta\bigl(f_1-f(p)\bigr)\bigr ] \nonumber\\&= & \int({\mathrm{d}}p)\,w_0(p)\int_{f_1}^{f_2}{\mathrm{d}}f\,\delta\bigl(f - f(p)\bigr)\end{aligned}\ ] ] and credibility \nonumber\\&= & \frac{1}{l(d)}\int({\mathrm{d}}p)\,w_0(p)l(d|p ) \int_{f_1}^{f_2}{\mathrm{d}}f\,\delta\bigl(f - f(p)\bigr)\,,\end{aligned}\ ] ] where is heaviside s unit step function and is dirac s delta function .for an infinitesimal slice , , the size ( [ eq:3 - 2 ] ) identifies the prior element in , and the credibility ( [ eq:3 - 3 ] ) tells us the likelihood of the data for given property value , of course , eqs . ([ eq:3 - 4 ] ) and ( [ eq:3 - 5 ] ) are just the statements of eqs .( [ eq:2 - 4 ] ) and ( [ eq:2 - 6a ] ) in the current context of infinitesimal regions defined by an increment in ; it follows that and are positive everywhere , except possibly for a few isolated values of . to avoid any potential confusion with the likelihood of eq .( [ eq:2 - 5 ] ) , we shall call the -likelihood .in passing , we note that can be viewed as the marginal likelihood of with respect to the probability density in . 
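in a low-dimensional setting, the size and credibility of an interval of property values can be estimated by brute-force monte carlo over the prior, with the point likelihood supplying the weights for the posterior content; this is adequate only because the example is small, which is also why the discussion that follows turns to more careful numerics. everything concrete in the sketch is our own choice: the single-qubit tetrahedron pom of the previous sketch, the property f(p) = (1 + r_z)/2, a uniform (primitive-style) prior on the physical probabilities, and made-up counts n_k.

```python
import numpy as np

rng = np.random.default_rng(4)
a = 1.0 / np.sqrt(3.0)
axes = np.array([[a, a, a], [a, -a, -a], [-a, a, -a], [-a, -a, a]])

def sample_prior(m):
    """uniform prior on the physical probabilities of the tetrahedron pom:
    p uniform on the simplex, kept only if the bloch vector r = 3 * sum_k p_k a_k
    has length <= 1 (this acceptance step plays the role of the constraint factor)."""
    p = rng.dirichlet(np.ones(4), size=4 * m)
    r = 3.0 * (p @ axes)
    return p[np.einsum('ij,ij->i', r, r) <= 1.0][:m]

def f_of_p(p):
    """property of interest for this illustration: f(p) = (1 + r_z)/2."""
    return 0.5 * (1.0 + 3.0 * (p @ axes[:, 2]))

n = np.array([25, 10, 9, 16])          # made-up counts for 60 detected copies

p = sample_prior(200_000)
F = f_of_p(p)
logL = np.log(p) @ n                   # log of the point likelihood L(D|p)
w = np.exp(logL - logL.max())          # likelihood weights; the normalization cancels

interval = (F >= 0.6) & (F <= 0.8)
size = interval.mean()                       # prior content of the interval
credibility = w[interval].sum() / w.sum()    # posterior content of the interval
print(f"size = {size:.3f}, credibility = {credibility:.3f}")
```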
for the computation of , however , standard numerical methods for marginal likelihoods , such as those compared by bos , do not give satisfactory results .the bench marking conducted by bos speaks for itself ; in particular , we note that none of those standard methods has a built - in accuracy check .therefore , we are using the algorithm described in sec .[ sec : mcint ] . in terms of and , a finite interval of values , or the union of such intervals , denoted by the symbol , has the size and the credibility where has the same value as the integral of eq .( [ eq:2 - 7 ] ) . denotes the whole range of property values , where we have .note that the -likelihood is the natural derivative of the interval likelihood , the conditional probability if we now define the -likelihood by the requirement we recover the expression for in eq .( [ eq:3 - 5 ] ) .the prior density and the -likelihood have an implicit dependence on the prior density in probability space , and it may seem that we can not choose as we like , nor would the -likelihood be independent of the prior for .this is only apparently so : as usual , the likelihood does not depend on the prior . when we restrict the prior density to the hypersurface where , we exhibit the implied prior density that tells us the relative weights of within the iso- hypersurface . as a consequence of the normalization of and , which are more explicit versions of and , is also normalized , in a change of perspective , let us now regard and as independently chosen prior densities for all iso- hypersurfaces and for property . since is the coordinate in -space that is normal to the iso- hypersurfaces ( see fig .[ fig : regions ] ) , these two prior densities together define a prior density on the whole probability space , the restriction to a particular value of takes us back to eq . ([ eq:4 - 0a ] ) , as it should . for a prior density of the form ( [ eq:4 - 3 ] ), the -likelihood does not involve and is solely determined by .therefore , different choices for in eq .( [ eq:4 - 3 ] ) do not result in different -likelihoods .put differently , if we begin with some reference prior density , which yields the iso- prior density that we shall use throughout , then is the corresponding prior density for the of our liking .clearly , the normalization of is not important ; more generally yet , the replacement with an arbitrary function has no effect on the right - hand sides of eqs .( [ eq:4 - 0e ] ) , ( [ eq:4 - 0d ] ) , as well as ( [ eq:4 - 5 ] ) below .one can think of this replacement as modifying the prior density in that derives from upon proper normalization .while the -likelihood is the same for all , it will usually be different for different and thus for different .for sufficient data , however , is so narrowly peaked in probability space that it will be essentially vanishing outside a small region within the iso- hypersurface , and then it is irrelevant which reference prior is used . in other words ,the data dominate rather than the priors unless the data are too few .typically , we will have a natural choice of prior density on the probability space and accept the induced and .nevertheless , the flexibility offered by eq .( [ eq:4 - 0d ] ) is useful .we exploit it for the numerical procedure in sec .[ sec : mcint ] . 
in the examples below , we employ two different reference priors .the first is the _ primitive prior _ , so that the density is uniform in over the ( physical ) probability space .the second is the _jeffreys prior _ , which is a common choice of prior when no specific prior information is available . for ease of reference, there is a list of the various prior densities in appendix [ sec : appc ] . in sec .[ sec:1qubit ] , we use and for and then work with the induced priors of eq .( [ eq:3 - 4 ] ) , as this enables us to discuss the difference between direct and indirect estimation in sec . [ sec:1qubitb ]. the natural choice of will serve as the prior density in sec .[ sec:2qubit ] .the -likelihood is largest for the maximum - likelihood estimator , another popular point estimator is the bayesian mean estimator they are immediate analogs of the maximum - likelihood estimator for the state , and the bayesian mean of the state , usually , the value of for one of these state estimators is different from the corresponding estimator , although the equal sign can hold for particular data ; see fig . [fig : regions ] . as an exception ,we note that is always true if is linear in .the observation of eq .( [ eq:5 - 5 ] ) the best guess for the property of interest may not , and often does not , come from the best guess for the quantum state deserves emphasis , although it is not a new insight .for example , the issue is discussed in ref . in the context of confidence regions ( see topic sm4 in the supplemental material ) .we return to this in sec .[ sec:1qubitb ] . for reasons that are completely analogous to those for the optimal error regions in ref . , the optimal error intervals for property are the bounded - likelihood intervals ( blis ) specified by while the set of is fully specified by the -likelihood and is independent of the prior density , the size and credibility of a specific do depend on the choice of .the interval of largest -likelihood for given size the maximum - likelihood interval ( mli ) is the bli with , and the interval of smallest size for given credibility the smallest credible interval ( sci ) is the bli with , where and are the size and credibility of eqs .( [ eq:3 - 6 ] ) and ( [ eq:3 - 7 ] ) evaluated for the interval .we have , , and for , with given by .as increases from to , and decreases monotonically from to .moreover , we have the link between and , exactly as that for the size and credibility of bounded - likelihood regions ( blrs ) for state estimation in ref . .the normalizing integral of the size in the denominator has a particular significance of its own , as is discussed in the next section .as soon as the -likelihood is at hand , it is a simple matter to find the mlis and the scis .usually , we are most interested in the sci for the desired credibility : the actual value of is in this sci with probability . since all blis contain the maximum - likelihood estimator , each bli , and thus each sci , reports an error bar on in this precise sense . 
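once the f-likelihood is available on a grid, the bounded-likelihood intervals and the quantities derived from them are elementary to tabulate. the sketch below uses a synthetic single-peaked likelihood curve and a flat prior on [0, 1] — stand-ins for the curves computed in the paper, not reproductions of them — to locate the 95% smallest credible interval and, anticipating the plausible interval introduced in the section that follows, the critical lambda that maximizes c_lambda − s_lambda.

```python
import numpy as np

# a synthetic f-likelihood on a grid and a flat prior density w0(F) = 1 on [0, 1];
# both are stand-ins for the curves obtained from the monte carlo procedure
F = np.linspace(0.0, 1.0, 2001)
L = F**30 * (1.0 - F)**10            # any smooth single-peaked curve will do here
L /= L.max()
w0 = np.ones_like(F)

def size_and_credibility(lam):
    """prior content s_lambda and posterior content c_lambda of the
    bounded-likelihood interval I_lambda = {F : L(D|F) >= lambda * max L}."""
    inside = L >= lam
    s = w0[inside].sum() / w0.sum()
    c = (w0[inside] * L[inside]).sum() / (w0 * L).sum()
    return s, c

lams = np.linspace(1e-6, 1.0, 2000)
s, c = np.transpose([size_and_credibility(lam) for lam in lams])

# smallest credible interval with credibility 0.95: the bli whose c_lambda is 0.95
i95 = np.argmin(np.abs(c - 0.95))
sci = F[L >= lams[i95]]
print(f"95% sci: [{sci.min():.3f}, {sci.max():.3f}]  "
      f"(size {s[i95]:.3f}, credibility {c[i95]:.3f})")

# plausible interval: the bli at the critical lambda maximizing c_lambda - s_lambda
k = np.argmax(c - s)
plaus = F[L >= lams[k]]
print(f"plausible interval: [{plaus.min():.3f}, {plaus.max():.3f}]  "
      f"(lambda_crit = {lams[k]:.3f}, size {s[k]:.3f}, credibility {c[k]:.3f})")
```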
in marked contrast, plays no such distinguished role .the data provide evidence in favor of the in a region if we would put a higher bet on after the data are recorded than before , that is : if the credibility of is larger than its size , in view of eq .( [ eq:2 - 6 ] ) , this is equivalent to requiring that the region likelihood exceeds , the likelihood for the data .upon considering an infinitesimal vicinity of a state , we infer from eq .( [ eq:5 - 8 ] ) that we have _ evidence in favor _ of if , and we have _ evidence against _ , and thus against , if .the ratio , or any monotonic function of it , measures the strength of the evidence .it follows that the data provide strongest evidence for the maximum - likelihood estimator .further , since for all blrs , there is evidence in favor of each blr .the larger blrs , however , those for the lower likelihood thresholds set by smaller values , contain subregions against which the data give evidence .the with evidence against them are not plausible guesses for the actual quantum state .we borrow evans sterminology and call the set of all , for which the data provide evidence in favor , the _ plausible region _ the largest region with evidence in favor of all subregions .it is the scr for the _ critical value _ of , [ eq:5 - 9 ] the equal sign in eq .( [ eq:5 - 9a ] ) is that of eq .( 21 ) in ref . . in a plot of and , such as those in figs . 4 and 5 of or in figs .[ fig : size&credibility ] and [ fig : bli - sizecred ] below , we can identify as the value with the largest difference .this concept of the plausible region for qse carries over to spe , where we have the _ plausible interval _ composed of those values for which exceeds .it is the sci for the critical value , [ eq:5 - 10 ] where now and refer to the -likelihood . usually , the values of in eqs .( [ eq:5 - 9b ] ) and ( [ eq:5 - 10b ] ) are different and , therefore , the critical values are different . after measuring a sufficient number of copies of the quantum system symbolically : `` '' one can invoke the central limit theorem and approximate the -likelihood by a gaussian with a width , where is a scenario - dependent constant .the weak -dependence of and is irrelevant here and will be ignored .then , the critical value is provided that is smooth near , which property we take for granted .accordingly , the size and credibility of the plausible interval are { \displaystyle}c_{{\lambda_{\mathrm{crit}}}}^{\ } & { \displaystyle}\mathrm{erf}{\left({\left(\log\frac{1}{{\lambda_{\mathrm{crit}}}}\right)}^{\frac{1}{2}}\right ) } \end{array}\right.\ ] ] under these circumstances .when focusing on the dominating dependence , we have which conveys an important message : as more and more copies of the quantum system are measured , the plausible interval is losing in size and gaining in credibility .the size element of eq . ( [ eq:3 - 4 ] ) , the credibility element of eq .( [ eq:3 - 5 ] ) , and the -likelihood of eqs . ([ eq:4 - 0c ] ) and ( [ eq:4 - 5 ] ) , introduced in eq .( [ eq:3 - 5 ] ) , are the core ingredients needed for the construction of error intervals for .the integrals involved are usually high - dimensional and can only be computed by monte carlo ( mc ) methods .the expressions with the delta - function factors in their integrands are , however , ill - suited for a mc integration . 
therefore , we consider the antiderivatives and these are the prior and posterior contents of the interval for the reference prior with density .the denominator in the -likelihood of eq .( [ eq:4 - 5 ] ) is the derivative of with respect to , the numerator that of .let us now focus on the denominator in eq .( [ eq:4 - 5 ] ) , for the mc integration , we sample the probability space in accordance with the prior and due attention to of eq .( [ eq:2 - 2 ] ) , for which the methods described in refs . and are suitable .this gives us together with fluctuations that originate in the random sampling and the finite size of the sample ; for a sample with values of , the expected mean - square error is ^{1/2} ] . in particular , for , the eigenstate of , we have , and the fidelity is a function of only the -component of , namely }^{\frac{1}{2}} ] , ] , , ] , and we have \ ] ] for the chsh quantity in eq .( [ eq:8 - 3 ] ) . with the data provided by the tat measurement, we can evaluate for any choice of the unit vectors , , , in the plane . if we choose the vectors such that is largest for the given , then ^{\frac{1}{2}}\end{aligned}\ ] ] for the optimized chsh quantity . in terms of the tat probabilities , it is given by .\nonumber\end{aligned}\ ] ] whereas the fixed - vectors chsh quantity in eq .( [ eq:8 - 6 ] ) is a linear function of the tat probabilities , the optimal - vectors quantity is not .the inequality holds for any two - qubit state , of course .extreme examples are the bell states , the common eigenstates of and with opposite eigenvalues , for which and . the same values are also found for other states , among them all four common eigenstates of and .the simulated experiment uses the true state with , for which the tat probabilities are and the true values of and are when simulating the detection of copies , we obtained the relative frequencies if we estimate the probabilities by the relative frequencies and use these estimates in eqs .( [ eq:8 - 6 ] ) and ( [ eq:8 - 8 ] ) , the resulting estimates for and are and , respectively .this so - called `` linear inversion '' is popular , and one can supplement the estimates with error bars that refer to confidence intervals , but the approach has well - known problems . instead , we report scis for and , and for those we need the -likelihoods and .we describe in the following sec .[ sec:2qb - mcint ] how the iteration algorithm of sec .[ sec : mcint ] is implemented , and present and thus found in sec .[ sec:2qb - scis ] together with the resulting scis . rather than or , which have values in the range , we shall use and themselves as the properties to be estimated , with the necessary changes in the expressions in secs .[ sec : scpr][sec : mcint ] . for the mc integration of ,say , we sample the probability space with the hamiltonian mc algorithm described in sec . 4.3 in . in this context, we note the following implementation issue : the sample probabilities carry a weight proportional to the range of permissible values for , i.e. , parameter in ( [ eq : b-4 ] ) .it is expedient to generate an unweighted sample by resampling ( `` bootstrapping '' ) the weighted sample .the unweighted sample is then used for the mc integration .( a ) histogram of chsh values in a random sample of 500000 states in accordance with the primitive prior of eq .( [ eq:4 - 6 ] ) . for of eq .( [ eq:8 - 6 ] ) we have the full range of , whereas of eq .( [ eq:8 - 8 ] ) is positive by construction . 
( b ) corresponding histogram for a random sample drawn from the posterior distribution for the simulated data in eq .( [ eq:8 - 12 ] ) . in plot ( a ) , the black - line envelopes show the few - parameter approximations of eq .( [ eq:8 - 14 ] ) with eq .( [ eq:8 - 15 ] ) for and eqs .( [ eq : b-17])([eq : b-19 ] ) for . in plot ( b ) , the envelopes are the derivatives of the fits to and . ]the histograms in fig .[ fig : chsh - histo](a ) show the distribution of and values in such a sample , drawn from the probability space in accordance with the primitive prior of ( [ eq:4 - 6 ] ) .these prior distributions contain few values with and much fewer with . in fig .[ fig : chsh - histo](b ) , we have the histograms for a corresponding sample drawn from the posterior distribution to the simulated data of eq .( [ eq:8 - 12 ] ) . in the posterior distributions , values exceeding prominent for , but virtually non - existent for .we determine the -likelihoods and by the method described in sec .[ sec : mcint ] .the next five paragraphs deal with the details of carrying out a few rounds of the iteration . the green dots in fig . [ fig : p0-iteration](a )show the values obtained with the sample of 500000 sets of probabilities that generated the histograms in fig .[ fig : chsh - histo](a ) .we note that the mc integration is not precise enough to distinguish from for or from for and , therefore , we can not infer a reliable approximation for for these values ; the sample contains only 144 entries with and no entries with .the iteration algorithm solves this problem .consecutive functions for as obtained by mc integration .the green dots ( ) represent values for , computed with the primitive prior ( [ eq:4 - 6 ] ) .the flat regions near the end points at are a consequence of the power in eq .( [ eq:8 - 13 ] ) .the black curve through the green dots is the graph of the four - parameter approximation of eq . ([ eq:8 - 14 ] ) .the blue , cyan , and red dots are the mc values for , , and , respectively , all close to the straight line .the cyan dots are difficult to see between the blue and red dots in plot ( a ) .they are well visible in plot ( b ) , where the straight - line values are subtracted .the curves through the dots in plot ( b ) show the few - term fourier approximations analogous to eq .( [ eq:6 - 7 ] ) . ]as discussed in appendix [ sec : appa ] , we have near the boundaries of the range in fig . [fig : p0-iteration](a ) . in conjunction with the symmetry property or , this invites the four - parameter approximation where is a normalized incomplete beta function integral with and ; and are fitting parameters larger than ; and are weights with unit sum .a fit with a root mean squared error of is achieved by , , and .the graph of is the black curve through the green dots in fig .[ fig : p0-iteration](a ) ; the corresponding four - parameter approximation for is shown as the black envelope for the green histogram in fig .[ fig : chsh - histo](a ) .the subsequent approximations , , and , are shown as the blue , cyan , and red dots in fig .[ fig : p0-iteration](a ) and , after subtracting , also in fig .[ fig : p0-iteration](b ) .we use the truncated fourier series of eq . ( [ eq:6 - 7 ] ) with for fitting a smooth curve to the noisy mc values for , , and . 
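the antiderivative point of view translates directly into code: the prior content P_0(F) is the empirical cumulative distribution of the sampled f values, the posterior content P_D(F) is its likelihood-weighted counterpart, the binomial error estimate [P(1 − P)/M]^(1/2) — our reading of the garbled expression above — applies to the former, and the f-likelihood is, up to the constant L(D), the ratio of their numerical derivatives. in the sketch below the samples and weights are synthetic placeholders for the output of the prior sampler; the flattening of P_0 near the ends of the range, where the derivative ratio becomes unreliable, is exactly the problem that the iteration rounds described here are designed to cure.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic stand-ins for the monte carlo sample: values F_i = f(p_i) for p_i drawn
# from the reference prior, and the corresponding likelihood weights L(D|p_i)
M = 200_000
F_prior = rng.beta(2.0, 2.0, size=M)                   # plays the role of f(p) under the prior
weights = np.exp(-0.5 * ((F_prior - 0.7) / 0.05)**2)   # plays the role of L(D|p)

grid = np.linspace(0.0, 1.0, 201)

# antiderivatives: P0(F) = prior content of {f <= F}, PD(F) = posterior content
P0 = np.array([(F_prior <= F).mean() for F in grid])
PD = np.array([weights[F_prior <= F].sum() for F in grid]) / weights.sum()

# statistical error of the plain mc estimate of P0(F): [P(1 - P)/M]^(1/2)
err_P0 = np.sqrt(P0 * (1.0 - P0) / M)

# f-likelihood, up to the constant L(D): ratio of the numerical derivatives; the
# ratio is unreliable where P0 is nearly flat (near the ends of the range)
dP0 = np.gradient(P0, grid)
dPD = np.gradient(PD, grid)
L_of_F = np.where(dP0 > 0, dPD / np.maximum(dP0, 1e-12), 0.0)

print(f"maximum-likelihood estimate of F: {grid[np.argmax(L_of_F)]:.3f}")
print(f"largest error bar on P0: {err_P0.max():.4f}")
```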
as a consequence of ,all fourier amplitudes with odd vanish .fourier coefficients of eq .( [ eq:6 - 7 ] ) for ( blue dots in fig .[ fig : p0-iteration ] ) .all amplitudes with odd index vanish , , and are not included in the figure .the `` low - pass filter '' set at keeps only the four amplitudes , , , and in order to remove the high - frequency noise in .each of the discarded amplitudes is less than in magnitude of the largest amplitude ; the gray strip about the horizontal axis indicates this band . ] for an illustration of the method , we report in fig .[ fig : fourier ] the amplitudes of a full fourier interpolation between the blue dots ( ) in fig .[ fig : p0-iteration](b ) . upon discarding all components with and thus retaining only four nonzero amplitudes, the resulting truncated fourier series gives the smooth blue curve through the blue dots .its derivative contributes a factor to the reference prior density , in accordance with step s5 of the iteration algorithm in sec .[ sec : mcint ] . in the next roundwe treat in the same way , followed by in the third round . after each iterationround , we use the current reference prior and the likelihood for a mc integration of the posterior density and so obtain the corresponding as well as its analytical parameterization analogous to that of ; the black envelopes to the histograms in fig .[ fig : chsh - histo](b ) show the final approximations for the derivatives of and thus obtained .the ratio of their derivatives is the approximation to the -likelihood ; and likewise for , see [ sec : appb ] .figure [ fig : s - likelihood ] shows the sequence of approximations .likelihood function for and .the plot of shows the -likelihood obtained for the three subsequent iterations in fig .[ fig : p0-iteration](b ) , with a blow - up of the region near the maximum .the colors blue , cyan , and red correspond to those in fig .[ fig : p0-iteration ] . the plot of is analogous . ]size and credibility of bounded - likelihood intervals for the chsh quantities , computed from the likelihood functions in fig .[ fig : s - likelihood ] .( a ) fixed measurement of eq . ( [ eq:8 - 6 ] ) with the flat prior density ; ( b ) optimized measurement of eq .( [ eq:8 - 8 ] ) with the flat prior density .the red vertical lines mark the critical values at and . ]we note that the approximations for the -likelihood hardly change from one iteration to the next , so that we can stop after just a few rounds and proceed to the calculation of the size and the credibility of the blis .these are shown in fig .[ fig : bli - sizecred ] for the flat priors in and , respectively . the plots in figs .[ fig : chsh - histo][fig : bli - sizecred ] refer to the primitive prior of eq .( [ eq:4 - 6 ] ) as the reference prior on the probability space . the analogous plots for the jeffreys prior of eq .( [ eq:4 - 7 ] ) are quite similar . as a consequence of this similarity, there is not much of a difference in the scis obtained for the two reference priors , although the number of measured copies ( ) is not large ; see fig . 
[fig : s - oeis ] .the advantage of over is obvious : whereas virtually all -scis with non - unit credibility are inside the range , the -scis are entirely in the range for credibility up to 95% and 98% for the primitive reference prior and the jeffreys reference prior , respectively .optimal error intervals for ( a ) and ( b ) .the blue and red curves ( labeled ` a ' and ` b ' , respectively ) delineate the boundaries of the scis in the same manner as in figs .[ fig : qubitscis ] and [ fig : dspevsispe ] .the true values of and , marked by the down - pointing arrows ( ) , are inside the indicated plausible intervals for the primitive reference prior with credibility and , respectively . the primitive prior of eq .( [ eq:4 - 6 ] ) and the jeffreys prior of eq .( [ eq:4 - 7 ] ) solely serve as the reference priors on the probability space for the computation of the -likelihoods ( shown in fig . [fig : s - likelihood ] for the primitive prior ) , whereas flat priors for and are used for establishing the boundaries of the scis from these -likelihoods . ]in full analogy to the likelihood of the data for the specified probability parameters of the quantum state , which is the basic ingredient exploited by all strategies for quantum state estimation , the -likelihood plays this role when one estimates the value of a function the value of a property of the quantum state .although the definition of in terms of relies on bayesian methodology and , in particular , needs a pre - selected reference prior on the probability space , the prior density for can be chosen freely and the -likelihood is independent of this choice .as soon as the -likelihood is at hand , we have a maximum - likelihood estimator for , embedded in a family of smallest credible intervals that report the accuracy of the estimate in a meaningful way .this makes optimal use of the data .the dependence of the smallest credible regions on the prior density for is irrelevant when enough data are available .in the examples studied , `` enough data '' are obtained by measuring a few tens of copies per outcome . not only is there no need for estimating the quantum state first and finding its smallest credible regions , this is not even useful : the value of the best - guess state is not the best guess for , and the smallest credible region for the state does not carry the meaning of the smallest credible interval for .the reliable computation of the marginal -likelihood from the primary state - conditioned likelihood is indeed possible .it requires the evaluation of high - dimensional integrals with monte carlo techniques .it can easily happen that the pre - selected prior on the probability space gives very little weight to sizeable ranges of values , and then the -likelihood is ambiguous there .we overcome this problem by an iterative algorithm that replaces the inadequate prior by suitable ones , and so yields a -likelihood that is reliable for all values of .the two - qubit example , in which we estimate chsh quantities , illustrates these matters . 
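for reference, the two chsh quantities whose error intervals are compared above can be evaluated for any given two-qubit state by a short computation. in the sketch below the state and the fixed measurement directions are illustrative choices (neither the paper's true state nor its fixed-vectors settings are reproduced here), and the optimized value is computed from the singular values of the correlation matrix restricted to the x-z plane — the standard expression for settings confined to that plane, which we assume agrees with the paper's optimized-vectors quantity.

```python
import numpy as np

# an illustrative pure two-qubit state (our choice of parameter):
# cos(alpha)|01> + sin(alpha)|10>
alpha = 0.1 * np.pi
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = np.cos(alpha), np.sin(alpha)
rho = np.outer(psi, psi.conj())

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'z': sz}

# correlation matrix restricted to the x-z plane, the plane of the trine vectors
T = np.array([[np.real(np.trace(rho @ np.kron(pauli[i], pauli[j]))) for j in 'xz']
              for i in 'xz'])

# fixed-vectors chsh quantity for one standard choice of settings in the x-z plane:
# a = x, a' = z, b = (x-z)/sqrt(2), b' = (x+z)/sqrt(2)  (an illustration only)
a, ap = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b, bp = np.array([1.0, -1.0]) / np.sqrt(2), np.array([1.0, 1.0]) / np.sqrt(2)
s_fixed = a @ T @ b + a @ T @ bp + ap @ T @ b - ap @ T @ bp

# optimized chsh quantity: 2*sqrt(t1^2 + t2^2) with t1, t2 the singular values of T,
# the standard expression for measurement directions confined to this plane
t1, t2 = np.linalg.svd(T, compute_uv=False)
s_opt = 2.0 * np.sqrt(t1**2 + t2**2)

print(f"s_fixed = {s_fixed:.3f}, s_opt = {s_opt:.3f}, bound 2*sqrt(2) = {2*np.sqrt(2):.3f}")
```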
from a general point of view, one could regard values of functions of the quantum state as parameters of the state .the term _ quantum parameter estimation _ is , however , traditionally used for the estimation of parameters of the experimental apparatus , such as the emission rate of the source , efficiencies of detectors , or the phase of an interferometer loop .a forthcoming paper will deal with optimal error regions for quantum parameter estimation in this traditional sense smallest credible regions , that is . in this context , it is necessary to account , in the proper way , for the quantum systems that are emitted by the source but escape detection .there are also situations , in which the quantum state and parameters of the apparatus are estimated from the same data , often referred to as _ self - calibrating experiments _ .various aspects of the combined optimal error regions for the parameters of both kinds are discussed in and are the subject matter of ongoing research .we thank david nott and michael evans for stimulating discussions .this work is funded by the singapore ministry of education ( partly through the academic research fund tier 3 moe2012-t3 - 1 - 009 ) and the national research foundation of singapore .h. k. n is also funded by a yale - nus college start - up grant .it is common practice to state the result of a measurement of a physical quantity in terms of a _ confidence interval_. usually , two standard deviations on either side of the average value define a 95% confidence interval for the observed data .this is routinely interpreted as assurance that the actual value ( among all thinkable values ) is in this range with 95% probability . although this interpretation is temptingly suggested by the terminology , it is incorrect one must not have such confidence in a confidence interval .rather , the situation is this : after defining a full set of confidence intervals for quantity , one interval for each thinkable data , the confidence level of the set is its so - called _ coverage _ , which is the fraction of intervals that cover the actual value , minimized over all possible values , whereby each interval is weighted by the probability of observing the data associated with it . upon denoting the confidence interval for data by , the coverage of the set is thus calculated in accordance with we emphasize that the coverage is a property of the set , not of any individual confidence interval ; the whole set is needed for associating a level of confidence with the intervals that compose the set .a set of confidence intervals with coverage has this meaning : if we repeat the experiment very often and find the respective interval for each data obtained , then 95% of these intervals will contain the actual value of .confidence intervals are a concept of frequentism where the notion of probability refers to asymptotic relative frequencies the confidence intervals are random while the actual value of is whatever it is ( yet unknown to us ) and 95% of the confidence intervals contain it . 
herewe do statistics on the intervals , not on the value of .it is incorrect to infer that , for each 95% confidence interval , there are odds of 19:1 in favor of containing the actual value of , an individual confidence interval conveys no such information .it is possible , as demonstrated by the example that follows below , that the confidence interval associated with the observed data contains the actual value of certainly , or certainly not , and that the data tell us about this .this can even happen for each confidence interval in a set determined by standard optimality criteria ( see also example 3.4.3 in ) .the example just alluded to is a scenario invented by jaynes ( see also ) .we paraphrase it as follows : a certain process runs perfectly for duration , after which failures occur at a rate , so that the probability of observing the first failure between time and is we can not measure directly ; instead we record first - failure times when restarting the process times .question : what do the data tell us about ?one standard frequentist approach begins with noting that the expected first - failure time is since the average of the observed failure times , is an estimate for , we are invited to use as the point estimator for . in many repetitions of the experiment ,then , the probability of obtaining the estimator between and is with ^{n-1}}{(n-1)!}{\mathrm{e}^{\mbox{\footnotesize}}}\eta(rt+1)\,.\ ] ] accordingly , the expected value of is , which says that the estimator of eq .( [ eq : cc-5 ] ) is unbiased .it is also consistent ( the more important property ) since next , we consider the set of intervals specified by and establish its coverage , with and . of the pairs that give a coverage of , one would usually not use the pairs with or but rather opt for the pair that gives the shortest intervals the frequentist analog of the smallest credible intervals .these shortest intervals are obtained by the restrictions \mbox{with}&\quad & y_2^{n-1}{\mathrm{e}^{\mbox{\footnotesize } } } = y_1^{n-1}{\mathrm{e}^{\mbox{\footnotesize}}}\end{aligned}\ ] ] on and in eq .( [ eq : cc-11 ] ) . when , we have and , and the shortest confidence intervals with coverage are given by there is , for instance , the interval associated with the data , , and , most certainly , the actual value of is _ not inside _ this 95% confidence interval since must be less than the earliest observed failure time , here : .by contrast , the 95% confidence interval for the data , , and , namely contains all values between and , so that the actual value is _ certainly inside . _these examples illustrate well what is stated above : the interpretation `` the actual value is inside this 95% confidence interval with 95% probability '' is incorrect .jaynes s scenario is particularly instructive because the data tell us that the confidence interval of eq .( [ eq : cc-14 ] ) is completely off target and that of eq .( [ eq : cc-14 ] ) is equally useless .clearly , these 95% confidence intervals do not answer the question asked above : what do the data tell us about ?this is not the full story , however .the practicing frequentist can use alternative strategies for constructing sets of shortest confidence intervals .there is , for example , another standard method that takes the maximum - likelihood point estimator as its starting point .the point likelihood for observing first failures at times is where is the precision of the observations and the maximal value is obtained for the maximum - likelihood estimator . 
in this case , of eqs .( [ eq : cc-6 ] ) and ( [ eq : cc-7 ] ) is replaced by which , not accidentally , is strikingly similar to the likelihood in eq .( [ eq : cc-16 ] ) but has a completely different meaning . since eq .( [ eq : cc-9 ] ) holds , this estimator is consistent , and it has a bias , that could be removed .the resulting shortest confidence intervals are specified by where is the desired coverage of the set thus defined . here, we obtain the 95% confidence intervals for the data that yielded the intervals in eqs .( [ eq : cc-14 ] ) and ( [ eq : cc-14 ] ) . while this suggests , and rather strongly so , that the confidence intervals of this second kind are more reasonable and more useful than the previous ones , it confronts us with the need for a criterion by which we select the preferable set of confidence intervals among equally legitimate sets .chernoff offers pertinent advice for that : `` start out as a bayesian thinking about it , and you ll get the right answer. then you can justify it whichever way you like . ''so , let us now find the corresponding scis of the bayesian approach , where probability quantifies our belief in colloquial terms : which betting odds would we accept ? for the point likelihood of eq .( [ eq : cc-16 ] ) , the bli is specified by jaynes recommends a flat prior in such applications unless we have specific prior information about , that is but , without a restriction on the permissible values , that would be an improper prior here .instead we use for the prior element and enforce `` flatness '' by taking the limit of eventually .then , the likelihood for the observed data is and the credibility of is [l]{-nr{t_{\mathrm{min}}}}&\end{aligned}\ ] ] after taking the limit .we so arrive at for the sci with pre - chosen credibility . for example , the scis for that corresponds to the confidence intervals in eqs .( [ eq : cc-14 ] ) and ( [ eq : cc-14 ] ) , and also to the confidence intervals in eq .( [ eq : cc-81 ] ) , are these really _ are _ useful answers to the question of what do the data tell us about : the actual value is in the respective range with 95% probability . regarding the choice between the set of confidence intervals of the first and the second kind associated with the point estimators and , respectively chernoff s strategy clearly favors the second kind .except for the possibility of getting a negative value for the lower bound , the confidence intervals of eq .( [ eq : cc-26 ] ) are the blis of eq .( [ eq : cc-18 ] ) for , and they are virtually identical with the scis of eq .( [ eq : cc-21 ] ) usually the term is negligibly small there . yet ,these confidence intervals retain their frequentist meaning .such a coincidence of confidence intervals and credible intervals is also possible under other circumstances , and this observation led jaynes to the verdict that `` confidence intervals are satisfactory as inferences _ only _ in those special cases where they happen to agree with bayesian intervals after all '' ( jaynes s emphasis , see p. 674 in ) . that is: one can get away with misinterpreting the confidence intervals as credible intervals for an unspecified prior . 
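the comparison can be made fully explicit in a short computation. the sketch below solves for the equal-density endpoints y1 < y2 of the first-kind set, verifies its coverage by simulation, and then evaluates, for one hypothetical data set (the paper's example numbers are elided), the first-kind interval, the second-kind interval built around t_min, and the flat-prior smallest credible interval. the closed form used for the latter two, [t_min + ln(1 − c)/(nr), t_min], is our reconstruction of the elided equations (the small bias-correction term mentioned in the text is omitted); the two coincide, which is the very coincidence discussed above.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import brentq

n, r, c = 3, 1.0, 0.95             # number of restarts, known failure rate, target level

# first-kind set: endpoints y1 < y2 with equal gamma(n) density and content c
def y2_of_y1(y1):
    h = lambda y: (n - 1) * np.log(y) - y
    return brentq(lambda y: h(y) - h(y1), n - 1.0, 60.0)

y1 = brentq(lambda y: gamma.cdf(y2_of_y1(y), n) - gamma.cdf(y, n) - c, 1e-9, n - 1.0)
y2 = y2_of_y1(y1)

# coverage check over many repetitions: close to c by construction, even though an
# individual interval can be certainly right or certainly wrong for the data at hand
rng = np.random.default_rng(8)
theta = 10.0
tbar = (theta + rng.exponential(1.0 / r, size=(200_000, n))).mean(axis=1)
covered = (tbar - y2 / (n * r) <= theta) & (theta <= tbar - y1 / (n * r))
print(f"simulated coverage of the first-kind set: {covered.mean():.4f}")

# one hypothetical data set; the numbers of the paper's example are elided
t = np.array([10.0, 14.0, 15.0])
t_min, t_bar = t.min(), t.mean()

ci1 = (t_bar - y2 / (n * r), t_bar - y1 / (n * r))      # first kind
ci2 = (t_min + np.log(1 - c) / (n * r), t_min)          # second kind: t_min - theta ~ Exp(nr)
sci = (t_min + np.log(1 - c) / (n * r), t_min)          # flat-prior sci: posterior ~ exp(nr*theta)

print(f"first-kind {c:.0%} ci : [{ci1[0]:.2f}, {ci1[1]:.2f}]   (t_min = {t_min})")
print(f"second-kind {c:.0%} ci: [{ci2[0]:.2f}, {ci2[1]:.2f}]")
print(f"flat-prior {c:.0%} sci: [{sci[0]:.2f}, {sci[1]:.2f}]   (coincides with the second kind)")
```

for these illustrative numbers the first-kind interval lies entirely above t_min even though the actual value cannot exceed t_min, while the second-kind interval and the smallest credible interval agree, as claimed in the text.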
in the context of the example we are using, the coincidence occurs as a consequence of two ingredients : ( i ) we are guided by the bayesian reasoning when choosing the set of confidence intervals ; ( ii ) we are employing the flat prior when determining the scis .the coincidence does not happen when ( i ) another strategy is used for the construction of the set of confidence intervals , or ( ii ) for another prior , as we would use it if we had genuine prior information about ; the coincidence could still occur when is large but hardly for . in way of summary , the fundamental difference between the confidence intervals of eqs .( [ eq : cc-14 ] ) and ( [ eq : cc-14 ] ) , or those of eq .( [ eq : cc-81 ] ) , and the credible intervals of eq .( [ eq : cc-22 ] ) , which refer to the same data , is this : we judge the quality (= confidence level = coverage ) of the confidence interval by the company it keeps (= the full set ) , whereas the credible interval is judged on its own merits (= credibility ) .it is worth repeating here that the two types of intervals tell us about very different things : confidence intervals are about statistics on the data ; credible intervals are about statistics on the quantity of interest .if one wishes , as we do , to draw reliable conclusions from the data of a single run , one should use the bayesian credible interval and not the frequentist confidence interval .what about many runs ?if we take , say , one hundred measurements of three first - failure times , we can find the one hundred shortest 95% confidence intervals of either kind and base our conclusions on the properties of this set .alternatively , we can combine the data and regard them as three hundred first - failure times of a single run and so arrive at a sci with a size that is one - hundredth of each sci for three first - failure times .misconceptions such as `` confidence regions have a natural bayesian interpretation as regions which are credible for any prior '' , as widespread as they may be , arise when the fundamental difference in meaning between confidence intervals and credible intervals is not appreciated .while , obviously , one can compute the credibility of any region for any prior , there is no point in this `` natural bayesian interpretation '' for a set of confidence regions ; the credibility thus found for a particular confidence region has no universal relation to the coverage of the set .it is much more sensible to determine the scrs for the data actually observed . on the other hand, it can be very useful to pay attention to the corresponding credible regions when constructing a set of confidence regions . 
in the context of qse , this chernoff - type strategy is employed by christandl and renner who take a set of credible regions and enlarge all of them to produce a set of confidence regions ; see also .another instance where a frequentist approach benefits from bayesian methods is the marginalization of nuisance parameters in where a mc integration employs a flat prior , apparently chosen because it is easy to implement .the histograms thus produced they report differences of in eq .( [ eq:6 - 2 ] ) between neighboring values , just like the binned probabilities in fig .[ fig : chsh - histo ] depend on the prior , and so do the confidence intervals inferred from the histograms .there is also a rather common misconception about the subjectivity or objectivity of the two methods .the frequentist confidence regions are regarded as objective , in contrast to the subjective bayesian credible regions .the subjective nature of the credible regions originates in the necessity of a prior , privately chosen by the scientist who evaluates the data and properly accounts for her prior knowledge .no prior is needed for the confidence regions , they are completely determined by the data or so it seems .in fact , the choice between different sets of confidence regions is equally private and subjective ; in the example above , it is the choice between the confidence intervals of eqs .( [ eq : cc-10])([eq : cc-12 ] ) , those of eq .( [ eq : cc-26 ] ) , and yet other legitimate constructions which , perhaps , pay attention to prior knowledge . clearly , either approach has unavoidable subjective ingredients , and this requires that we state , completely and precisely , how the data are processed ; see sec .1.5.2 in for further pertinent remarks .in this appendix , we consider the sizes of the regions with and .it is our objective to justify the power law stated in eq .( [ eq:8 - 13 ] ) and so motivate the approximation in eq . ([ eq:8 - 14 ] ) .we denote the kets of the maximally entangled states with by , that is where , for example , has for the first qubit and for the second . since we have [ recall eq .( [ eq:8 - 5 ] ) ] }^{\ } { { \left|{\pm}\right\rangle } } = \pi_j^{(1)}\otimes\pi^{(2)}_{j'}{{\left|{\pm}\right\rangle}}\nonumber\\&= & \frac{1}{9}(1+{\vecfont{t}}_j\cdot{\boldsymbol{\sigma}})\otimes ( 1-{\vecfont{t}}_{j'}\cdot{\boldsymbol{\sigma}}){{\left|{\pm}\right\rangle}}\nonumber\\&= & \frac{1}{9}(1+{\vecfont{t}}_j\cdot{\boldsymbol{\sigma } } ) ( 1\pm{\vecfont{t}}_{j'}\cdot{\boldsymbol{\sigma}})\otimes{{\dyadfont{1}}}{{\left|{\pm}\right\rangle } } \nonumber\\&= & \frac{1}{9}\bigl[(1\pm{\vecfont{t}}_j\cdot{\vecfont{t}}_{j'}){{\dyadfont{1}}}\nonumber\\ & & \phantom{\frac{1}{9}\bigl [ } + ( { \vecfont{t}}_j\pm{\vecfont{t}}_{j'}\pm{\mathrm{i}}{\vecfont{t}}_j{\boldsymbol{\times}}{\vecfont{t}}_{j'})\cdot { \boldsymbol{\sigma}}\bigr ] \otimes{{\dyadfont{1}}}{{\left|{\pm}\right\rangle}}\,,\qquad\end{aligned}\ ] ] where are the three unit vectors of the trine .states in an -vicinity of are of the form where is any two - qubit operator and is a traceless rank-1 operator with the properties the tat probabilities are \nonumber\\ & & + o(\epsilon^2)\end{aligned}\ ] ] with the real vectors and given by owing to the trine geometry , the and components of and the component of matter , but the other three components do not . in the eight - dimensional probability space ,then , we have increments in three directions only , and increments in the other five directions . 
for the primitive prior, therefore , the size of the -vicinity is .the sum of probabilities in eq .( [ eq:8 - 6 ] ) is }+p_{[22]}+p_{[33]}=\frac{1}{3}(1\pm1)+o(\epsilon^2)\,,\ ] ] so that }$ ] or . accordingly , we infer that and which imply eq . ( [ eq:8 - 13 ] ) .in this appendix , we consider the sizes of the regions with and .we wish to establish the analogs of eqs .( [ eq:8 - 13 ] ) and ( [ eq:8 - 14 ] ) . in the context of , it is expedient to switch from the nine tat probabilities to the expectation values of the eight single - qubit and two - qubit observables that are linearly related to the probabilities , \longleftarrow\hspace*{-0.5em}\frac{\quad}{}\hspace*{-0.5em}\longrightarrow \\[-1.5ex]\mbox{\footnotesize{}relation } \end{array } { \left[\begin{array}{@{}c|cc@ { } } & { { \left\langle{{{\dyadfont{1}}}\otimes\sigma_x}\right\rangle } } & { { \left\langle{{{\dyadfont{1}}}\otimes\sigma_z}\right\rangle } } \\ \hline { { \left\langle{\sigma_x\otimes{{\dyadfont{1}}}}\right\rangle } } & { { \left\langle{\sigma_x\otimes\sigma_x}\right\rangle } } & { { \left\langle{\sigma_x\otimes\sigma_z}\right\rangle } } \\ { { \left\langle{\sigma_z\otimes{{\dyadfont{1}}}}\right\rangle } } & { { \left\langle{\sigma_z\otimes\sigma_x}\right\rangle } } & { { \left\langle{\sigma_z\otimes\sigma_z}\right\rangle } } \end{array}\right]}\equiv { \left[\begin{array}{@{}c|cc@ { } } & x_3 & x_4 \\\hline x_1 & y_1 & y_2 \\x_2 & y_3 & y_4 \end{array}\right]}.\end{aligned}\ ] ] the jacobian matrix associated with the linear relation does not depend on the probabilities and , therefore , we have for the primitive prior , where and , and equals a normalization factor for permissible values of and , whereas for unphysical values . thereby , the permissible values of and are those for which one can find in the range such that while the implied explicit conditions on and are rather involved , the special cases of interest here namely and , respectively are quite transparent .we have and the sum of the squares of these characteristic values is ; it determines the value of , we obtain for and with and .these values make up a four - dimensional volume but , since there is no volume in the four - dimensional space , the set of probabilities with has no eight - dimensional volume it has no size .the generic state in this set has and full rank .a finite , if small , four - dimensional ball is then available for the values .all values on the three - dimensional surface of the ball have the same value of , equal to the diameter of the ball .the volume of the ball is proportional to and , therefore , we have we reach for all maximally entangled states with . then , and both characteristic values of the matrix in eq .( [ eq : b-6 ] ) are maximal . 
more generally , when , the permissible values are with , , , where and are the characteristic values .the determinant can be positive or negative ; we avoid double coverage by restricting to positive values while letting and range over a full period .the jacobian factor in vanishes when and .therefore , there is no nonzero four - dimensional volume in the space for .more specifically , the -space volume for is } \nonumber\\&=&\frac{\sqrt{8}\,\pi^2}{3 } { \left(\sqrt{8}-{\theta_{\mathrm{opt}}}\right)}^3 + o{\left({\left(\sqrt{8}-{\theta_{\mathrm{opt}}}\right)}^4\right)}\nonumber\\ & & \mbox{for}\quad{\theta_{\mathrm{opt}}}\lesssim\sqrt{8}\,.\end{aligned}\ ] ] with respect to the corresponding -space volume , we note that the maximally entangled states with are equivalent because local unitary transformations turn them into each other .it is , therefore , sufficient to consider an -vicinity of one such state , for which we take that with and .this is of eq .( [ eq : a-1 ] ) , with in eq .( [ eq : a-5 ] ) . as a consequence of eq .( [ eq : a-2 ] ) , we have so that the -space volume is proportional to . since we know from ( [ eq : a-11 ] ) that , it follows that the -space volume is proportional to .together with the -space volume in ( [ eq : b-13 ] ) , we so find that just like eq .( [ eq:8 - 13 ] ) suggests the approximation eq .( [ eq:8 - 14 ] ) for , the power laws for near and in ( [ eq : b-10 ] ) and ( [ eq : b-16 ] ) , respectively , invite the approximation with and one of the powers is equal to and one of the is equal to , and the other ones are larger . for the sample of 500000 sets of probabilities that generated the red histograms in fig .[ fig : chsh - histo](a ) , a fit with a mean squared error of is achieved by a five - term approximation with these parameter values : there are 12 fitting parameters here .the black curve to that histogram shows the corresponding approximation for .the various prior densities introduced in secs . [ sec : stage][sec : prior ] are there is also the probability - space factor in eq . ([ eq:2 - 2 ] ) that accounts for the constraints .if we choose to our liking , then and are determined by eqs .( [ eq:3 - 4 ] ) and ( [ eq:4 - 0a ] ) , respectively . alternatively , we can freely choose and either or , and then obtain from eq .( [ eq:4 - 3 ] ) or ( [ eq:4 - 0d ] ) . for given , the -likelihood does not depend on .00 c. s. bos , _ a comparison of marginal likelihood computation methods _ , pp .111116 in _ compstat : proceedings in computational statistics _ ,( heidelberg : physica - verlag hd , 2002 ) , edited by w. hrdle and b. rnz . e. t. jaynes , _ confidence intervals vs bayesian intervals _ , pp .175267 in _ foundations of probability theory , statistical inference , and statistical theories of science _ , vol .ii ( reidel publishing company , dordrecht , 1976 ) , edited by w. l. harper and c. a. hooker . for the properties of two - qubit states and their classification ,englert and n. metwally , _kinematics of qubit pairs _ , chapter 2 in _ mathematics of quantum computation _( boca raton : chapman and hall , 2002 ) , edited by g. chen and r. k. brylinski .
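as an aside on the fitting procedure mentioned in the appendix above , the short sketch below shows one way a few - term power - law approximation to a binned density could be fitted by least squares . the functional form sum_k a_k x^{p_k} , the synthetic data and the two - term initial guess are assumptions for illustration only ; they are not the paper's five - term fit or its 12 parameter values .

```python
# an illustrative sketch (not the paper's fit): approximate a binned density by a
# few power-law terms, f(x) ~ sum_k a_k * x**p_k, fitting amplitudes a_k and
# exponents p_k by least squares.
import numpy as np
from scipy.optimize import curve_fit

def multi_power(x, *params):
    # params = (a_1, p_1, a_2, p_2, ...)
    terms = np.asarray(params).reshape(-1, 2)
    return sum(a * np.power(x, p) for a, p in terms)

rng = np.random.default_rng(0)
x = np.linspace(0.05, 1.0, 60)
truth = 0.7 * x**1.5 + 0.3 * x**4.0
y = truth * (1.0 + 0.02 * rng.standard_normal(x.size))   # noisy synthetic "histogram"

p0 = [0.5, 1.0, 0.5, 3.0]                                # two-term initial guess
popt, _ = curve_fit(multi_power, x, y, p0=p0, maxfev=20000)
mse = np.mean((multi_power(x, *popt) - y) ** 2)
print("fitted (a_k, p_k):", popt.reshape(-1, 2), " mse: %.2e" % mse)
```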
|
quantum state estimation aims at determining the quantum state from observed data . estimating the full state can require considerable effort , but one is often interested only in a few properties of the state , such as the fidelity with a target state or the degree of correlation for a specified bipartite structure . rather than first estimating the state , one can , and should , estimate those quantities of interest directly from the data . we propose the use of optimal error intervals as a meaningful way of stating the accuracy of the estimated property values . optimal error intervals are analogs of the optimal error regions for state estimation [ new j. phys . * 15 * , 123026 ( 2013 ) ] . they are optimal in two ways : they have the largest likelihood for the observed data and the pre - chosen size , and they are the smallest for the pre - chosen probability of containing the true value . as in the state situation , such optimal error intervals admit a simple description in terms of the marginal likelihood of the data for the properties of interest . here , we present the concept and construction of optimal error intervals , report on an iterative algorithm for the reliable computation of the marginal likelihood ( a quantity that is difficult to calculate reliably ) , explain how plausible intervals ( a notion of evidence provided by the data ) are related to our optimal error intervals , and illustrate our methods with single - qubit and two - qubit examples .
|
_ _ remark 1.__here we show that any upper bound on the classical fisher information which is _ independent _ of the chosen family of measurement operators is an upper bound on the qfi too . _proof.__to prove this statement , we note that the classical fisher information not only depends on , but also depends on the chosen family of measurement operators [ eq . ] . herewe indicate this dependence explicitly through the notation .thus , for any in which does not depend on .moreover , since by definition it is obvious from eq . that + replacing with , one reaches eq .( 2 ) of the main text , i.e. , the definition of the nsld as an operator satisfying the relation and inserting from eq .( [ sld-1 ] ) and ] and ] , thus eq .( [ qfi-2 ] ) yields ^ 2\right] ] . + _ proof . _ since =\mathrm{tr}[\pi_{\mathrm{x}'}\varrho ( \varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho})]$ ] .following eq .( [ fi-2 ] ) and using the cauchy - schwarz inequality , it is found that + } } \sqrt{\varrho }\left(\varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho}\right)\sqrt{\pi_{\mathrm{x}'}}\big]\big|^2\nonumber\\ & & \leqslant \frac{1}{4}\int_{\mathpzc{d}_{\mathrm{x } ' } } d\mathrm{x}'\mathrm{tr}[\varrho \left(\varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho}\right)\pi_{\mathrm{x}'}\left(\varrho \widetilde{l}^{\dag}_{\varrho}\varrho^{-1}+\widetilde{l}_{\varrho}\right)]\nonumber\\ & & = \frac{1}{4}\mathrm{tr}\left[\varrho \left(\varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho}\right)\left(\varrho \widetilde{l}^{\dag}_{\varrho}\varrho^{-1}+\widetilde{l}_{\varrho}\right)\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[\widetilde{l}_{\varrho}\varrho^2 \widetilde{l}^{\dag}_{\varrho}\varrho^{-1 } + \widetilde{l}_{\varrho}\varrho \widetilde{l}_{\varrho}+\widetilde{l}^{\dag}_{\varrho}\varrho \widetilde{l}^{\dag}_{\varrho}+\widetilde{l}_{\varrho}\varrho \widetilde{l}^{\dag}_{\varrho}\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[(\varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho})\varrho \widetilde{l}^{\dag}_{\varrho}+\widetilde{l}_{\varrho}\left(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho}\right)\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[\varrho \widetilde{l}^{\dag}_{\varrho}\varrho^{-1}(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho})+\widetilde{l}_{\varrho}\left(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho}\right)\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[\left(\varrho \widetilde{l}^{\dag}_{\varrho}\varrho^{-1}+\widetilde{l}_{\varrho}\right)\left(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho}\right)\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[\left(\varrho \widetilde{l}^{\dag}_{\varrho}+\widetilde{l}_{\varrho}\varrho \right)\varrho^{-1}\left(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho}\right)\right]\nonumber\\ & & = \frac{1}4\mathrm{tr}\left[\varrho^{-1}\left(\widetilde{l}_{\varrho}\varrho + \varrho \widetilde{l}^{\dag}_{\varrho}\right)^2\right]\nonumber\\ & & = \mathrm{tr}\left[\varrho^{-1}(\partial_{\mathrm{x}}\varrho ) ^2\right].\end{aligned}\ ] ] + the bound is saturated if is chosen such that }=\frac{\sqrt{\varrho } ( \varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho})\sqrt{\pi_{\mathrm{x}'}}}{\mathrm{tr}[\varrho ( \varrho^{-1}\widetilde{l}_{\varrho}\varrho + \widetilde{l}^{\dag}_{\varrho } ) 
\pi_{\mathrm{x}'}]},\nonumber\\\end{aligned}\ ] ] which in turn is satisfied by choosing the eigenvectors of as . however , since is not necessarily hermitian , its eigenvectors do not provide a complete set of measurement operators . as a result , the bound is not necessarily achievable .
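because the defining relations in the extracted passage above are incomplete , it may help to restate the standard symmetric logarithmic derivative ( sld ) and the non - hermitian extension ( nsld ) that the surviving fragments are consistent with ; the conventions below are a reconstruction and may differ in detail from the paper's :

\begin{align}
\partial_{\mathrm{x}}\varrho &= \tfrac{1}{2}\left(L_{\mathrm{x}}\varrho+\varrho L_{\mathrm{x}}\right),
\qquad F_{Q}(\varrho)=\mathrm{tr}\!\left[\varrho L_{\mathrm{x}}^{2}\right]
&&\text{(sld , hermitian),}\nonumber\\
\partial_{\mathrm{x}}\varrho &= \tfrac{1}{2}\left(\widetilde{L}_{\varrho}\varrho+\varrho\widetilde{L}_{\varrho}^{\dagger}\right)
&&\text{(nsld , not necessarily hermitian),}\nonumber
\end{align}

with these conventions , the chain of inequalities displayed above bounds the classical fisher information as $F(\mathrm{x})\leqslant\mathrm{tr}\!\left[\varrho^{-1}(\partial_{\mathrm{x}}\varrho)^{2}\right]$ , independently of the chosen measurement .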
|
we prove an extended convexity for quantum fisher information of a mixed state with a given convex decomposition . this convexity introduces a bound which has two parts : i. _ classical _ part associated to the fisher information of the probability distribution of the states contributing to the decomposition , and ii . _ quantum _ part given by the average quantum fisher information of the states in this decomposition . next we use a non - hermitian extension of symmetric logarithmic derivative in order to obtain another upper bound on quantum fisher information , which enables to derive a closed form for a fairly general class of system dynamics given by a dynamical semigroup . we combine our two upper bounds together in a general ( open system ) metrology framework where the dynamics is described by a quantum channel , and derive the ultimate precision limit for quantum metrology . we illustrate our results and their applications through two examples , where we also demonstrate that how the extended convexity allows to track transition between quantum and classical behaviors for an estimation precision . _ _ introduction.__advent of ultra - precise quantum technologies in recent years has spurred the need for devising metrological protocols with the highest sensitivity allowed by laws of physics . quantum metrology investigates fundamental limits on the estimation error through the quantum crmer - rao bound . without using quantum resources , the very central limit theorem indicates that parameter estimation error is bounded by the shot - noise limit " ; however , employing quantum resources , such as quantum correlations between probes , allows for scaling of the error to beat the shot - noise limit and reach the more favorable heisenberg ( or sub - shot - noise ) limit " and perhaps beyond . this feature of quantum metrology has been realized experimentally . in realistic systems , interaction with environment is inevitable . since quantum procedures are susceptible to noise , formulation of a framework for noisy / dissipative quantum metrology is required . recently , some attempts have been made toward proposing systematic analysis of open - system quantum metrology , where some purification for density matrices has been used . some other methods based on different approaches such as using right / left logarithmic derivative and the channel extension idea have also been proposed . exact calculation of quantum fisher information ( qfi ) in general is difficult since it needs diagonalization of the system density matrix , which appears through the key quantity of symmetric logarithmic derivative ( sld ) . besides , it is also not straightforward ( except when the dynamics is unitary ) to recognize from the exact form of qfi the role of underlying physical properties of the system of interest in when scaling of the estimation error behaves classically or quantum mechanically . given these difficulties , resorting to upper bounds on qfi can be beneficial both theoretically and practically . in deriving such bounds , different properties of qfi may prove useful . convexity is an appealing property , which unfortunately does not hold for qfi in general . notwithstanding , here we derive an _ extended _ convexity relation for qfi , which obviously gives rise to an upper bound on qfi . we remind that every quantum state can be written ( in infinite ways ) as a convex decomposition of states which prepare the very state when mixed according to a given probability distribution . 
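for orientation , the quantum cramér - rao bound and the two scaling regimes invoked in the introduction read , in their standard form ( restated here because the displayed equations did not survive extraction ) :

\begin{equation}
\delta\mathrm{x}\,\geqslant\,\frac{1}{\sqrt{M\,F_{Q}[\varrho(\mathrm{x})]}}\,,
\qquad
\delta\mathrm{x}\Big|_{\text{shot noise}}\sim\frac{1}{\sqrt{N}}\,,
\qquad
\delta\mathrm{x}\Big|_{\text{heisenberg}}\sim\frac{1}{N}\,,
\end{equation}

where $M$ is the number of independent repetitions of the experiment , $N$ is the number of probes , and $F_{Q}$ is the quantum fisher information maximized over all admissible measurements .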
having such a decomposition , we show that the upper bound contains classical " and quantum " parts . the classical part is the fisher information associated to the ( classical ) probability distribution of the mixture , and the quantum part is related to the weighted average of the qfi of the constituting states of the mixture . this result is completely general and always holds unlike some earlier results in the literature . additionally , we show that how having such a classical - quantum picture for qfi enables us to find when a quantum metrology scenario exhibits either of classical ( shot - noise ) or quantum ( heisenberg ) regimes . we also employ an extension of sld which is non - hermitian ( hereafter , nsld ) , and define an extended qfi which is shown to upperbound qfi . this nsld has this extra utility that for dynamics with a semigroup property * ? ? ? * , its associated ( extended ) qfi is directly related to the quantum jump operators of the dynamics . in addition , we show that this extended qfi ( irrespective of the underlying dynamics ) for a density matrix is the same as the uhlmann metric , obtained earlier in the context of geometry of a state . this endows a geometric picture to nsld . our nsld concept also allows to supplement the extended convexity property for the case of a general open quantum dynamics given by a quantum channel . interestingly , by putting the concepts of the extended convexity and nsld together , we recover the exact qfi for an open system , whence the ultimate precision for estimation of a parameter of an open system . we illustrate utility of our results through two important examples . _ _ extended convexity of qfi.__we first briefly remind fisher information and its role in metrology . in estimation of a parameter of a classical system , the estimation error is lowerbounded by the inverse square of the classical fisher information where is the probability distribution of obtaining value ( in a measurement ) given the exact value of the unknown parameter is , and is the domain of all admissible . in quantum systems , measurements are described by a set of positive operators which has the completeness property . if ( hereafter sometimes or , to lighten the notation ) denotes the state of system at time , we have ] for qfi , which reproduces the right - hand side of eq . ( [ ave - cfi - qfi])named for future reference as replacing . several remarks are in order here . i. from eq . ( [ ave - cfi - qfi ] ) it should be evident that the very classical term obstructs the convexity to hold in general whence extended " convexity . this term , however , vanishes when the mixing probabilities do not depend on . such a special case occurs when ( assumed independent of ) evolves unitarily under an -dependent hamiltonian , e.g. , . here , one can see that where is the spectral decomposition of , and [ with ] , where =\sum_a p_a(\mathrm{x } ) a_{a } \circ a_a^{\dag} ] . minimizing length of a dynamical curve the geodesic " leads to a parallel transport condition . " requesting the same geodesic condition here as well implies that nsld reduces to ( the hermitian ) sld ( [ sld ] ) . the utility of the extended qfi and nsld will be illustrated below where we use them to enhance the implications of the extended convexity property for general quantum dynamical systems . 
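the extended convexity bound of eq . ( [ ave - cfi - qfi ] ) , stated in words above , amounts to $F_{Q}\bigl(\sum_{a}p_{a}\varrho_{a}\bigr)\leqslant\sum_{a}(\partial_{\mathrm{x}}p_{a})^{2}/p_{a}+\sum_{a}p_{a}F_{Q}(\varrho_{a})$ . the sketch below checks this inequality numerically on an assumed single - qubit example , computing each qfi from the standard spectral formula ; the example states , mixing probability and function names are ours , not the paper's .

```python
# a numerical sketch (assumed example): compute the qfi of a one-qubit state by
#   F_Q = sum_{i,j} 2 |<i| d(rho)/dx |j>|^2 / (lam_i + lam_j)   (lam_i + lam_j > 0),
# and check the extended convexity bound
#   F_Q(sum_a p_a rho_a) <= sum_a (dp_a/dx)^2 / p_a + sum_a p_a F_Q(rho_a).
import numpy as np

def qfi(rho_of_x, x, h=1e-6):
    rho = rho_of_x(x)
    drho = (rho_of_x(x + h) - rho_of_x(x - h)) / (2 * h)   # finite-difference derivative
    lam, vec = np.linalg.eigh(rho)
    f = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            d = lam[i] + lam[j]
            if d > 1e-12:
                f += 2 * abs(vec[:, i].conj() @ drho @ vec[:, j]) ** 2 / d
    return f

def expm_z(theta):            # exp(-i theta sigma_z / 2), written explicitly for a qubit
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def rho1(x):                  # pure state rotated about z by x
    psi = expm_z(x) @ plus
    return np.outer(psi, psi.conj())

def rho2(x):                  # a partly mixed, x-dependent state
    return 0.8 * rho1(2 * x) + 0.2 * np.eye(2) / 2

def p(x):                     # x-dependent mixing probability
    return 0.5 + 0.3 * np.sin(x)

def rho_mix(x):
    return p(x) * rho1(x) + (1 - p(x)) * rho2(x)

x0, h = 0.4, 1e-6
dp = (p(x0 + h) - p(x0 - h)) / (2 * h)
# classical part: (dp/dx)^2/p + (d(1-p)/dx)^2/(1-p), with d(1-p)/dx = -dp/dx
f_classical = dp ** 2 / p(x0) + dp ** 2 / (1 - p(x0))
lhs = qfi(rho_mix, x0)
rhs = f_classical + p(x0) * qfi(rho1, x0) + (1 - p(x0)) * qfi(rho2, x0)
print("F_Q(mixture) = %.4f  <=  bound = %.4f" % (lhs, rhs))
```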
_ qfi for a general dynamical quantum channels_.a quantum state passing through a ( parameter - dependent ) quantum channel evolves as =\textstyle{\sum_a } a_{a}(\mathrm{x } ) \varrho_0 a_{a}^{\dag}(\mathrm{x}),\ ] ] given by a set of kraus operators ( which are not unique ) . this ( trace - preserving ) completely - positive map describes the general class of ( open ) quantum evolutions . this form immediately implies a possible ensemble decomposition with ] . now we apply the extended convexity bound ( [ ave - cfi - qfi ] ) in which bearing in mind eq . ( [ f_u - text ] ) for the extended qfi we replace on the right - hand side with . it is straightforward to see that if in the proof of eq . ( [ ave - cfi - qfi ] ) we use an nsld ( ) rather than sld ( ) , the final result changes to ] and +\sqrt{1-q}\cos[\alpha \mathrm{x}]\sigma_z)e^{i \mathrm{x } \tau \sigma_z/2} ] constitute a mixture , where indicates the probe . by choosing , where , eq . gives , where and . from this relation , if and do not depend on , one can find the threshold size as . however , in finding the optimal ( with respect to the arbitrary parameter ) , an -dependent arises . choosing ] and when , and and when . figure [ transition ] depicts this quantum - to - classical transition . note that we have not optimized over _ all _ compatible kraus operators , whereby our obtained upper bound here does not necessarily follow the exact qfi . in fact , one can show that here the exact qfi vanishes exponentially with and . if we perform an exhaustive search and optimization over a larger class of kraus operators , we expect to capture this exponential reduction of the qfi through our formalism too . ( solid - black ) and its approximations [ dashed - red ] and [ dashed - blue ] for and , respectively . here , , and , which yield , agreeing also with the plot [ the light blue area ] . the inset compares our bound [ dashed - red ] with the exact value of [ solid - black ] , which again both agree well up to . ] _ _ extended qfi for lindbladian evolutions.__suppose that the dynamics of an open quantum system has the dynamical semigroup property , which has been proven to give rise to the lindbladian master equation , where the parameter to be estimated is in . we can replace with in the dynamical equation with no adverse effect . in the case that all the operators of the set commute with each other ( as in the dephasing dynamics of example below ) , one can obtain , \label{erho - x0}\\ \partial_{\mathrm{x}_a}\varrho&=\tau(\gamma_{a}\varrho \gamma_{a}^{\dag}-\frac{1}{2}\{\gamma_{a}^{\dag}\gamma_{a},\varrho\}).\end{aligned}\ ] ] when is invertible , straightforward calculations yield as possible choices of the nsld in the estimation of and , respectively . for estimating eq . gives this relation shows that , when estimating the hamiltonian coupling , the effect of the interaction with environment is encapsulated indirectly only through in the variance of the hamiltonian . when the system evolves unitarily the result is the same as eq . with replaced with . this result is in complete agreement with the known bound on the qfi . for the estimation of a jump rate , when commutes with , from eq . we obtain \big).\end{aligned}\ ] ] since this relation exhibits a direct dependence on the _ dynamical _ properties of the system , it can be useful in studying the role of various features of the open system in the precision of a metrology scenario . additionally , eq . 
may hint which initial quantum state is more suitable giving a lower estimation error . the following example also shows that our framework can give exact or close - to - exact results . _ example 2 : quantum dephasing channel.__consider a dephasing quantum channel defined as . thus . for a separable scenario with the initial state [ where , eq . ( [ f - lindblad - dynamics ] ) yields , which is significantly close to the exact value . if we choose the entangled , we get , in comparison with the slightly different exact qfi . _ _ summary and outlook.__here we have proved an extended convexity property for quantum fisher information . this property implies that quantum fisher information of a mixture ( convex decomposition ) comprising of quantum states with some probabilities is bounded by average quantum fisher information of constituent states plus classical fisher information attributed to the mixing probabilities . this division of quantum fisher information to quantum and classical parts has been shown to have physically interesting and important consequences . for example , we supplemented this convexity property with a notion of non - hermitian symmetric logarithmic derivative to prove that our convexity relation gives rise to exact value of quantum fisher information in a general open - system / noisy quantum metrology . the non - hermitian extension has also been shown to have several appealing physical implications on its own . this concept has enabled us to derive general , closed ( and simple ) upper bounds on quantum fisher information for open - system scenarios with lindbladian ( or dynamical semigroup ) dynamics . an interesting and practically relevant feature of such bounds is that they clearly relate dynamics to the expected precision of an associated quantum metrology scenario . we have also demonstrated that these bounds are exact or close - to - exact in some important physical systems , and have an intuitive geometrical interpretation . another opening that our extended convexity property has made possible is to track how in a quantum metrology scenario precision exhibits a classical ( shot - noise ) or quantum ( sub - shot - noise or heisenberg ) behavior . in particular , we have shown that as the number of probes increases , a competition between classical and quantum parts of fisher information could determine whether and when ( in terms of probe size ) to expect either of classical or quantum regimes . it is evident that this possibility can have numerous implications for classical / quantum control and for optimizing a metrology scenario for high - precision advanced technologies in physical ( and even biological ) systems . 99 v. giovannetti , s. lloyd , and l. maccone , nature photon . * 5 * , 222 ( 2011 ) ; p. cappellaro , j. emerson , n. boulant , c. ramanathan , s. lloyd , and d. g. cory , phys . rev . lett . * 94 * , 020502 ( 2005 ) ; v. giovannetti , s. lloyd , and l. maccone , _ ibid . _ * 96 * , 010401 ( 2006 ) . c. w. helstrom , _ quantum detection and estimation theory _ ( academic press , new york , 1976 ) . a. s. holevo , _ probabilistic and statistical aspects of quantum theory _ ( north - holland , amsterdam , 1982 ) . s. l. braunstein and c. m. caves , phys . rev . lett . * 72 * , 3439 ( 1994 ) ; s. l. braunstein , c. m. caves , and g. j. milburn , ann . phys . ( n.y . ) * 247 * , 135 ( 1996 ) . p. r. bevington and d. k. robinson , _ data reduction and error analysis for the physical sciences _ ( mcgraw - hill , new york , 2003 ) . a. 
rivas and a. luis , phys . rev . lett . * 105 * , 010403 ( 2010 ) ; d. braun , eur . phys . j. d * 59 * , 521 ( 2010 ) . f. benatti , s. alipour , and a. t. rezakhani , new j. phys . * 16 * , 015023 ( 2014 ) . f. benatti , r. floreanini , and u. marzolino , phys . rev . a * 89 * , 032326 ( 2014 ) . m. zwierz , c. a. prez - delgado , and p. kok , phys . rev . lett . * 105 * , 180402 ( 2010 ) . s. boixo , s. t. flammia , c. m. caves , and jm geremia , phys . rev . lett . * 98 * , 090401 ( 2007 ) ; s. m. roy and s. l. braunstein , phys . rev . lett . * 100 * , 220501 ( 2008 ) ; m. napolitano , m. koschorreck , b. dubost , n. behbood , r. j. sewell , and m. w. mitchell , nature * 471 * , 486 ( 2011 ) ; b. c. sanders and g. j. milburn , phys . rev . lett . * 75 * , 2944 ( 1995 ) . d. leibfried , m. d. barrett , t. schaetz , j. britton , j. chiaverini , w. m. itano , j. d. jost , c. langer , and d. j. wineland , science * 304 * , 1476 ( 2004 ) ; g. brida , m. genovese , and i. ruo berchera , nature photon . * 4 * , 227 ( 2010 ) ; b. lcke , m. scherer , j. kruse , l. pezz , f. deuretzbacher , p. hyllus , o. topic , j. peise , w. ertmer , j. arlt , l. santos , a. smerzi , and c. klempt , science * 334 * , 773 ( 2011 ) ; j. abadie _ et al . _ , nature phys . * 7 * , 962 ( 2011 ) ; m. a. taylor , j. janousek , v. daria , j. knittel , b. hage , h .- a . bachor , and w. p. bowen , nature photon . * 7 * , 229 ( 2013 ) . v. giovannetti , s. lloyd , and l. maccone , nature photon . * 5 * , 222 ( 2011 ) . s. f. huelga , c. macchiavello , t. pellizzari , a. k. ekert , m. b. plenio , and j. i. cirac , phys . rev . lett . * 79 * , 3865 ( 1997 ) ; y. watanabe , t. sagawa , and m. ueda , phys . rev . lett . * 104 * , 020401 ( 2010 ) ; x .- m . lu , x. wang , and c. p. sun , phys . rev . a * 82 * , 042103 ( 2010 ) ; a. monras and m. g. a. paris , phys . rev . lett . * 98 * , 160401 ( 2007 ) ; g. adesso , f. dellanno , s. de siena , f. illuminati , and l. a. m. souza , phys . rev . a * 79 * , 040305(r ) ( 2009 ) ; a. monras and f. illuminati , phys . rev . a * 83 * , 012315 ( 2011 ) . b. m. escher , r. l. de matos filho , and l. davidovich , nature phys . * 7 * , 406 ( 2011 ) . m. tsang , new j. phys . * 15 * , 073005 ( 2013 ) . s. alipour , m. mehboudi , and a. t. rezakhani , phys . rev . lett . * 112 * , 120405 ( 2014 ) . a. fujiwara and h. imai , j. phys . a : math . theor . * 41 * , 255304 ( 2008 ) . j. koodynski and r. demkowicz - dobrzaski , new j. phys . * 15 * , 073043 ( 2013 ) . r. demkowicz - dobrzaski , j. koodynski , and m. gu , nature commun . * 3 * , 1063 ( 2012 ) . a. fujiwara , phys . rev . a * 63 * , 042304 ( 2001 ) ; s. yu , arxiv:1302.5311 . g. tth and d. petz , phys . rev . a * 87 * , 032324 ( 2013 ) . a. uhlmann , phys . lett . a * 161 * , 329 ( 1992 ) . a. t. rezakhani and p. zanardi , _ ibid . _ * 73 * , 012107 ( 2006 ) . h. cramr , _ mathematical methods of statistics _ ( princeton university press , princeton , nj , 1946 ) . m. g. a. paris , intl . j. quant . inf . * 07 * , 125 ( 2009 ) . see the supplemental material for details of derivations . m. a. nielsen , phys . rev . a * 62 * , 052308 ( 2000 ) . h. nagaoka , _ on fisher information of quantum statistical models _ ( world scientific , singapore , 2005 ) . h .- p . breuer and f. petruccione , _ the theory of open quantum systems _ ( oxford university press , new york , 2002 ) ; a. rivas and s. f. huelga , _ open quantum systems : an introduction _ ( springer , heidelberg , 2012 ) . 
equations and are special cases of the more general matrix equation , where and are hermitian operators , and is an unknown operator . if ( is the projection onto the null space of ) , this equation always has a solution , given as , where is the inverse of over its support ( or the pseudo / moore - penrose inverse ) , is an arbitrary operator , and satisfies [ d. s. djordjevi , j. comput . appl . math . * 200 * , 701 ( 2007 ) ] . a. w. chin , s. f. huelga , and m. b. plenio , phys . rev . lett . * 109 * , 233601 ( 2012 ) . * supplemental material *
|
internet of things ( iot ) refers to a technology paradigm wherein ubiquitous sensors numbering in the billions will able to monitor physical infrastructure and environment , human beings and virtual entities in real - time , process both real - time and historic observations , and take actions that improve the efficiency and reliability of systems , or the comfort and lifestyle of society .the technology building blocks for iot have been ramping up over a decade , with research into pervasive and ubiquitous computing , and sensor networks forming precursors .recent growth in the capabilities of high - speed mobile ( e.g. , 2g/3g/4 g ) and _ ad hoc _( e.g. , bluetooth ) networks , smart phones , affordable sensing and crowd - sourced data collection , cloud data - centers and big data analytics platforms have all contributed to the current inflection point for iot . currently , the iot applications are often manifest in vertical domains , such as demand - response optimization and outage management in _ smart grids _ , or fitness and sleep tracking and recommendations by _ smart watches and health bands_ . the iot stack for such domains is tightly integrated to serve specific needs , but typically operates on a closed - loop _ observe orient decide act ( ooda ) _ cycle , where sensors communicate time - series observations of the ( physical or human ) system to a central server or the cloud for analysis , and the analytics drive recommendations that are enacted on , or notified to , the system to improve it , which is again observed and so on .in fact , this _ closed - loop _ responsiveness is one of the essential design characteristics of iot applications .this low - latency cycle makes it necessary to process data streaming from sensors at fine spatial and temporal scales , in _ real - time _ , to derive actionable intelligence .in particular , this streaming analytics has be to done at massive scales ( millions of sensors , thousands of events per second ) from across distributed sensors , requiring large computational resources ._ cloud computing _ offers a natural platform for scalable processing of the observations at globally distributed data centers , and sending a feedback response to the iot system at the edge .recent _ big data platforms _ like apache storm and spark provide an intuitive programming model for composing such streaming applications , with a scalable , low - latency execution engine designed for commodity clusters and clouds . these _ distributed stream processing systems ( dsps ) _ are becoming essential components of any iot stack to support online analytics and decision - making for iot applications .in fact , reference iot solutions from cloud providers like amazon aws and microsoft azure include their proprietary stream and event processing engines as part of the iot analytics architecture .shared - memory stream processing systems have been investigated over a decade back for wireless sensor networks , with community benchmarks such as _ linear road _ being proposed .but there has not been a detailed review of , or benchmarks for , _ distributed _ stream processing for iot domains . in particular , the efficacy and performance of contemporary dsps , which were originally designed for social network and web traffic , have not been rigorously studied for _ iot data streams and applications_. 
we address this gap in this paper .we develop a benchmark suite for dsps to evaluate their effectiveness for streaming iot applications .the proposed workload is based on common building - block tasks observed in various iot domains for real - time decision making , and the input streams are sourced from real iot observations from smart cities .specifically , we make the following contributions : 1 .we classify different characteristics of streaming applications and their data sources , in [ sec : features ] .we propose categories of tasks that are essential for iot applications and the key features that are present in their input data streamswe identify performance metrics of dsps that are necessary to meet the latency and scalability needs of streaming iot applications , in [ sec : metrics ] .we propose an iot benchmark for dsps based on representative _ micro - benchmark tasks _ , drawn from the above categories , in [ sec : benchmark ] .further , we design two reference iot applications for _ statistical analytics _ and _ predictive analytics _ composed from these tasks .we also offer real - world streams with different distributions on which to evaluate them .4 . we run the benchmark for the popular apache storm dsps , and present empirical results for the same in [ sec : results ] .our contributions here will allow iot applications to evaluate if current and future dsps meet their performance and scalability needs , and offer a baseline for big data researchers and developers to uniformly compare dsps platforms for different iot domains .stream processing systems allow users to compose applications as a dataflow graph , with task vertices having some user - defined logic , and streaming edges passing messages between the tasks , and run these applications continuously over incoming data streams .early data stream management systems ( dsms ) were motivated by sensor network applications , that have similarities to iot .they supported continuous query languages with operators such as join , aggregators similar to sql , but with a temporal dimension using windowed - join operations .these have been extended to distributed implementations and complex event processing ( cep ) engines for detecting sequences and patterns .current distributed stream processing systems ( dsps ) like storm and spark streaming leverage big data fundamentals , running on commodity clusters and clouds , offering weak scaling , ensuring robustness , and supporting fast data processing over thousands of events per second .they do not support native query operators and instead allow users to plug in their own logic composed as dataflow graphs executed across a cluster . while developed for web and social network applications , such fast data platforms have found use in financial markets , astronomy , and particle physics .iot is one of the more recent domains to consider them .work on dsms spawned the linear road benchmark ( lrb ) that was proposed as an application benchmark . in the scenario , dsms had to evaluate toll and traffic queries over event streams from a virtual toll collection and traffic monitoring system .this parallels with current smart transportation scenarios. 
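as a concrete illustration of the windowed - join operators mentioned above , the sketch below joins two event streams on a key within a fixed time window ; it is a toy , platform - agnostic example in plain python ( no dsms / dsps api is used ) , and the streams and field names are invented for illustration .

```python
# an illustrative sketch (not tied to any particular dsms/dsps): a time-windowed
# join of two event streams on a key. events are (timestamp, key, value) tuples;
# two events are joined if they share a key and their timestamps differ by at
# most `window` seconds. a real engine would process events incrementally rather
# than sorting them up front.
from collections import deque

def windowed_join(stream_a, stream_b, window=5.0):
    buf_a, buf_b, joined = deque(), deque(), []
    # merge the two streams by timestamp, tagging the origin of each event
    events = sorted([(t, "a", k, v) for t, k, v in stream_a] +
                    [(t, "b", k, v) for t, k, v in stream_b])
    for t, side, key, val in events:
        own, other = (buf_a, buf_b) if side == "a" else (buf_b, buf_a)
        # expire events that fell out of the time window
        for buf in (buf_a, buf_b):
            while buf and buf[0][0] < t - window:
                buf.popleft()
        # join the new event against the opposite buffer
        joined.extend((key, val, v2, abs(t - t2))
                      for t2, k2, v2 in other if k2 == key)
        own.append((t, key, val))
    return joined

taxi_gps   = [(1.0, "cab7", (12.90, 77.60)), (8.0, "cab7", (12.95, 77.61))]
toll_reads = [(2.5, "cab7", "gantry-3"),     (20.0, "cab7", "gantry-9")]
print(windowed_join(taxi_gps, toll_reads, window=5.0))
```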
however , there have been few studies or community efforts on benchmarking dsps , other than individual evaluation of research prototypes against popular dsps like storm or spark .these efforts define their own measures of success typically limited to throughput and latency and use generic workloads such as enron email dataset with no - operation as micro - benchmark to compare infosphere streams and storm.sparkbench uses two streaming applications , twitter popular tag retrieving data from the twitter website to calculate most popular tag every minute and pageview over synthetic user clicks to get various statistics using spark .stream bench has proposed 7 micro - benchmarks on 4 different synthetic workload suites generated from real - time web logs and network traffic to evaluate dsps .metrics including performance , durability and fault tolerance are proposed .the benchmark covers different dataflow composition patterns and common tasks like grep and wordcount . while useful as a generic streaming benchmark , it does not consider aspects unique to iot applications and streams .sparkbench is a framework - specific benchmark for apache spark , and includes four categories of applications from domains spanning graph computation and sql queries , with one on streaming applications supported by spark streaming .the benchmark metrics include cpu , memory , disk and network io , with the goal of identifying tuning parameters to improve spark s performance .cepben evaluates the performance of cep systems based of the functional behavior of queries .it shows the degree of complexity of cep operations like filter , transform and pattern detection .the evaluation metrics consider event processing latency , but ignore network overheads and cpu utilization .further , cep applications rely on a declarative query syntax to match event patterns rather than a dataflow composition based on user - logic provided by dsps . in contrast, the goal for this paper is to develop relevant micro- and application - level benchmarks for evaluating dsps , specifically for _ iot workloads _ for which such platforms are increasingly being used .our benchmark is designed to be _ platform - agnostic _ , _ simple _ to implement and execute within diverse dsps , and _ representative _ of both the application logic and data streams observed in iot domains .this allows for the performance of dsps to be independently and reproducibly verified for iot applications .there has been a slew of big data benchmarks that have been developed recently in the context of processing high volume ( i.e. 
, mapreduce - style ) and enterprise / web data that complement our work ._ hibench _ is a workload suite for evaluating hadoop with popular micro - benchmarks like sort , wordcount and terasort , mapreduce applications like nutch indexing and pagerank , and machine learning algorithms like k - means clustering ._ bigdatabench _ analyzes workloads from social network and search engines , and analytics algorithms like support vector machine ( svm ) over structured , semi - structured and unstructured web data .both these benchmarks are general purpose workloads that do not target any specific domain , but mapreduce platforms at large ._ bigbench _ uses a synthetic data generator to simulate enterprise data found in online retail businesses .it combines structured data generation from the tpc - ds benchmark , semi - structured data on user clicks , and unstructured data from online product reviews .queries cover data _ velocity _ by processing periodic refreshes that feed into the data store , _ variety _ by including free - text user reviews , and _ volume _ by querying over a large web log of clicks .we take a similar approach for benchmarking fast data platforms , targeting the iot domain specifically and using real public data streams .there has been some recent work on benchmarking iot applications .in particular , the generating large volumes of synthetic sensor data with realistic values is challenging , yet required for benchmarking . _iotabench _ provides a scalable synthetic generator of time - series datasets .it uses a markov chain model for scaling the time series with a limited number of inputs such that important statistical properties of the stream is retained in the generated data .they have demonstrated this for smart meter data .the benchmark also includes six sql queries to evaluate the performance of different query platforms on the generated dataset .their emphasis is more on the data characteristics and content , which supplements our focus on the systems aspects of the executing platform .citybench is a benchmark to evaluate rdf stream processing systems .they include different generation patterns for smart city data , such as traffic vehicles , parking , weather , pollution , cultural and library events , with changing event rates and playback speeds .they propose fixed set of semantic queries over this dataset , with concurrent execution of queries and sensor streams . here , the target platform is different ( rdf database ) , but in a spirit as our work .in this section , we review the common application composition capabilities of dsps , and the dimensions of the streaming applications that affect their performance on dsps .these semantics help define and describe streaming iot applications based on dsps capabilities . subsequently in this section , we also categorize iot tasks , applications and data streams based on the domain requirements . together , these offer a search space for defining workloads that meaningfully and comprehensively validate iot applications on dsps .dsps applications are commonly composed as a _ dataflow graph _ , where vertices are user provided _ tasks _ and directed edges are refer to _ streams of messages _ that can pass between them . the graph need not be acyclic .tasks in the dataflows can execute zero or more times , and a task execution usually depends on data - dependency semantics , i.e , when `` adequate '' inputs are available , the task executes . 
however , there are also more nuanced patterns that are supported by dsps that we discuss. _ messages _ ( or events or tuples ) from / to the stream are consumed / produced by the tasks .dsps typically treat the messages as opaque content , and only the user logic may interpret the message content .however , dsps may assign identifiers to messages for fault - tolerance and delivery guarantees , and some message attributes may be explicitly exposed as part of the application composition for the dsps to route messages to downstream tasks ._ selectivity ratio _ , also called _ gain _ , is the number of output messages emitted by a task on consuming a unit input message , expressed as =_input rate_:_output rate_. based on this , one can assess whether a task amplifies or attenuates the incoming message rate .it is important to consider this while designing benchmarks as it can have a multiplicative impact on downstream tasks .there are message generation , consumption and routing semantics associated with tasks and their dataflow composition .[ fig : semantics ] captures the basic _ composition patterns _ supported by modern dsps .` source ` tasks have only outgoing edge(s ) , and these tasks encapsulate user logic to generate or receive the input messages that are passed to the dataflow . likewise , ` sink ` tasks have only incoming edge(s ) and these tasks react to the output messages from the application , say , by storing it or sending an external notification . `transform ` tasks , sometimes called _ map _ tasks , generate one output message for every input message received ( ) .their user logic performs a transformation on the message , such as changing the units or projecting only a subset of attribute values . `filter ` tasks allow only a subset of messages that they receive to pass through , optionally performing a transformation on them ( , ) .conversely , a ` flatmap ` consumes one message and emits multiple messages ( ) .an ` aggregate ` pattern consumes a _ window _ of messages , with the window width provided as a _ count _ or a _ time _ duration , and generates one or more messages that is an aggregation over each message window ( ) . when a task has multiple outgoing edges , routing semantics on the dataflow control if an output message is _ duplicated _ onto all the edges , or just one downstream task is selected for delivery , either based on a _ round robin _ behavior or using a _ hash function _ on an attribute in the outgoing message to decide the target task .similarly , multiple incoming streams arriving at a task may be _ merged _ into a single interleaved message stream for the task . or alternatively , the messages coming on each incoming stream may be conjugated , based on order of arrival or an attribute exposed in each message , to form a _ joined _ stream of messages. 
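a minimal , platform - agnostic sketch of these composition patterns ( source , transform , filter and aggregate ) and of selectivity is given below ; the tasks are modeled as chained python generators , and all names and values are illustrative assumptions rather than part of any benchmark definition .

```python
# a minimal sketch of the composition patterns above, modeled as python generators
# chained into a dataflow: a source, a transform (1:1), a filter (1:0/1), and a
# count-based aggregate (N:1). the overall selectivity is the product of the
# per-task selectivities.
import random

def source(n):                       # source task: emit n synthetic observations
    for i in range(n):
        yield {"sensor": i % 4, "temp_f": random.uniform(40, 110)}

def transform(stream):               # transform: convert units, 1:1 selectivity
    for msg in stream:
        msg["temp_c"] = (msg["temp_f"] - 32) * 5.0 / 9.0
        yield msg

def band_pass(stream, lo=0.0, hi=35.0):   # filter: drop out-of-range readings
    for msg in stream:
        if lo <= msg["temp_c"] <= hi:
            yield msg

def aggregate(stream, width=10):     # aggregate: average over count windows, N:1
    window = []
    for msg in stream:
        window.append(msg["temp_c"])
        if len(window) == width:
            yield sum(window) / width
            window = []

dataflow = aggregate(band_pass(transform(source(1000))))
outputs = list(dataflow)
print("input rate : output rate = 1000 : %d" % len(outputs))
```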
there are additional dimensions of the streaming dataflow that can determine its performance on a dsps .tasks may be _ data parallel _ , in which case , it may be allocated concurrent resources ( threads , cores ) to process messages in parallel by different instances the task .this is typically possible for tasks that do not maintain state across multiple messages .the _ number of tasks _ in the dataflow graph indicates the size of the streaming application .tasks are mapped to computing resources , and depending of their degree of parallelism and resource usage , it determines the cores / vms required for executing the application .the _ length of the dataflow _ is the latency of the critical ( i.e. , longest ) path through the dataflow graph , if the graph does not have cycles .this gives an estimate of the expected latency for each message and also influences the number of network hops a message on the critical path has to take in the cluster .we list a few characteristics of the input data streams that impact the runtime performance of streaming applications , and help classify iot message streams .the _ input throughput _ in messages / sec is the cumulative frequency at which messages enter the source tasks of the dataflow .input throughputs can vary by application domain , and are determined both by the number of streams of messages and their individual rates .this combined with the dataflow selectivity will impact the load on the dataflow and the output throughput . _throughput distribution _ captures the variation of input throughput over time . in real - world settings ,the input data rate is usually not constant and dsps need to adapt to this .there may be several common data rate distributions besides a _ uniform _ one. there may be _ bursts _ of data coming from a single sensor , or a coordinated set of sensors . a _ saw - tooth _behavior may be seen in the ramp - up/-down before / after specific events . _normal _ distribution are seen with diurnal ( day vs. night ) stream sources , with _bi - modal _ variations capturing peaks during the morning and evening periods of human activity .lastly , the _ message size _ provides the average size of each message , in bytes .often , the messages sizes remain constant for structured messages arriving from specific sensor or observation types , but may vary for free - text input streams or those that interleave messages of different types .this size help assess the communication cost of transferring messages in the dataflow .iot covers a broad swathe of domains , many of which are rapidly developing .so , it is not possible to comprehensively capture all possible iot application scenarios .however , dsps have clear value in supporting the real - time processing , analytics , decision making and feedback that is intrinsic to most iot domains . here , we attempt to categorize these common processing and analytics tasks that are performed over real - time data streams. 
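the throughput distributions described above ( uniform , bursty and bimodal ) can be mimicked with a simple rate - profile generator such as the sketch below ; the particular shapes , peak hours and rates are assumptions for illustration , not measured iot workloads .

```python
# an illustrative generator (assumed shapes, not a benchmark specification) for the
# input-rate profiles described above: a uniform rate, a bursty rate, and a bimodal
# "morning/evening peak" rate, expressed as messages/sec over a 24-hour day.
import math
import random

def uniform_rate(hour, base=100):
    return base

def bursty_rate(hour, base=100, burst=1000, burst_prob=0.05):
    return burst if random.random() < burst_prob else base

def bimodal_rate(hour, base=50, peak=400):
    # gaussian bumps centred on 09:00 and 19:00
    bump = lambda centre, width: math.exp(-((hour - centre) ** 2) / (2 * width ** 2))
    return base + peak * (bump(9, 1.5) + bump(19, 1.5))

for hour in range(0, 24, 3):
    print("%02d:00  uniform=%4d  bursty=%4d  bimodal=%4d"
          % (hour, uniform_rate(hour), bursty_rate(hour), int(bimodal_rate(hour))))
```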
* parse .* messages are encoded on the wire in a standard text - based or binary representation by the stream sources , and need to be parsed upon arrival at the application .text formats in particular require string parsing by the tasks , and are also larger in size on the wire .the tasks within the application may themselves retain the incoming format in their streams , or switch to another format or data model , say , by projecting a subset of the fields .industry - standard formats that are popular for iot domains include csv , xml and json text formats , and exi and cbor binary formats . ** messages may require to be filtered based on specific attribute values present in them , as part of data quality checks , to route a subset of message types to a part of the dataflow graph , or as part of their application logic .value and band - pass filters that test an attribute s _ numerical value ranges _ are common , and are both compact to model and fast to execute .since iot event rates may be high , more efficient bloom filters may also be used to process _ discrete values _ with low space complexity but with a small fraction of false positives .* statistical analytics . *groups of messages within a sequential time or count window of a stream may require to be aggregated as part of the application .the aggregation function may be _ common mathematical operations _ like average , count , minimum and maximum .they may also be _ higher order statistics _such finding outliers , quartiles , second and third order moments , and counts of distinct elements .data cleaning _ like linear interpolation or denoising using kalman filters are common for sensor - based data streams .some tasks may maintain just local state for the window width ( e.g. , local average ) while others may maintain state across windows ( e.g. , moving average ). when the state size grows , here again approximate aggregation algorithms may be used .* predictive analytics . * predicting future behavior of the system based on past and current messages is an important part of iot applications .various statistical and machine - learning algorithms may be employed for predictive analytics over sensor streams .the _ predictions _ may either use a recent window of messages to estimate the future values over a time or count horizon in future , or train models over streaming messages that are periodically used for predictions over the incoming messages .the _ training _ itself can be an online task that is part of an application . for e.g. , arima and linear regression use statistical methods to predict uni- or multi - variate attribute values , respectively .classification algorithms like decision trees , neural networks and nave bayes can be trained to map discrete values to a category , which may lead to specific actions taken on the system .external libraries like weka or statistical packages like r may be used by such tasks .* pattern detection . *another class of tasks are those that identify patterns of behavior over several events .unlike window aggregation which operate over static window sizes and perform a function over the values , pattern detection matches user - defined predicates on messages that may not be sequential or even span streams , and returned the matched messages .these are often modeled as _ state transition automata _ or _ query graphs_. common patterns include contiguous or non - contiguous sequence of messages with specific property on each message ( e.g. 
, high - low - high pattern over 3 messages ) , or a join over two streams based on a common attribute value .complex event processing ( cep ) engines may be embedded within the dsps task to match these patterns .* visual analytics .* other than automated decision making , iot applications often generate _ charts and animations _ for consumption by end - users or system managers .these visual analytics may be performed either at the client , in which case the processed data stream is aggregated and provided to the users .alternatively , the streaming application may itself periodically generate such plots and visualizations as part of the dataflow , to be hosted on the web or pushed to the client .charting and visualization libraries like d3.js or jfreechart may be used for this purpose .* io operations . *lastly , the iot dataflow may need to access external storage or messaging services to access / push data into / out of the application .these may be to store or load trained models , archive incoming data streams , access historic data for aggregation and comparison , and subscribe to message streams or publish actions back to the system .these require access to _ file storage , sql and nosql databases , and publish - subscribe messaging systems_. often , these may be hosted as part of the cloud platforms themselves .the tasks from the above categories , along with other domain - specific tasks , are composed together to form streaming iot dataflows .these domain dataflows themselves fall into specific classes based on common use - case scenarios , and loosely map to the observe - orient - decide - act ( ooda ) phases ._ extract - transform - load ( etl ) and archival _ applications are front - line `` observation '' dataflows that receive and pre - process the data streams , and if necessary , archive a copy of the data offline .pre - processing may perform data format transformations , normalize the units of observations , data quality checks to remove invalid data , interpolate missing data items , and temporally reorder messages arriving from different streams .the pre - processed data may be archived to table storage , and passed onto subsequent dataflow for further analysis ._ summarization and visualization _applications perform statistical aggregation and analytics over the data streams to understand the behavior of the iot system at a coarser granularity . such summarization can give the high - level pulse of the system , and help `` orient '' the decision making to the current situation .these tasks are often succeeded by visualizations tasks in the dataflow to present it to end - users and decision makers ._ prediction and pattern detection _ applications help determine the future state of the iot system and `` decide '' if any reaction is required .they identify patterns of interest that may indicate the need for a correction , or trends based on current behavior that require preemptive actions .for e.g. , a trend that indicates an unsustainably growing load on a smart power grid may decide to preemptively shed load , or a detection that the heart - rate from a fitness watch is dangerously high may trigger a slowdown in physical activities ._ classification and notification _ applications determine specific `` actions '' that are required and communicate them to the iot system .decisions may be mapped to specific actions , and the entities in the iot system that can enact those be notified . for e.g. 
for example , the need for load shedding in the power grid may map to specific residents from whom to request the curtailment , or the need to reduce physical activity may lead to a treadmill being notified to reduce its speed . iot data streams are often generated by physical sensors that observe physical systems or the environment . as a result , they are typically time - series data that are generated periodically by the sensors . the sampling rate for these sensors may vary from once a day to hundreds per second , depending on the domain . the number of sensors themselves may vary from a few hundred to millions as well . iot applications like smart power grids can generate high frequency plug load data at thousands of messages / sec from a small cluster of residents , or low frequency data from a large set of sensors , such as in smart transportation or environmental sensing . as a result , we may encounter a wide range of input throughputs from to messages / sec . in comparison , streaming web applications like twitter deal with tweets / sec from 300 m users . at the same time , this event rate itself may not be uniform across time . sensors may also be configured to emit data only when there is a change in the observed value , rather than unnecessarily transmitting data that has not changed . this helps conserve network bandwidth and power for constrained devices when the observations are slow changing . further , if data freshness is not critical to the application , they may sample at a high rate but transmit at a lower rate , in burst mode . e.g. , smart meters may collect kwh data at 15 min intervals from millions of residents but report it to the utility only a few times a day , while the fitbit smart watch syncs with the cloud every few minutes or hours even as data is recorded every few seconds . message variability also comes into play when human - related activity is being tracked . diurnal or bimodal event rates are seen , with single peaks in the afternoons , or dual peaks in the morning and evening . e.g. , sensors at businesses may match the former while traffic flow sensors may match the latter . there may also be a variety of observation types from the same sensor device , or different sensor devices generating messages . these may appear in the same message as different fields , or as different data streams . this will affect both the message rate and the message size . these sensors usually send well - formed messages rather than free - text messages , using standards like senml . hence their sizes are likely to be deterministic , leaving aside the encoding format ; text formats tend to bloat the size and also introduce size variability when mapping numbers to strings . however , social media like tweets and crowd - sourced data are occasionally used by iot applications , and these may have more variability in message sizes . we identify and formalize commonly - used quantitative performance measures for evaluating dsps for the iot workloads . * latency . * latency for a message that is generated by task is the time in seconds it took for that task to process one or more inputs to generate that message . if is the selectivity for a task , the time it took to consume messages to _ causally produce _ those output messages is the latency of the messages , with the _ average latency _ per message given by .
when we consider the average latency of the dataflow application , it is the average of the time difference between each message consumed at the source tasks and all its causally dependent messages generated at the sink tasks . the latency per message may vary depending on the input rate , the resources allocated to the task , and the type of message being processed . while this task latency is the inverse of the mean throughput , the _ end - to - end latency _ for the task within a dataflow will also include the network and queuing time to receive a tuple and transmit it downstream . * throughput . * the output throughput is the aggregated rate of output messages emitted out of the sink tasks , measured in messages per second . the throughput of a dataflow depends on the input throughput and the selectivity of the dataflow , provided the resource allocation and performance of the dsps are adequate . ideally , the output throughput , where is the input throughput for a dataflow with selectivity . it is also useful to measure the _ peak throughput _ that can be supported by a given application , which is the maximum stable rate that can be processed using a fixed quantum of resources . both throughput and latency measurements are relevant only under _ stable conditions _ , when the dsps can sustain a given input rate , i.e. , when the latency per message and the queue size on the input buffer remain constant and do not increase unsustainably . * jitter . * the ideal output throughput may deviate due to a variable rate of the input streams , changes in the paths taken by the input stream through the dataflow ( e.g. , at a ` hash ` pattern ) , or performance variability of the dsps . we use jitter to track the variation in the output throughput from the expected output throughput , defined for a time interval as the observed difference between the expected and actual output rate during the interval , normalized by the expected long - term average output rate given a long - term average input rate . in an ideal case , jitter will tend towards zero .
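the exact symbols of the latency , throughput and jitter definitions are lost in this version of the text , so the sketch below simply follows the verbal descriptions above ( causal source - to - sink time gaps , aggregate output rate , and the expected - minus - observed output normalized by the expected long - term average rate ) ; the function names and log layout are assumptions made for illustration .

```python
def average_latency(source_ts, sink_ts):
    """mean causal latency: gap between each sink message's timestamp and the
    timestamp of the source message that produced it (shared ids for simplicity).
    source_ts / sink_ts map message id -> timestamp in seconds."""
    gaps = [sink_ts[m] - source_ts[m] for m in sink_ts if m in source_ts]
    return sum(gaps) / len(gaps)

def output_throughput(sink_ts):
    """aggregate output rate at the sinks, in messages per second."""
    duration = max(sink_ts.values()) - min(sink_ts.values())
    return len(sink_ts) / duration

def jitter(expected_rate, observed_rate, long_term_avg_rate):
    """per-interval jitter: expected minus observed output rate in the interval,
    normalized by the expected long-term average output rate; ideally near zero."""
    return (expected_rate - observed_rate) / long_term_avg_rate
```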
* cpu and memory utilization . * streaming iot dataflows are expected to be resource intensive , and the ability of the dsps to use the distributed resources efficiently with minimal overhead is important . this also affects the vm resources and the consequent price to be paid to run the application using the given stream processing platform . we track the cpu and memory utilization for the dataflow as the average of the cpu and memory utilization across all the vms that are being used by the dataflow 's tasks . the per - vm information can also help identify which vms hosting which tasks are the potential bottlenecks , and can benefit from data - parallel scale - out . we propose benchmark workloads to help evaluate the metrics discussed before for various dsps . these benchmarks are in particular targeted at emerging iot applications , to help distinguish the capabilities of contemporary dsps on cloud computing infrastructure . the benchmarks themselves have two parts : the dataflow logic that is executed on the dsps and the input data streams that they are executed for . we next discuss our choices for both . we have identified two real - world iot data streams available in the public domain as candidates for our benchmarking workload . these correspond to the smart cities domain , which is a fast - growing space within iot . their features are shown in table [ tbl : datasets ] and the event rate distribution in fig . [ fig : data - distribution ] . [ table [ tbl : datasets ] : smart cities data stream features and rates at scaling . ] [ table [ tbl : tasts ] ] we include a single xml parser as a representative parsing operation within our suite . the bloom filter is a more practical filter operation for large discrete datasets , and we prefer that to a simple value range filter . we have several statistical analytics and aggregation tasks . these span simple averaging over a single attribute value , second order moments over time - series values , a kalman filter for denoising of sensor data , and an approximate count of distinct values for large discrete attribute values . predictive analytics using a multi - variate linear regression model that is trained offline and a sliding window univariate model that is trained online are included . a decision tree machine - learning model for discrete attribute values is also used for classification , based on offline training . lastly , we have several io tasks for reading and writing to cloud file and nosql storage , and for publishing to an mqtt publish - subscribe broker for notifications . due to limited space , we skip the pattern matching and visual analytics task categories . a micro - benchmark dataflow is composed for each of these tasks as a sequence of a source task , the benchmark task and a sink task . as can be seen , these tasks also capture different dataflow patterns such as transform , filter , aggregate , flat map , source and sink . application benchmarks are valuable in understanding how non - trivial and meaningful iot applications behave on dsps . application dataflows for a domain are most representative when they are constructed based on real or realistic application logic , rather than synthetic tasks .
in case applications use highly - custom logic or proprietary libraries , this may not be feasible or reusable as a community benchmark . however , many of the common iot tasks we have proposed earlier are naturally composable into application benchmarks that satisfy the requirements of an ooda decision - making loop . we propose application benchmarks that capture two common iot scenarios : a _ data pre - processing and statistical summarization ( stats ) _ application and a _ predictive analytics ( pred ) _ application . stats ( fig . [ fig : app - stats ] ) ingests incoming data streams , performs data filtering of outliers on individual observation types using a bloom filter , and then does three concurrent types of statistical analytics on observations from individual sensor / taxi ids : a sliding average over a event window for city / taxi ( a native time window of mins ) , a kalman filter for smoothing followed by a sliding - window linear regression , and an approximate count of distinct readings . the outcomes from these statistics are published by an mqtt task , which can separately be subscribed to and visualized on a client browser or a mobile app . the dummy sink task is used for logging . the pred dataflow captures the lifecycle of online prediction and classification to drive visualization and decision making for iot applications . it parses incoming messages and forks them to a decision tree classifier and a multi - variate regression task . the decision tree uses a trained model to classify messages into classes , such as good , average or poor air quality , based on one or more of their attribute values . the linear regression uses a trained model to predict an attribute value in the message using several others . it then estimates the error between the predicted and observed value , normalized by the sliding average of the observations . these outputs are then grouped and plotted , and the output file is written to cloud storage for hosting on a portal . one realistic addition is the use of a separate stream to periodically download newly trained classification and regression models from cloud storage , and push them to the prediction tasks . as such , these applications leverage many of the compositional capabilities of dsps . the dataflows include _ single and dual sources _ , tasks that are _ composed sequentially and in parallel _ , _ stateful and stateless _ tasks , and _ data parallel tasks _ allowing for concurrent instances . each message in the city and taxi streams contains multiple observation fields , but several of these tasks are applicable only on univariate streams and some are meaningful only for time - series data from individual sources . thus , the initial parse task for stats uses a _ flat map _ pattern to create observation - specific streams early on . these streams are further passed to task instances , grouped by their observation type and optionally their sensor id using a _ hash _ pattern . we implement the 13 micro - benchmarks as generic java tasks that can consume and produce objects . these tasks are building blocks that can be composed into micro - dataflows and the stats and pred dataflows using any dsps that is being benchmarked . to validate our proposed benchmark , we compose these dataflows on the apache storm open source dsps , popular for fast - data processing , using its java apis . we then run these for the two stream workloads and evaluate them based on the metrics we have defined . the benchmark is available online at ` https://github.com/dream-lab/bm-iot ` .
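as a rough , dsps - agnostic illustration of the source → benchmark task → sink composition and the in - memory timestamp logging used in these experiments , here is a minimal python harness ; it is a sketch under our own assumptions ( single - threaded loop , dictionary logs , made - up class names ) and not the storm / java implementation from the repository above .

```python
import random, time

class RandomSource:
    """source task: emits random integers, tagged with an id and a timestamp."""
    def __init__(self):
        self.count = 0
    def next(self):
        self.count += 1
        return {"id": self.count, "ts": time.time(), "value": random.randint(0, 10**6)}

class TimestampSink:
    """sink task: logs message id and arrival timestamp in memory."""
    def __init__(self):
        self.log = {}
    def accept(self, msg):
        self.log[msg["id"]] = time.time()

def run_micro_benchmark(task, num_messages=100000):
    """compose source -> task -> sink and report throughput and mean latency."""
    source, sink = RandomSource(), TimestampSink()
    start = time.time()
    source_ts = {}
    for _ in range(num_messages):
        msg = source.next()
        source_ts[msg["id"]] = msg["ts"]
        out = task.process(msg["value"])   # the benchmarked task logic
        if out is not None:                # selectivity < 1 drops some messages
            sink.accept(msg)
    elapsed = time.time() - start
    latencies = [sink.log[i] - source_ts[i] for i in sink.log]
    return {"throughput_msg_per_sec": len(sink.log) / elapsed,
            "mean_latency_sec": sum(latencies) / len(latencies)}

# usage sketch: run_micro_benchmark(some_task_with_a_process_method)
```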
in storm , each task 's logic is wrapped by a _ bolt _ that invokes the task for each incoming tuple and emits zero or more response tuples downstream . the dataflow is composed as a _ topology _ that defines the edges between the bolts , and the _ groupings _ which determine duplicate or hash semantics . we have implemented a scalable data - parallel event generator that acts as a source task ( _ spout _ ) . it loads time - series tuples from a given csv file and replays them as an input stream to the dataflow at a constant rate , at the maximum possible rate , or at intervals determined by the timestamps , optionally scaled to go faster or slower . we generate random integers as tuples at the maximum rate for the micro - benchmarks , and replay the original city and taxi datasets at their native rates for the application benchmarks , following the earlier distribution . we use apache storm running on openjdk 1.7 , hosted on _ centos _ virtual machines ( vms ) in the singapore data center of the microsoft azure public cloud . for the micro - benchmarks , storm runs the task being benchmarked on one exclusive ` d1 ` size vm ( intel xeon e5 - 2660 core at 2.2 ghz , gib ram , gib ssd ) , while the supporting source and sink tasks and the master service run on a ` d8 ` size vm ( intel xeon e5 - 2660 cores at 2.2 ghz , gib ram , gib ssd ) . the larger vm for the supporting tasks and services ensures that they are not the bottleneck , and helps benchmark the peak rate supported by the micro - benchmark task on a single core vm . for the stats and pred application benchmarks , we use ` d8 ` vms for all the tasks of the dataflow , while reserving additional ` d8 ` vms to exclusively run the spout and sink tasks , and the master service . we determine the number of cores and the data parallelism required by each task based on the peak rate supported by the task on a single core as given by the micro - benchmarks , and the peak rate seen by that task for a given dataflow application and stream workload . in some cases that are i / o bound ( e.g. , mqtt , azure storage ) rather than cpu bound , we run multiple task instances on a single core . we log the id and timestamp for each message at the source and the sink tasks in - memory to calculate the latency , throughput and jitter metrics , after correcting for skews in timestamps across vms . we also sample the cpu and memory usage on all vms every 5 secs to plot the utilization metrics . each experiment runs for mins of wallclock time , which translates to about days of event data for the city and taxi datasets at scaling . fig . [ fig : storm : micro : bm ] shows plots of the different metrics evaluated for the micro - benchmark tasks on storm when running at their peak input rate supported on a single ` d1 ` vm with one thread . the _ peak sustained throughput _ per task is shown in fig . [ fig : storm : micro : peakthru ] in _ log - scale _ . we see that most tasks can support msg / sec or a higher rate , going up to msg / sec for blf , kal and dac . xml parsing is highly cpu bound and has a peak throughput of only msg / sec , and the azure operations are i / o bound on the cloud service and even slower . the inverse of the peak sustained throughput gives the _ mean latency _ .
however , it is interesting to examine the _ end - to - end latency _ , calculated as the time taken for a message to be emitted from the source , pass through the benchmarked task , and arrive at the sink task . this is the effective time contributed to the total tuple latency by this task running within storm , including framework overheads . we see that while the mean latencies should be sub - millisecond for the observed throughputs , the box plot for end - to - end latency ( fig . [ fig : storm : micro : latency ] ) varies widely , up to ms for q3 . this wide variability could be because of non - uniform task execution times , where slow executions queue up incoming tuples that then suffer higher queuing times , such as for dtc and mlr that both use the weka library . alternatively , tasks supporting a high input rate in the order of msg / sec , such as dac and kal , may be more sensitive to even small per - tuple overheads of the framework , say , caused by thread contention between the storm system and worker threads , or by queue synchronization . the azure tasks that have a lower throughput also have a higher end - to - end latency , but much of this is attributable directly to the task latency . the box plot for _ jitter _ ( fig . [ fig : storm : micro : jitter ] ) shows values close to zero in all cases . this indicates the long - term stability of storm in processing the tasks at peak rate , without unsustainable queuing of input messages . the wider whiskers indicate the occasional mismatch between the expected and observed output rates . the box plots for cpu utilization ( fig . [ fig : storm : micro : cpu ] ) show the single - core vm effectively used at or above in all cases except for the azure tasks that are i / o bound . the memory utilization ( fig . [ fig : storm : micro : mem ] ) appears to be higher for tasks that support a high throughput , potentially indicating the memory consumed by messages waiting in queue rather than consumed by the task logic itself . the stats and pred application benchmarks are run for the city and taxi workloads at their native rates , and the performance plots are shown in fig . [ fig : apps ] . the end - to - end latencies of the applications depend on the sum of the end - to - end latencies of each task in the critical path of the dataflow . the peak rates supported by the tasks in stats are much higher than the input rates of city and taxi . so the latency box plot for stats is tightly bound ( fig . [ fig : storm : stats : latency ] ) and its median is much lower , at ms , compared to the end - to - end latency of the tasks at their peak rates . the jitter is also close to zero in all cases . so storm can comfortably support stats for city and taxi on 7 and 5 vms , respectively . the distribution of vm cpu utilization is also modest for stats . city has a median with a narrow box ( fig . [ fig : storm : stats : city : cpu ] ) , while taxi has a low median with a wide box ( fig . [ fig : storm : stats : taxi : cpu ] ) ; this is due to its bi - modal distribution , with low input rates at night giving lower utilization , and high rates in the day giving higher utilization .
for the pred application , we see that the latency box plot is much wider , and the median end - to - end latency is between ms for city and taxi ( fig .[ fig : storm : pred : latency ] ) .this reflects the variability in task execution times for the weka tasks , dtc and mlr , which was observed in the micro - benchmarks too .the azure blob upload also adds to the absolute increase in the end - to - end time .the jitter however remains close to zero , indicating sustainable performance .the cpu utilization is also higher for pred , reflecting the more complex task logic present in this application relative to stats .in this paper , we have proposed a novel application benchmark for evaluating distributed stream processing systems ( dsps ) for the internet of things ( iot ) domain .fast data platforms like dsps are integral for the rapid decision making needs of iot applications , and our proposed workload helps evaluate their efficacy using common tasks found in iot applications , as well as fully - functional applications for statistical summarization and predictive analytics .these are combined with two real - world data streams from smart transportation and urban monitoring domains of iot .the proposed benchmark has been validated for the highly - popular apache storm dsps , and the performance metrics presented . as future work , we propose to add further depth to some of the iot task categories such as parsing and analytics , and also add two further applications on archiving real - time data and detecting online patterns .we also plan to include more data stream workloads having different temporal distributions and from other iot domains , with a possible generalization of the distributions to allow for synthetic data generation .the benchmark can also be used to evaluate other popular dsps such as apache spark streaming .we acknowledge detailed inputs provided by tarun sharma of nvidia corp . and formerly from iisc in preparing this paper .the experiments on microsoft azure were supported through a grant from azure for research .
|
internet of things ( iot ) is a technology paradigm where millions of sensors monitor , and help inform or manage , physical , environmental and human systems in real - time . the inherent closed - loop responsiveness and decision making of iot applications makes them ideal candidates for using low latency and scalable stream processing platforms . distributed stream processing systems ( dsps ) are becoming essential components of any iot stack , but the efficacy and performance of contemporary dsps have not been rigorously studied for iot data streams and applications . here , we develop a benchmark suite and performance metrics to evaluate dsps for streaming iot applications . the benchmark includes common iot tasks classified across various functional categories and forming micro - benchmarks , and two iot applications for statistical summarization and predictive analytics that leverage various dataflow compositional features of dsps . these are coupled with stream workloads sourced from real iot observations from smart cities . we validate the iot benchmark for the popular apache storm dsps , and present empirical results .
|
an important common challenge facing retailers is to understand customer preferences in the presence of stockouts . when an item is out of stock , some customers will leave , while others will substitute a different product . from the transaction data collected by retailers , it is challenging to determine exactly what the customer 's original intent was , or , because of customers that leave without making a purchase , even how many customers there actually were . the task that we consider here is to infer both the customer arrival rate , including the unobserved customers that left without a purchase , and the substitution model , which describes how customers substitute when their preferred item is out of stock . furthermore , we wish to infer these from sales transaction and stock level data , which are readily available for many retailers . these quantities are a necessary input for inventory management and assortment planning problems . stockouts are a common occurrence in some retail settings , such as bakeries and flash - sale retailers . not properly accounting for the data truncation caused by stockouts can lead to poor stocking decisions . naïvely estimating demand as the number of items sold underestimates the demand of items that stock out , while overestimating the demand of their substitutes . this could lead the retailer to set the stock for the substitute items too high , while leaving the stock of the stocked - out item too low , potentially losing customers and revenue . there are several key features of our model and inference that make it successful in problem settings where prior work in the area has not been . first , prior work has assumed the arrival rate to be constant within each time period . our model allows for arbitrary nonhomogeneous arrival rate functions , which is important for our bakery case study , where sales have strong peaks at lunch time and between classes . second , prior work has required a particular choice model , whereas our model can incorporate whichever choice model is most appropriate . there is a wide variety of choice models ( econometric models describing how a customer chooses one of several alternatives ) , with different properties and applicable in different settings . third , we model multiple customer segments , each with its own substitution model , which can be used to borrow strength across data from multiple stores . fourth , unlike prior work which has used point estimates , our inference is fully bayesian .
because we do full posterior inference , we are able to compute the posterior predictive distributions for decision quantities of interest , such as lost sales due to stock unavailability .this allows us to incorporate the uncertainty in estimation directly into uncertainty in our decision quantities , thus leading to more robust decisions .our contributions are four - fold .first , we develop a bayesian hierarchical model that uses the censoring caused by stockouts and their induced substitutions to gain useful insight from transaction data .our model is flexible and powerful enough to be useful in a wide range of retail settings .second , we show how recent advances in mcmc for topic models can be adapted to our model to provide a sampling procedure that scales to large transaction databases .third , we provide a simulation study which shows that we can recover the true generating values and which demonstrates the scalability of the inference procedure .finally , we make available actual retail transaction data from a bakery and use these data for a case study showing how the model and sampling work in a real setting . in the case study we evaluate the predictive power of the model , and show that our model can make accurate out - of - sample predictions whereas the baseline method can not .we finally show how the methods developed here can be useful for decision making by producing a posterior predictive distribution of the bakery s lost sales due to stock unavailability .we begin by introducing the notation that we use to describe the observed data .we then introduce the nonhomogeneous model for customer arrivals , followed by a discussion of various possible choice models .section [ sec : mixtures ] discusses how multiple customer segments are modeled .finally , section [ sec : likelihood ] introduces the likelihood model and section [ sec : prior ] discusses the prior distributions . we suppose that we have data from a collection of stores . for each store , data come from a number of time periods , throughout each of which time varies from to . for example , in our experiments a time period was one day .we consider a collection of items .we suppose that we have two types of data : purchase times and stock levels .we denote the number of purchases of item in time period at store as .then , we let be the observed purchase times of item in time period at store . for notational convenience ,we let be the collection of all purchase times for that store and time period , and let be the complete set of arrival time data . we denote the known initial stock level as and assume that stocks are not replenished throughout the time period .that is , and equality implies a stockout . as before ,we let and represent respectively the collection of initial stock data for store and time period , and for all stores and all time periods . 
given and , we can compute a stock indicator as a function of time . we define this indicator function to equal one while the item remains in stock and zero after it stocks out . the generative model for these data is that customers arrive at the store according to some arrival process . each customer belongs to a particular segment , and chooses an item to purchase ( or no - purchase ) based on the preferences of his or her segment and the available stock . when the customer purchases item , the arrival time is recorded in . when a customer leaves without making a purchase , for instance because his or her preferred item is out of stock , the arrival time is not recorded . we now present the two main components of this model : the customer arrival process and the choice model . we model the times of customer arrivals using a nonhomogeneous poisson process ( nhpp ) . an nhpp is a generalization of the poisson process that allows for the intensity to be described by a function as opposed to being constant . we assume that the intensity function has been parameterized , with parameters potentially different for each store . the most basic parameterization is , producing a homogeneous poisson process of rate . as another example , we can produce an intensity function that rises to a peak and then decays by letting the intensity follow the derivative of the hill equation . the posterior of will be inferred . to do this we use the standard log - likelihood function for nhpp arrivals over the observation interval . for each store , we simulated time periods , each of length and with the initial stock for each item chosen uniformly between and , independently at random for each item , time period , and store . purchase data were then generated according to the generative model in section [ sec : likelihood ] . figure [ fig : sim1_5 ] shows the posterior means estimated from the mcmc samples across the repeats of the simulation , each with different segment distributions and rate parameters . this figure shows that across the full range of parameter values used in these simulations the posterior mean was close to the true generating value . in the second set of simulations we used the hill rate function with the nonparametric choice model , with 3 items . we used all sets of preference rankings of size and , which for items requires a total of segments . we simulated data for a single store , with the segment proportion set to for preference rankings , , and : the first segment prefers item and will leave with no purchase if item is not available , the second segment prefers item but is willing to substitute to item , and the third segment prefers item but is willing to substitute to item . the segment proportions for the remaining preference rankings were set to zero . with this simulation we study the effect of the number of time periods used in the inference ; the number of periods was taken from , and for each of these values 10 simulations were done . as in figure [ fig : sim1_5 ] , the posterior densities for the segment proportions were concentrated near their true values . figure [ fig : sim3_3 ] shows how the posteriors depended on the number of time periods of available data . the top panel shows that the posterior means for the non - zero segment proportions tended closer to the true value as more data were made available . the bottom panel shows the actual concentration of the posterior , where the interquartile range of the posterior decreased with the number of time periods .
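as a concrete sketch of the arrival - rate machinery described in this section , the snippet below implements a hill - type intensity and the generic nhpp log - likelihood ( the sum of log intensities at the arrival times minus the integrated intensity over the period ) ; the scale parameter ` gamma ` and the function names are our own illustrative assumptions , since the exact parameterization used in the study is not recoverable from this version of the text .

```python
import numpy as np

def hill(t, k, n):
    """hill equation t^n / (k^n + t^n); its derivative rises to a peak and decays."""
    return t**n / (k**n + t**n)

def hill_intensity(t, gamma, k, n):
    """intensity proportional to the derivative of the hill equation (assumed form)."""
    t = np.asarray(t, dtype=float)
    return gamma * n * k**n * t**(n - 1) / (k**n + t**n)**2

def nhpp_loglik(arrival_times, T, gamma, k, n):
    """generic nhpp log-likelihood on [0, T]:
    sum_j log lambda(t_j) - integral_0^T lambda(t) dt.
    for this intensity the integral is gamma * (hill(T) - hill(0)) in closed form."""
    t = np.asarray(arrival_times, dtype=float)
    log_rates = np.log(hill_intensity(t, gamma, k, n))
    integrated = gamma * (hill(T, k, n) - hill(0.0, k, n))
    return log_rates.sum() - integrated

# usage: nhpp_loglik(arrival_times=[0.7, 1.3, 2.1], T=8.0, gamma=50.0, k=1.5, n=3.0)
```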
because we use a stochastic gradient approximation , using more time periods came at no additional computational cost : we used 3 time periods for each gradient approximation regardless of the available number . we now provide the results of the model applied to real transaction data . as part of our case study , we evaluate the predictive power of the model and sample the posterior distribution of lost sales due to stockouts . we obtained one semester of sales data from the bakery at 100 main marketplace , a cafe located at mit , for a collection of cookies : oatmeal , double chocolate , and chocolate chip . the data set included all purchase times for 151 days ; we treated each day as a time period ( 11:00 a.m. to 7:00 p.m. ) , and there were a total of 4084 purchases . stock data were not available , only purchase times , so for the purpose of these experiments we set the initial stock for each time period equal to the number of purchases for the time period ; thus every item was treated as stocked out after its last recorded purchase . this may be a reasonable assumption for these cookies given that they are perishable baked goods which are meant to stock out by the end of the day , but in any case the experiments still provide a useful illustration of the method . the empirical purchase rate for the cookies , shown in figure [ fig : lnch_2 ] , was markedly nonhomogeneous : there is a broad peak at lunch time and two sharp peaks at common class ending times . we modeled the rate function with a combination of the hill function ( [ eq : hill ] ) and a fixed function consisting of only two peaks at the two afternoon peak times , obtained via a spline . the hill function has three parameters , and then a fourth parameter provided the weight of the fixed peaks that were added in . we fit the model separately with the exogenous and nonparametric choice models . figure [ fig : lnch_2 ] shows 20 posterior samples for the model 's predicted average purchase rate over all time periods , which equals , from the fit with the nonparametric choice model . these samples show that the model provides an accurate description of the arrival rate . the variance in the samples provides an indication of the uncertainty in the model , which further motivates the use of the posterior predictive distribution over a point estimate for making predictions . figure [ fig : lnch_3 ] shows the posterior density for the substitution rate , obtained by fitting the model with the exogenous choice model . the substitution rate is very low , indicating that most customers left without a purchase if their preferred cookie was not in stock . the posterior distribution of the item preference vector is given in figure [ fig : lnch_4 ] . chocolate chip cookies were the strong favorite , followed by double chocolate and lastly oatmeal . the next set of experiments establishes that the model has predictive power on real data . we evaluated the predictive power of the model by predicting out - of - sample purchase counts during periods of varying stock availability . we took the first of the time periods ( 120 time periods ) as training data and did posterior inference . the latter 31 time periods were held out as test data , the goal being to use data from the first part of the semester to make predictions about the latter part . we considered each possible level of stock unavailability , _ i.e. _ , each combination of items being in or out of stock .
for each stock level , we found all of the time intervals in the test periods with that stock . the prediction task was , given only the time intervals and the corresponding stock level , to predict the total number of purchases that took place during those time intervals in the test periods . the actual number of purchases is known and thus predictive performance can be evaluated . there were no intervals where only chocolate chip cookies were out of stock , but predictions were made for every other stock combination . this is a meaningful prediction task because good performance requires being able to accurately model both the arrival rate as a function of time and how the actual purchases then depend on the stock . we compare predictive performance to a baseline model that has previously been proposed for this problem by , which is the maximum likelihood model with a homogeneous arrival rate and the mnl choice model . we discuss this and other related works in more detail in section [ sec : relatedwork ] . for the mnl baseline , the parameter is unidentifiable and cannot be estimated . we fit the model for each fixed , and show here the results with the value of that minimized the out - of - sample absolute deviation between the model 's expected number of purchases and the true number of purchases , which was . that is , we show here the results that would have been obtained if we had known _ a priori _ the best value of , and thus show the best possible performance of the baseline . for our model , for each choice model ( nonparametric and exogenous ) posterior samples obtained from the mcmc procedure were used to estimate the posterior predictive distribution for the number of purchases under each stock level . for the maximum likelihood baseline , we used simulation to estimate the distribution of purchase counts conditioned on the point estimate model . these posterior densities , smoothed with a kernel density estimate , are given in figure [ fig : pred_1 ] . despite their very different natures , the predictions made by the exogenous and nonparametric models are quite similar , and both have posterior means close to the true values for all stock levels . the baseline maximum likelihood model with a homogeneous arrival rate and mnl choice performs very poorly . our purpose in inferring the model is to use it to make better stocking decisions . an important starting point is to use the inferred parameters to estimate what the sales would have been had there not been any stockouts . this allows us to know how much revenue is being lost with our current stocking strategy . we estimated posterior densities for the number of purchases of each item across 151 time periods , with full stock . figure [ fig : pred_4 ] compares those densities to the actual number of cookie purchases in the data . for each of the cookies , the actual number of purchases was significantly less than the posterior density for purchases with full stock , indicating that there were substantial lost sales due to stockouts .
with the nonparametric model , the difference between the full - stock posterior mean and the actual number of purchases was 791 oatmeal cookies , 707 double chocolate cookies , and 1535 chocolate chip cookies .figure [ fig : lnch_3 ] shows that customers were generally unwilling to substitute , which would have contributed to the lost sales .the primary work on this problem , estimating demand and substitution from sales transaction data with stockouts and unobserved no - purchases , was done by .they model customer arrivals using a homogeneous poisson process within each time period , meaning the arrival rate is constant throughout each time period .customers then choose an item , or an unobserved no - purchase , according to the mnl choice model .they derive an em algorithm to solve the corresponding maximum likelihood problem . in the prediction task of section [ sec : predictions ] we compared our results with this model as the baseline and found that it was unable to make accurate predictions with our case study data .our model overcomes several limitations of this model , thereby substantially advancing the power of the inference and the settings in which the model can be used .first , figure [ fig : lnch_2 ] shows that the arrivals are significantly nonhomogeneous throughout the day , and modeling the arrival rate as constant throughout the day is likely the reason the baseline model failed the prediction task .the work in proposes extending their model to a nonhomogeneous setting by choosing sufficiently small time periods that the arrival rate can be approximated as piecewise constant .however , with the level of nonhomogeneity seen in figure [ fig : lnch_2 ] it is implausible that accurate estimation could be done for the number of segments ( and thus separate rate parameters ) required to model the arrival rate with a piecewise - constant function .second , our model does not require using the mnl choice model , which avoids the issue with the parameter being unidentifiable .this parameter represents the proportion of arrivals that do not purchase anything even when all items are in stock , and is not something that a retailer would necessarily know .finally , we take a bayesian approach to inference and produce posterior predictive distributions . 
this becomes especially important in this setting where the parameters themselves are of secondary interest to using the model to make predictions about lost revenue and to make decisions about stocking strategies .other work in this area includes , where customer arrivals are modeled with a homogeneous poisson process and purchase probabilities are modeled explicitly for each stock combination , as opposed to using a choice model .their model does not scale well to a large number of items as the likelihood expression includes all stock combinations found in the data .the work of is extended in to incorporate nonparametric choice models , for which maximum likelihood estimation becomes a large - scale concave program that must be solved via a mixed integer program subproblem .there is a large body of work on estimating demand and choice in settings different than that which we consider here , such as discrete time , panel or aggregate sales data , negligible no purchases , and online learning with simultaneous ordering decisions .these models and estimation procedures do not apply to the setting that we consider here , which is retail transaction data with stockouts and unobserved no - purchases ; provide a review of the various threads of research in the larger field of demand and choice estimation .our work fits into a growing body of work in advancing the use of statistics in areas of business .these areas include marketing , market analysis , demand forecasting , and pricing .these works , and ours , address a real need for rigorous statistical methodologies in business , as well as a substantial opportunity for impact .we have developed a bayesian model for inferring primary demand and consumer choice in the presence of stockouts .the model can incorporate a realistic model of the customer arrival rate , and is flexible enough to handle a variety of different choice models .our model is conceptually related to topic models like latent dirichlet allocation .variants of topic models are regularly applied to very large text corpora , with a large body of research on how to effectively infer these models . that research was the source of the stochastic gradient mcmc algorithm that we used , which allows inference from even very large transaction databases .the simulation study showed that when data were actually generated from the model , we were able to recover the true generating values .it further showed that the posterior bias and variance decreased as more data were made available , an improvement that came without any additional computational cost due to the stochastic gradient . in the case study we applied the model and inference to real sales transaction data from a local bakery .the daily purchase rate in the data was clearly nonhomogeneous , with several peak periods .these data clearly demonstrated the importance of modeling nonhomogeneous arrival rates in retail settings . 
in a prediction task that required accurate modeling of both the arrival rate and the choice model , we showed that the model was able to make accurate predictions and significantly outperform the baseline approach . finally , we showed how the model can be used to estimate lost sales due to stockouts . the posterior provided evidence of substantial lost cookie sales . the model and inference procedure we have developed provide a new level of power and flexibility that will aid decision makers in using transaction data to make smarter decisions . we are grateful to the staff at 100 main marketplace at the massachusetts institute of technology who provided data for this study . b. letham , w. sun , and a. sheopuri . latent variable copula inference for bundle pricing from retail transaction data . in _ proceedings of the 31st international conference on machine learning _ , icml14 , 2014 . we consider the complete arrivals , which include both the observed arrivals as well as the unobserved arrivals that left as no - purchase , which we here denote . we define an indicator equal to if the customer at time purchased item , or if this customer left as no - purchase . for store and time period , the joint density of the complete arrivals and their item choices can be written as

\[
\begin{aligned}
& p\left(\text{no arrivals in } (\tilde{t}_{\tilde{m}^{\sigma,l}}, t] \mid \boldsymbol{\tilde{t}^{\sigma,l}}, \boldsymbol{\eta}^{\sigma}\right) \prod_{j=1}^{\tilde{m}^{\sigma,l}} p(\tilde{t}^{\sigma,l}_j \mid \tilde{t}^{\sigma,l}_{<j}, \boldsymbol{\eta}^{\sigma})\, p(\tilde{i}^{\sigma,l}_j \mid \tilde{t}^{\sigma,l}_{<j}, \boldsymbol{\theta}^{\sigma}, \boldsymbol{\phi}, \boldsymbol{\tau}, \boldsymbol{n}) \\
&= \exp(-\lambda(\tilde{t}_{\tilde{m}^{\sigma,l}}, t \mid \boldsymbol{\eta}^{\sigma})) \\
&\quad \times \lambda(\tilde{t}^{\sigma,l}_1 \mid \boldsymbol{\eta}^{\sigma}) \exp(-\lambda(0, \tilde{t}^{\sigma,l}_1 \mid \boldsymbol{\eta}^{\sigma}))\, \pi_{\tilde{i}^{\sigma,l}_1}(\tilde{t}^{\sigma,l}_1) \\
&\quad \times \prod_{j=2}^{\tilde{m}^{\sigma,l}} \lambda(\tilde{t}^{\sigma,l}_j \mid \boldsymbol{\eta}^{\sigma}) \exp(-\lambda(\tilde{t}^{\sigma,l}_{j-1}, \tilde{t}^{\sigma,l}_j \mid \boldsymbol{\eta}^{\sigma}))\, \pi_{\tilde{i}^{\sigma,l}_j}(\tilde{t}^{\sigma,l}_j) \\
&= \exp(-\lambda(0, t \mid \boldsymbol{\eta}^{\sigma})) \prod_{i=0}^{n} \prod_{j : \tilde{i}^{\sigma,l}_j = i} \lambda(\tilde{t}^{\sigma,l}_j \mid \boldsymbol{\eta}^{\sigma})\, \pi_{i}(\tilde{t}^{\sigma,l}_j) \\
&= \exp(-\lambda(0, t \mid \boldsymbol{\eta}^{\sigma})) \prod_{i=0}^{n} \prod_{j=1}^{m^{\sigma,l}_i} \lambda(t^{\sigma,l}_{i,j} \mid \boldsymbol{\eta}^{\sigma})\, \pi_{i}(t^{\sigma,l}_{i,j}) \\
&= \left(\exp(-\tilde{\lambda}_0^{\sigma,l}(0,t)) \prod_{j=1}^{m^{\sigma,l}_0} \tilde{\lambda}_0^{\sigma,l}(t^{\sigma,l}_{0,j})\right) \left(\prod_{i=1}^{n} \exp(-\tilde{\lambda}_i^{\sigma,l}(0,t)) \prod_{j=1}^{m^{\sigma,l}_i} \tilde{\lambda}_i^{\sigma,l}(t^{\sigma,l}_{i,j})\right).
\end{aligned}
\]

we have then that , since the last integrand is exactly the joint density for the arrivals from an nhpp with rate , and so integrates to . we now show how can be expressed analytically in terms of .
for convenience , in this section we suppress in the notation the dependence of the stock on past arrivals and initial stock levels and will write as simply . we consider each of the time intervals where the stock is constant . let the sequence of times demarcate the intervals of constant stock . that is , $[0,t] = \bigcup_{r=1}^{q^{\sigma,l}-1} [q^{\sigma,l}_r , q^{\sigma,l}_{r+1}]$ and the stock is constant within each such interval . the integrated rate then reduces to a sum over these intervals , and with this formula the likelihood function can be computed for any parameterization desired so long as it is integrable .
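the closed - form expression referred to above is elided in this version of the text , so the fragment below only illustrates the idea it describes : because the choice probabilities are constant on each interval of constant stock , the integrated thinned rate for an item can be accumulated interval by interval using any intensity whose integral is available ; all names here are our own assumptions .

```python
def integrated_item_rate(breakpoints, pi_of_interval, integrated_intensity):
    """integral of the thinned rate for one item, accumulated as
    sum over constant-stock intervals [q_r, q_{r+1}] of
    pi_i(interval r) * (Lambda(q_{r+1}) - Lambda(q_r)),
    where Lambda is the integrated arrival intensity and pi_i is the
    (stock-dependent, hence piecewise-constant) choice probability."""
    total = 0.0
    for r in range(len(breakpoints) - 1):
        q_lo, q_hi = breakpoints[r], breakpoints[r + 1]
        total += pi_of_interval(r) * (integrated_intensity(q_hi) - integrated_intensity(q_lo))
    return total

# usage sketch: breakpoints = [0.0, 1.2, 3.4, 8.0] (times where the stock changes),
# integrated_intensity(t) = gamma * hill(t, k, n) for the hill-type rate above,
# pi_of_interval(r) = choice probability of the item given the stock on interval r.
```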
|
when an item goes out of stock , sales transaction data no longer reflect the original customer demand , since some customers leave with no purchase while others substitute alternative products for the one that was out of stock . here we develop a bayesian hierarchical model for inferring the underlying customer arrival rate and choice model from sales transaction data and the corresponding stock levels . the model uses a nonhomogeneous poisson process to allow the arrival rate to vary throughout the day , and allows for a variety of choice models . model parameters are inferred using a stochastic gradient mcmc algorithm that can scale to large transaction databases . we fit the model to data from a local bakery and show that it is able to make accurate out - of - sample predictions , and to provide actionable insight into lost cookie sales .
|
signal reconstruction from noisy data is one of the _ raisons d'être _ of applied statistics . if the signal is a gaussian random field , and the signal and noise covariances are known in advance , wiener filtering is the theoretically optimal method for estimating the signal from noisy data . in this simple case the solution is a linear operator that acts on the data vector and returns the minimum variance , maximum likelihood and maximum a posteriori estimator of the signal given the data . what ought to be done , however , if the signal covariance is not known in advance and must be estimated from the data ? in fact there are applications where covariance estimation is the primary goal and signal reconstruction is secondary . these cases have traditionally been treated separately . for stationary signals , the covariance of the signal is best specified in the fourier basis since this basis diagonalizes the covariance matrix . in these cases covariance estimation becomes power spectrum estimation . one such example is cosmic microwave background ( cmb ) data analysis , which motivated this work . i will return to it in section [ cmb ] . other examples are time series analysis , spatial analysis of censored data such as geological surveys , power spectrum estimation and signal reconstruction for helioseismology , image reconstruction based on a stochastic model of the form of pixel - pixel correlations , etc . the method described here generalizes the results of and should therefore also be useful for the applications discussed there . in this talk i will first review the common structure that underlies these apparently different statistical problems ( section [ review ] ) . i will then summarize the main advances realized by the new method in section [ method ] . the subsequent section contains the results from the application of this new approach to the first all - sky cmb data set . further details and examples can be found in our paper and online materials at the conference www site . the ideas in this paper were developed from a bayesian perspective . there are pros and cons of bayesian estimation . the pros are many : it maximizes the use of all available information and treats measurements , constraints and the model on the same footing as information . the result of a bayesian estimation is a probability density , not just a number , so one automatically obtains uncertainty information about the estimate . however , if bayesian methods are implemented naively , these advantages come at the price of heavy computation , especially for multivariate problems . nevertheless , the results presented in this paper are an example showing that it is possible to overcome these computational challenges and make bayesian techniques work in a highly multivariate ( ) problem . in this section i will review the problems of signal reconstruction and covariance estimation from a bayesian perspective . first , some notation . let us assume that the data were taken according to the model equation where the -vector contains the data samples , the ( ) matrix is the observation matrix , the -vector is the ( discretized ) signal , the -vector represents any contaminants ( `` foregrounds '' ) one has to contend with , and the -vector is the instrumental noise . i model the signal stochastically ( vs. a deterministic functional form ) and `` infer '' its covariance properties from the data .
in particular , the signal is modeled through its covariance properties , encoded in , the signal covariance matrix . then i can write the bayesian posterior as where is the noise covariance matrix . i will now discuss the various terms in eq . [ bayes ] . the likelihood specifies how the data is related to the quantities in the model . given the model equation , eq . [ model ] specifies the likelihood as a multivariate gaussian density . the other terms in eq . [ bayes ] specify information about the components of the model . the term contains information about the covariance of . if is a gaussian random field with zero mean ( examples from cosmology are the cmb or other probes of the density fluctuations of matter on cosmological scales ) . note that it is not assumed that is known . partial knowledge ( or ignorance ) about is quantified in terms of the prior . for a stationary field this simply represents the fact that i parameterize the covariance matrix in terms of power spectrum coefficients . eq . [ bayes ] also assumes that the signal , noise and the contaminants are stochastically independent of each other . further , the equations as written are conditioned on perfect knowledge of the noise covariance . ( whether the signal and foreground components can be usefully obtained from the data is determined by the structure of the observation matrix . ) lastly , encodes the knowledge or ignorance about foregrounds . note that from a bayesian perspective all that is required is that accurately represents knowledge about . therefore assuming a gaussian form for does not assume that actually has gaussian statistics . in particular the mode of the gaussian corresponds to the most probable ( a priori ) foreground model and the covariance to the uncertainty in the model . the ability to specify uncertainties in the foregrounds ( which will then be taken into account when the method is applied ) is a key feature of this approach , which guards against biases from including incorrect foreground templates without the ability to account for the uncertainty in these templates . having specified the forms of the various terms on the right hand side of eq . [ bayes ] , the task is to explore the joint posterior density . however , traditionally the problem is treated in three different limits . if , as an expression of prior ignorance , i take and then all the information is in the likelihood . in this case the best one can do , if is gaussian , is to summarize what is known about in terms of the maximum likelihood estimate and quote the associated noise covariance matrix . in the cmb literature the process of obtaining and from the data is known as `` map making . '' if , on the other hand , the signal covariance is perfectly known and foregrounds are neglected , then the joint posterior becomes where this posterior for peaks at , the well - known wiener filter reconstruction of , so this is known as `` wiener filtering . ''
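as an aside , the wiener filter solution referred to above has a compact linear - algebra form ; the minimal numerical sketch below assumes direct , noisy observations through a known observation matrix and known signal and noise covariances , with all variable names chosen by us for illustration .

```python
import numpy as np

def wiener_filter(d, A, S, N):
    """maximum a posteriori / minimum variance estimate of a gaussian signal s
    from data d = A s + n, with signal covariance S and noise covariance N:
    s_hat = (S^-1 + A^T N^-1 A)^-1 A^T N^-1 d."""
    Sinv = np.linalg.inv(S)
    Ninv = np.linalg.inv(N)
    lhs = Sinv + A.T @ Ninv @ A
    rhs = A.T @ Ninv @ d
    return np.linalg.solve(lhs, rhs)

# tiny usage example with a trivial observation matrix
rng = np.random.default_rng(0)
S = np.diag([4.0, 1.0, 0.25])          # assumed signal covariance
N = 0.5 * np.eye(3)                    # assumed noise covariance
A = np.eye(3)                          # direct observation of each signal element
s_true = rng.multivariate_normal(np.zeros(3), S)
d = A @ s_true + rng.multivariate_normal(np.zeros(3), N)
s_hat = wiener_filter(d, A, S, N)
```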
in the third limit , `` power spectrum estimation , '' one does not know but has some information about how it is parameterized , namely that in the fourier basis is diagonal with the diagonal elements equal to the power spectrum coefficients . if we ignore foregrounds again and set we can integrate out ( `` marginalize over '' ) and obtain the usual starting point for maximum likelihood power spectrum estimation . the density is considered as a multivariate function of all the power spectrum coefficients up to some band limit . it represents all the information about contained in the data . one can again summarize what is known about by quoting the set of power spectrum estimates for which is maximum ( equivalent to the maximum likelihood estimates ) and include a summary of the width of the marginal distribution of for each power spectrum coefficient . however , in this case for any larger than a few thousand this procedure is computationally prohibitive . since the determinant in eq . [ pofsgivend ] depends on , it needs to be evaluated if the shape of the likelihood is to be explored . determinant evaluation scales as . as a result , to evaluate eq . [ pofsgivend ] just once for a million pixel map would take several years , even if one achieved perfect parallelization across thousands of processors on the most powerful supercomputing platforms in the world . to find its maximum in a parameterization of 1000 power spectrum coefficients and compute marginalized confidence intervals for each by integrating out all others is a lost cause . the maximum likelihood techniques that are currently described in the literature avoid the determinant calculation in eq . [ pofsgivend ] by finding the zero of the first derivative of using an approximate newton - raphson iteration scheme . however , for realistic data , the computational complexity is not reduced because the first derivative contains traces of matrix products that also require of order operations . in these treatments the error bars on the power spectrum coefficients are approximated by the second derivative of the likelihood at the peak , even though the likelihood of is non - gaussian . this second derivative is again hard to compute , requiring of order operations . even these expensive methods do not provide a way of accurately summarizing and publishing the `` data product . '' there are various approximate techniques for doing this in the literature , but it is not well understood how good these approximations are away from the peak of the likelihood . how does one overcome these computational challenges ? the answer i propose is to _ sample _ from the full joint density . this may seem even more challenging , since this is a function of millions of arguments and general techniques for generating samples from complicated multivariate densities are very computationally intensive . however , the special structure of the gaussian priors in eq .
[ bayes ] allows exact sampling from the conditional densities of . exact sampling is made possible by solving systems of equations using the preconditioned conjugate gradient method . this means the _ gibbs sampler _ can be used to construct a markov chain which will converge to sampling from . the gibbs sampler is an iterative scheme for generating samples from a joint posterior density by iterating over the components of the density ( such as , , and ) and sampling each of them in turn from their conditional distributions while keeping the other components fixed . given a set of monte carlo samples from the joint posterior , any desired feature of the posterior density can be computed with accuracy only limited by the sample size . after having obtained a sample from the joint posterior , it is trivial to generate samples from the marginal posteriors or . integration over a sampled representation of a function just corresponds to ignoring the dimensions that are being integrated over ! for the problem at hand things are even better than this , since the conditional density has a very simple analytical form . as a result , one can compute an analytical approximation to using the monte carlo samples ; this is known as the blackwell - rao estimator of , which is guaranteed to have lower variance than a binned estimator . in fact one can show that for perfect data ( complete and without noise ) this approximation is exact for a monte carlo sample of size 1 ! for realistic data , the approximation converges to the true power spectrum posterior given enough samples . my collaborators and i call the approach and the set of tools we have developed to implement this approach the `` magic '' method , since magic allows global inference from correlated data . we give a detailed description of the technique in the context of cmb covariance analysis in . figure [ performance ] shows the performance of magic compared to power spectrum estimation techniques ( which do not include the signal reconstruction and foreground separation features of magic ) . [ figure [ performance ] caption : for from the top to the bottom on the right side of the figure . brute force methods require and approximate methods require computational time . for the wmap data pixels . ] the main advantages of the magic method are the following :
1 . massive speed - up compared to brute force methods . for an ( unrealistic ) pre - factor of 1 a single operation would take seconds on a 1 gflop computer . an unoptimized implementation running in the background on a desktop athlonxp1800 + cpu currently requires less than seconds per sample .
2 . massive reduction in memory use : since we only need to compute matrix - vector products ( not matrix - matrix products , matrix inverses or determinants ) only the parametrizations of the covariance matrices need to be stored ( e.g. , the noise power spectrum for and the signal power spectrum for ) . this reduces the memory requirements from order to at most order , which is usually many orders of magnitude less .
allows modeling realistic observational strategies and instruments .straightforward parallelization ( run several magic codes on separate processors to generate several times the number of samples in the same time ) .allows treating the statistical inference problem globally , that is it keeps the full set of statistical dependencies in the joint posterior given the data .generalizes wiener filter signal reconstruction to situations where the signal covariance is not known a priori but automatically discovered from the data at the same time as the actual signal is reconstructed .allows computing marginal credible intervals , either for individual power spectrum estimates or for combinations of any set of dimensions in the very high dimensional parameter space .allows incorporating uncertainties ( e.g. about the foregrounds ) in the analysis in such a way that they are propagated correctly through to the results .makes it possible to build in physical constraints in a straightforward way .10 . generates an unbiased functional approximation to , as shown in eq .it has the advantage of being a controlled and improvable approximation and removes the need for parametric fitting functions such as the offset log - normal or hybrid approximations .11 . generates a _ sampled _ representation of the joint posterior eq .[ bayes ] , which simplifies further statistical analyses .since magic is a markov chain method , one also has to discuss the issue of burn - in and correlations of subsequent steps in the chain .steps in the power spectrum coefficients are proportional to the width of the perfect data posterior . in other words ,the number of steps it takes to generate two uncorrelated power spectrum samples is proportional to where is the rms signal to noise ratio for the power spectrum coefficient .conveniently , the samples are nearly uncorrelated over the range in where the data is informative . in numerical experiments with the wmap data it took about 15 - 20 steps for the chain to burn - in ( for the range in where or greater ) from a wildly wrong initial guess of the power spectrum ( ) . for the cobe - dmr data .this is a generalized wiener filter which does not require knowing the signal covariance a priori .b : one sample drawn from the conditional posterior .the posterior mean signal map , shown in panel a , has been removed .c : the sample pure signal sky at the same iteration .this is the pixel - by - pixel sum of the maps in panels a and b. d : the wmap data smoothed to 5 degrees ( less than a , more than c ) .the corresponding features in parts a and d are clearly visible.,width=234 ]in the online materials for this talk ( see footnote 1 ) i present the results of applying the magic method to a synthetic data set which covers an unsymmetrically shaped part on the celestial sphere .i used magic to reconstruct the signal on the full sky and to make movies of the gibbs sampler iterations .this is an example where the signal is automatically discovered in the data by the algorithm , without specification of the signal covariance .figures 2 and 3 show the results of analyzing the cobe - dmr data , one of the most analyzed astronomical data sets .this allowed us to perform consistency checks between the magic method , other methods and the recent results from the wilkinson microwave anisotropy probe ( wmap ) . 
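the scheme described above can be illustrated on a toy problem . the sketch below is not the actual magic implementation : it runs the gibbs sampler on a synthetic data set in which each `` multipole '' carries ( 2l+1 ) independent gaussian harmonic coefficients with variance c_l and the noise variance per coefficient is known . because both covariances are diagonal here , the conditional draws are direct and no conjugate - gradient solver is needed ( in the realistic case the signal - plus - noise covariance is not diagonal in any single basis , which is exactly where the preconditioned conjugate gradient step enters ) . the input spectrum , the noise level , the jeffreys prior on c_l and the chain length are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "sky": for each multipole ell there are (2*ell + 1) harmonic
# coefficients drawn from N(0, C_ell); the data add white noise of known variance.
ells = np.arange(2, 65)
n_modes = 2 * ells + 1
C_true = 1000.0 / (ells * (ells + 1.0))          # hypothetical input spectrum
N_noise = np.full(ells.size, 0.5)                # known noise variance per coefficient

a_true = [rng.normal(0.0, np.sqrt(c), n) for c, n in zip(C_true, n_modes)]
data = [a + rng.normal(0.0, np.sqrt(nn), a.size) for a, nn in zip(a_true, N_noise)]

def sample_signal(C, data, N_noise):
    """Draw a | C, d: Wiener-filter mean plus a fluctuation with the conditional variance."""
    draws = []
    for c, nn, d in zip(C, N_noise, data):
        var = c * nn / (c + nn)                  # (1/C + 1/N)^(-1)
        mean = (c / (c + nn)) * d                # Wiener filter of the data
        draws.append(mean + rng.normal(0.0, np.sqrt(var), d.size))
    return draws

def sample_spectrum(a, n_modes):
    """Draw C_ell | a: inverse-gamma conditional (Jeffreys prior 1/C_ell assumed)."""
    sigma = np.array([np.sum(x ** 2) for x in a])
    return sigma / rng.chisquare(n_modes)

C = np.ones(ells.size)                           # deliberately poor starting guess
chain = []
for step in range(2000):                         # alternate the two conditional draws
    a = sample_signal(C, data, N_noise)
    C = sample_spectrum(a, n_modes)
    chain.append(C)

chain = np.array(chain[100:])                    # discard burn-in
print("posterior mean C_ell (first 5):", np.round(chain.mean(axis=0)[:5], 2))
print("input C_ell          (first 5):", np.round(C_true[:5], 2))
```

the two helper functions correspond one - to - one to the two conditional densities of the sampler : the signal draw is a wiener - filter mean plus a fluctuation term , and the spectrum draw is an inverse - gamma ( scaled inverse chi - square ) distribution built from the power of the current signal sample .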
from the cobe - dmr data .at each the fluctuations in the at all other were integrated out .the axis ranges are the same for all panels.,width=302 ] i am also very interested in evaluating claims that the wmap data favors theories which predict a lack of large scale fluctuation power in the cmb .this claim , if true , would have far - reaching consequences for our understanding of the universe . since cosmologists only have one sky to study , we have to be very careful to account for our limited ability to know the ensemble averaged power spectrum on large scales .the wmap team estimated the fluctuation power on large scales using several techniques and consistently found it to be low .however , in all of these techniques , the variance of the estimates was computed in an approximate way ( e.g. in terms of the curvature at the peak ) and relies on theory for the assessment of statistical significance . using magicone can easily integrate over the posterior density of the power spectrum given the data .therefore it is easy to compute the probability for the power spectrum coefficients in any given -range to be smaller than any given value . using the magic method it was straightforward to generate a preliminary sample of the power spectrum coefficients from the wmap posterior using only the w1 channel , one of the cleanest channels in the wmap data , in terms of systematic error estimates . for the cleaned w1 data and masking regions of galactic emission ( mask _ kp0 _ in the wmap data release ) the quadrupole and octopole power is not obviously discrepant from theoretical expectations . choosing a more aggressive mask could change this since that reduces the sampling variance .one should bear in mind that the power spectrum likelihood has infinite variance for even for perfect all - sky data , unless a prior is put on s value .therefore , in an exact assessment of the quadrupole issue claims of a significant discrepancy ought to be based on the actual shapes of posterior density , not a chi - square test ( compare the detailed discussion of cosmic variance in ) .i will address the issue of low power in the low cosmological multipoles in a future publication .of course , if desired , additional prior information about our universe can be added to the analysis .for example instead of viewing the power spectrum as the quantity of interest , its shape could be parameterized as a function of the cosmological parameters which span the space of cosmological theories .then instead of sampling from the power spectrum coefficients given the signal , one would run a short metropolis - hastings markov chain at each gibbs iteration to obtain a sample from the space of cosmological parameters given the data .these parameter samples , in turn define a density over the space of power spectra with considerably tighter error bars .the result is the non - linearly optimal filter for reconstructing the mean of the power spectrum incorporating physical information about the origin of the cmb anisotropies .another important direction is the analysis of image distortions .the treatment as detailed so far does not allow for the cmb to be lensed gravitationally by the mass distribution through which it streams on its way to us .this distortion itself contains very valuable cosmological information . 
extending the formalism to account for lensing of the cmb and to estimate the statistical properties of the lensing masses from the lensed cmb would be an important extension of this approach .
i thank my students and collaborators arun lakshminarayanan , david larson , and ian o'dwyer , as well as tom loredo for his suggestions . this work has been partially supported by the national computational science alliance under grant number ast020003n and the university of illinois at urbana - champaign .
n. wiener , _ extrapolation , interpolation , and smoothing of stationary time series with engineering applications _ , mit press , cambridge , ma ( 1949 ) .
g. b. rybicki and w. h. press , `` interpolation , realization , and reconstruction of noisy , irregularly sampled data , '' astrophys . j. 398 , 169 ( 1992 ) .
m. tegmark , phys . rev . d 55 , 5895 ( 1997 ) .
j. r. bond , a. h. jaffe , and l. knox , phys . rev . d 57 , 2117 ( 1998 ) .
j. r. bond , a. h. jaffe , and l. knox , astrophys . j. 533 , 19 ( 2000 ) .
l. verde _ et al ._ , astrophys . j. suppl . 148 , 195 ( 2003 ) .
a. lewis , astro - ph/0310186 .
w. h. press _ et al ._ , _ numerical recipes _ , cambridge university press , cambridge , uk ( 1992 ) .
m. a. tanner , _ tools for statistical inference : methods for the exploration of posterior distributions and likelihood functions _ , springer verlag , heidelberg , germany ( 1996 ) .
c. l. bennett _ et al ._ , astrophys . j. 464 , l1 ( 1996 ) .
c. l. bennett _ et al ._ , astrophys . j. suppl . 148 , 1 ( 2003 ) .
|
in this talk i describe magic , an efficient approach to covariance estimation and signal reconstruction for gaussian random fields ( magic allows global inference of covariance ) . it solves a long - standing problem in the field of cosmic microwave background ( cmb ) data analysis but is in fact a general technique that can be applied to noisy , contaminated and incomplete or censored measurements of either spatial or temporal gaussian random fields . in this talk i will phrase the method in a way that emphasizes its general structure and applicability but i comment on applications in the cmb context . the method allows the exploration of the full non - gaussian joint posterior density of the signal and parameters in the covariance matrix ( such as the power spectrum ) given the data . it generalizes the familiar wiener filter in that it automatically discovers signal correlations in the data as long as a noise model is specified and priors encode what is known about potential contaminants . the key methodological difference is that instead of attempting to evaluate the likelihood ( or posterior density ) or its derivatives , this method generates an asymptotically exact monte carlo sample from it . i present example applications to power spectrum estimation and signal reconstruction from measurements of the cmb . for these applications the method achieves speed - ups of many orders of magnitude compared to likelihood maximization techniques , while offering greater flexibility in modeling and a full characterization of the uncertainty in the estimates .
|
synchronization is one of the most commonly occurring phenomenon in various physical and biological networks and it has been a topic of active research in recent years . for instance , synchronization is one of the most crucial dynamical aspects in social networks , neuronal networks , cardiac pacemakers , circadian rhythms , ecological systems , power grids , etc .the emergence of collective oscillations in these kinds of coupled systems can be described by the kuramoto model , which is a mathematically tractable model . in this model ,when the oscillators are uncoupled they oscillate at their own frequencies .when the oscillators are coupled by the sine of their phase differences , the frequencies of the oscillators are modified leading to synchronization depending upon the coupling strength .most of the previous studies in this area have been devoted to the case where the coupling strength between the different oscillators in the network is fixed . however , considering the self - organizational nature of most complex systems ( like that of social systems and neuronal networks ) , inclusion of an adaptive coupling among the oscillators appears to be more realistic .the emergence of synchronization in complex networks , for example social networks involving opinion formation or brain , is due to the fact that coupling between different entities in the network varies dynamically .the coupling usually varies due to the interplay between the dynamical states and the network topology .such networks are called adaptive networks .the presence of adaptive coupling is one of the most crucial factors that control the dynamics and streamline various processess in complex networks .for instance in the brain , where synchronization is not always desirable , the presence of adaptive coupling can lead to desynchronization by the alteration of coupling naturally when the system senses the occurrence of such undesirable synchronization .very recently aoki and aoyagi have found that systems with adaptive coupling give rise to three basic synchronization states due to the self - organizational nature of the dynamics .they have discovered the existence of a two - cluster state , a coherent state ( in which the oscillators are having a fixed phase relationship with each other ) , and a chaotic state in which the coupling weights and the relative phases between the oscillators are chaotically shuffled .this implies that the existence of adaptive coupling accounts for the feature rich , yet self - organized behaviour of real - world complex networks . what is more interesting in the context of social and biological networks is their ability to adapt themselves and learn from their own dynamics , given that there is no one to administrate their activity and control them .for example , opinion formation in social networks is a complex phenomenon which not only depends on the number of conformists and contrarians but also on how the opinion evolves in time .this opinion evolution is essentially the cause of self - organization which gives rise to social clusters / groups and also synchronization in the network .this kind of adaptive evolution of opinion not only plays a crucial role in the social network but is also found to influence the dynamics of autocatalytic interactions ( where the chemical reaction rates are dynamic and depend on the reaction products ) , biological networks , and so on . 
in this paper , motivated by the above mentioned facts , we explore the role played by adaptive coupling on the synchronization in coupled oscillator systems .we find that adaptive coupling can induce the occurrence of multi - stable states in a system of coupled oscillators .we also find that the weight of the coupling strength and the plasticity of the coupling play a crucial role in controlling the occurrence of multi - stable states .further , we also find that the effect of asymmetry on the multi - stable states is such that it can drive the system from a multi - stable via phase desynchronized to a multi - stable state . here by a desynchronized state we essentially mean a coherent state as identified by aoki and aoyagi which corresponds to a fixed phase relationship between different oscillators as a function of time .however , in this state the oscillators are also distributed over the entire range and so the corresponding state is also called as a phase desynchronized state in the literature . in the following we use this later terminology with the understanding that it can also be described as coherent state .the plan of the paper is as follows : in the following sec . [ model ] we introduce the model of identical coupled phase oscillators which we take into consideration .we also give details about the numerical simulation we use . in sec .[ multi ] we demonstrate the occurrence of multi - stable states in the model we consider .we first ( sec . [ amulti ] ) show the absence of multi - stable states in the system without coupling plasticity .we then introduce coupling plasticity ( sec .[ bmulti ] ) and show how two different synchronization states coexist in the system . in sec .[ cmulti ] we demonstrate how the coupling time scale influences the occurrence of multi - stable states . in sec .[ phase lag ] we introduce a phase asymmetry in the system and demonstrate how it takes the system from a multi - stable state through a desynchronized state to a multi - stable state . in sec .[ ana ] , we provide an analytical basis for our numerical results . in sec .[ nio ] a brief discussion on the phase evolution in the case of nonidentical oscillators is given .finally we present the summary of our findings in sec .let us consider a system of coupled phase oscillators described by the following kuramoto - type evolution equations where is the phase of the oscillator and is a -periodic coupling function . here is the natural frequency of the oscillators and is the coupling strength between the oscillators .if is positive , it denotes attractive interaction between the oscillators and if is negative , it implies repulsive interaction . assuming that a given population of coupled oscillators in real life systems have both kinds of couplings ( attractive and repulsive or equivalently contrarian and conformist ), we replace by , where represents the strength of the coupling between the oscillators in the system and the are the coupling weights between the oscillators . 
can also be assumed to have spin glass - type coupling .however in this type of coupling there will be no room for characterizing each oscillator in the system by an intrinsic property .nevertheless , models with such spin glass type interactions have been extensively studied by various authors .we consider the coupling weights to be dynamic , represented by the following dynamical equation where is a -periodic function and can be called as the plasticity function which determines how the coupling weight depends on the relative timing of the oscillators . is the plasticity parameter .the evolution of the coupling is slower than the evolution of the oscillators when and the time scale of the coupling dynamics is given by .here we choose the simplest possible periodic functions for and as and .such a coupling implies the fastest learning configuration in the system , that is , when the oscillators are in phase , the coupling coefficient grows fastest and when the oscillators are out - of - phase , the coupling coefficient decays fastest ( hebbian - like ) . with this choice ,the model equations can be rewritten as synchronization in the system can be quantified using the kuramoto order parameter , here is the coherence parameter that represents the strength of synchronization in the system .a complete synchronization in the system corresponds to , while complete phase desynchronization ( that is , every oscillator in the system has a corresponding oscillator that is in anti - phase synchronization with it ) corresponds to .when there is a partial synchronization in the system , takes a value between and and quantifies the strength of synchronization ; that is , the value of is directly proportional to the number of oscillators that are in synchrony . in our simulations ,we consider oscillators , ( except in fig .[ fig7 ] where we take n=1000 ) .we use a fourth order runge kutta routine to numerically simulate the system and we fix the time step to be 0.01 .we have discarded the first time steps and continued our simulations for another time steps , though in some figures ( figs .[ fig1](b , e ) , [ fig2](b , e ) , [ figp ] , [ fig4](a , b ) , [ fig6](a , b , c ) , [ fig11 ] , [ fignon ] and [ fig8 ] ) we also indicate the effect of transients . the results are shown in the various figures in the text for a small window of time whenever transients are not shown ( which we label to start with 0 in the figures for convenience ) towards the end of the total simulation time . in all our numerical simulations , the initial values of coupling weights are uniformly distributed in [ -1,1 ] by imposing the condition so that whenever the value of goes outside the interval [ -1,1 ] as it evolves , it is immediately brought back to the bound value in the interval . in the following sectionlet us explore the changes in the synchronization dynamics of system ( [ cho03 ] ) as affected by the plasticity in the coupling .we find that multi - stable synchronization states are generated by the system due to the presence of coupling plasticity . in order to demonstrate the same ,first let us consider system ( [ cho03 ] ) without coupling plasticity . when there is no coupling plasticity in the system , eq .( [ cho02 ] ) becomes where are the integration constants which are nothing but the initial conditions and are uniformly distributed in [ -1,1 ] . with this coupling function ,let us consider the dynamics of system ( [ cho03 ] ) , to begin with . 
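as a concrete illustration of the numerical procedure described above , the sketch below integrates the adaptively coupled phase model with a fourth - order runge - kutta scheme , time step 0.01 , random initial phases , coupling weights initially uniform in [ -1,1 ] , and the hard bound that pushes any weight leaving [ -1,1 ] back to the boundary . since the explicit symbols were lost in extraction , the functional forms and parameter values used here are assumptions consistent with the verbal description : sin for the coupling function , cos for the hebbian - like plasticity function , a 1/n normalization of the coupling sum , and sample values for the coupling strength , the plasticity parameter and the common frequency ; the run is also much shorter than in the paper .

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50          # number of oscillators (smaller than in the text, for speed)
eps = 1.0       # overall coupling strength            (assumed value)
eta = 0.1       # plasticity parameter; weights evolve on the slower scale 1/eta
omega = 1.0     # common natural frequency of the identical oscillators
dt = 0.01       # integration step quoted in the text

def derivs(theta, k):
    dth = theta[None, :] - theta[:, None]         # dth[i, j] = theta_j - theta_i
    dtheta = omega + (eps / N) * np.sum(k * np.sin(dth), axis=1)
    dk = eta * np.cos(dth)                        # Hebbian-like plasticity function
    return dtheta, dk

def rk4_step(theta, k):
    a1, b1 = derivs(theta, k)
    a2, b2 = derivs(theta + 0.5 * dt * a1, k + 0.5 * dt * b1)
    a3, b3 = derivs(theta + 0.5 * dt * a2, k + 0.5 * dt * b2)
    a4, b4 = derivs(theta + dt * a3, k + dt * b3)
    theta = theta + dt * (a1 + 2 * a2 + 2 * a3 + a4) / 6.0
    k = k + dt * (b1 + 2 * b2 + 2 * b3 + b4) / 6.0
    return theta % (2 * np.pi), np.clip(k, -1.0, 1.0)   # bound the weights to [-1, 1]

theta = rng.uniform(0.0, 2.0 * np.pi, N)          # random initial phases
k = rng.uniform(-1.0, 1.0, (N, N))                # initial weights uniform in [-1, 1]

for step in range(50_000):                        # 500 time units; treat the start as transient
    theta, k = rk4_step(theta, k)

r1 = np.abs(np.mean(np.exp(1j * theta)))          # Kuramoto order parameter
print(f"r1 = {r1:.3f}   <k> = {k.mean():.3f}")
```

replacing the random initial phases with a uniform grid over [ 0 , 2*pi ) is enough to probe the dependence on initial conditions discussed next .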
in fig .[ fig1 ] , the left column ( ( a)-(c ) ) and the right column ( ( d)-(f ) ) panels are plotted for two different initial conditions , namely uniform and random distributions for the initial oscillator phases , respectively . for the uniform distribution of initial phases, we have set the values as uniformly distributed between and among all the n oscillators .as an example , we choose the initial phase of the n=1 oscillator as and then for the subsequent oscillators the phase is increased in units of so that for the oscillator it is .( we have also checked that the results are invariant for another set of initial conditions corresponding to a uniform distribution of intial phases between and ) . on the other hand for the random state of initial phaseswe have used a random number generator which generates random numbers with normal distribution between 0 to .( again we have confirmed similar results for another set of random initial conditions in the range [ -1,+1 ] ) .we have plotted in fig .[ fig1 ] the phase space evolution of the oscillators in panels ( a ) and ( d ) which shows every oscillator has a corresponding oscillator that is in anti - phase relationship with it , and hence this system is in a phase desynchronized state , the time evolution of the order parameters , , and the average of coupling weights ( defined below ) in panels ( b ) and ( e ) , and the time evolution of the oscillator phases in panels ( c ) and ( f ) . here is the order parameter of the whole system , while is the order parameter of the two cluster state ( where the oscillators are grouped into two clusters ) given as .when , the oscillators converge to a state of two synchronized clusters that are in antiphase relationship with each other .that is there are two groups of oscillators with phases and . in this state , the order parameter is given as where and and are the number of oscillators phase locked in and , respectively .if when this means that the two clusters have equal number of oscillators in them ( ) . on the other hand if when then the clusters contain different number of oscillators . in panels ( b ) and( e ) the solid , dashed and dot - dashed lines represent , and , respectively . here is the average of the coupling weights given as we have in addition plotted the average rate of change of the coupling weights ( over all the connections ) , and the auto - correlation function of the oscillator phases , as insets in panels ( b ) and ( e ) . in figure[ fig1 ] , it is obvious that the oscillators remain in the desynchronized state irrespective of the initial conditions , in the absence of coupling plasticity .the phase portraits in panels ( a ) and ( d ) show that the oscillators are uniformly distributed on a unit circle , implying desynchronization .this is also confirmed by the time evolution of the phases shown in panels ( c ) and ( f ) .the order parameters and also take values close to 0 in this case . and remain zero , since all the oscillators are uniformly distributed in [ -1,1 ] and the correlation remains at 1 indicating that this desynchronized state is a steady state ; that is the phase relation between any two oscillators in the system is fixed for all times .the transient behaviour of the order parameters , and the coupling strength for the case of no coupling plasticity is also included in panels ( b ) and ( e ) . 
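the diagnostics used in these figures can be written compactly . in the sketch below the first order parameter is the standard kuramoto one ; for the two - cluster order parameter and for the phase autocorrelation the exact expressions were lost in extraction , so the second - harmonic ( daido - type ) order parameter and one common autocorrelation choice are used as stand - ins : both equal 1 precisely in the situations the text describes ( two anti - phase clusters , and frozen relative phases , respectively ) .

```python
import numpy as np

def order_parameters(theta):
    """r1: standard Kuramoto order parameter; r2: its second harmonic, which is
    close to 1 whenever the population splits into two anti-phase clusters."""
    return np.abs(np.mean(np.exp(1j * theta))), np.abs(np.mean(np.exp(2j * theta)))

def coupling_diagnostics(k_now, k_prev, dt):
    """Average coupling weight and average rate of change of the weights."""
    return k_now.mean(), np.abs(k_now - k_prev).mean() / dt

def phase_autocorrelation(theta_t, theta_t0):
    """Equals 1 exactly when every oscillator has advanced by the same amount
    between t0 and t, i.e. when all relative phases stay frozen."""
    return np.abs(np.mean(np.exp(1j * (theta_t - theta_t0))))

# quick check on a frozen two-cluster configuration that rotates rigidly
theta0 = np.concatenate([np.zeros(70), np.full(30, np.pi)])   # 70/30 split
theta1 = theta0 + 0.3                                         # same pattern, rotated
print(order_parameters(theta1))              # r1 = |0.7 - 0.3| = 0.4, r2 = 1.0
print(phase_autocorrelation(theta1, theta0))  # 1.0
```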
in the absence of coupling plasticity the system takes much longer period of time to reach the asymptotic regime in the case of random distribution of initial phases .now let us introduce plasticity in the adaptive coupling in the system . in this case, the dynamics of the coupling weights depends upon the phase relation between the oscillators , and hence the initial phases of the oscillators strongly affect the synchronization states in the system .we have plotted fig .[ fig2 ] exactly in the same manner as fig .[ fig1 ] except that we have now introduced the coupling plasticity in the system with the strength .here we see that for a uniformly distributed initial oscillator phases , the left panels ( a , b , c ) resemble that of fig .[ fig1 ] , that is , the oscillators remain phase desynchronized .however in the right panels ( d , e , f ) , when the initial phases are distributed randomly , we clearly see the emergence of a two - cluster synchronization state where the clusters are in antiphase relationship with each other , which is confirmed by .since , even though the two clusters are in antiphase relationship , this implies that . in short , the two clusters do not have an equal number of oscillators in them .however the autocorrelation remains constant at a value close to 1 for both the initial conditions indicating that the phase relationship between the oscillators is asymptotically stable irrespective of whether the oscillators are synchronized or desynchronized .we call the existence of a desynchronized state and a two - cluster state as a multi - stable regime . in figure [ fig2](e ) , we have also shown the transient behaviour of order parameters , and the coupling strength in the presence of adaptive coupling . by comparing panel ( e ) of figures [ fig1 ] and [ fig2 ] ,it is clear that the presence of adaptive coupling enhances the system to reach the stable states in a much shorter period of time in the case of random initial conditions .also in fig .[ figp ] , we have plotted the time evolution of the order parameters and as well as the average coupling ( a ) in the absence ( ) and ( b ) in the presence ( ) of dynamic coupling for a slightly perturbed set of uniform initial conditions in the range to ( instead of to discussed earlier ) .we find that the system continues to be in a phase desynchronized state asymptotically , showing the stable nature of the underlying dynamics .further , we have also included a gaussian white noise in the model eq .( [ cho03 ] ) to study the stable nature of the multistable state in the presence of external perturbations . in this casethe model eq .( [ cho03 ] ) becomes where the variables correspond to independent white noise that satisfies =0 , and . hered is the noise strength . for our studywe choose d=0.0001 . in fig .[ fign ] , we have briefly presented the order parameters , and the average coupling strength as a function of time in the presence of the above external noise for both the cases of absence and presence of adaptive coupling . 
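for the noisy version of the model a stochastic integrator is more appropriate than a deterministic runge - kutta step . the sketch below is an euler - maruyama step for the phase equations with gaussian white noise of strength d ; the normalization of the noise correlator ( taken here as 2d delta_ij delta(t-t') ) and the parameter values are assumptions , the drift terms mirror the earlier sketch , and the weight equation is left deterministic , as in the text .

```python
import numpy as np

rng = np.random.default_rng(3)

def em_step(theta, k, dt, eps, eta, omega, D):
    """One Euler-Maruyama step: deterministic drift plus sqrt(2*D*dt) Gaussian kicks
    on the phases; the coupling weights are updated deterministically and clipped."""
    dth = theta[None, :] - theta[:, None]
    drift = omega + (eps / theta.size) * np.sum(k * np.sin(dth), axis=1)
    theta = theta + dt * drift + np.sqrt(2.0 * D * dt) * rng.standard_normal(theta.size)
    k = np.clip(k + dt * eta * np.cos(dth), -1.0, 1.0)
    return theta % (2.0 * np.pi), k

N = 50
theta = rng.uniform(0.0, 2.0 * np.pi, N)
k = rng.uniform(-1.0, 1.0, (N, N))
for _ in range(10_000):                       # noise strength D = 0.0001 as quoted above
    theta, k = em_step(theta, k, dt=0.01, eps=1.0, eta=0.1, omega=1.0, D=1e-4)
print("r1 =", round(float(np.abs(np.mean(np.exp(1j * theta)))), 3))
```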
on comparing the present dynamics with figs .[ fig1 ] and [ fig2 ] corresponding to the absence of noise , we can easily see that the qualitative nature of the dynamics remains unchanged .= 0 , ) of the oscillators both in the case of ( a ) absence ( ) and ( b ) presence ( ) of coupling plasticity for a slightly perturbed set of initial phases distributed uniformly between to exhibiting similar behaviour of desynchroization as in panels ( a ) , ( b ) , ( c ) in figs .[ fig1 ] and [ fig2 ] ., width=302,height=264 ] further in fig .[ figk ] we have plotted the time evolution of coupling weights in the absence and presence of dynamic coupling . fig .[ figk](a ) clearly shows that in the absence of dynamic coupling ( ) , while figs .[ figk](b ) and [ figk](c ) show the random flunctuation of s as a function of time for a random initial distribution of phases .note that for the case of uniform distribution of initial phases , even in the presence of dynamic coupling ( ) s continue to remain constant as shown in fig .[ figk](a ) . after leaving out sufficient transients ( t= ) . ( a ) snapshot of at a particular time in the absence of dynamic coupling ( ) .( b ) snapshot of at a particular time in the presence of dynamic coupling ( ) and ( c ) time evolution of a sample set of three coupling weights for a short range of time in the presence of dynamic coupling ( ) for a random distribution of initial phases.,width=321,height=226 ] upon the introduction of coupling plasticity , multi - stable states are found to occur in system ( [ cho03 ] ) as we have discussed in the previous section .now let us see how the time scale of the coupling , denoted by , affects the multi - stable states .we see that the introduction of causes the occurrence of a two - clustered state , the clusters being in antiphase relationship with each other .it is also obvious that as increases increases while stays close to 1 .this essentially means that as increases , the size of one of the clusters grows and as a result , the two cluster state tends to become a single clustered state .so the size difference of the two clustered state also grows . in order to explain this behaviour , we move the system to a rotating frame by introducing the transformation and then dropping the primes for convenience . with this transformation , eq .( [ cho03 ] ) becomes where and are as defined in eq .( [ cho01a ] ) .now if we choose a random distribution for the initial phases of the oscillators , the presence of stabilizes a two - cluster state in the system ; one cluster is locked in and the other is in .this can be clearly seen * from fig .[ fig4 ] where we have plotted the asymptotic * time evolution of for two different values of ; and . in panel( a ) we have plotted the time evolution of for the two values of ; ( dashed line ) and ( solid line ) .the inset shows the time evolution of for the same two values of and it is found to be equal to 1 in both the cases .however we see that takes different values for different indicating that the number difference between the two clusters increases with increasing see panels ( b),(c ) and ( d ) . in order to visualize this clearly, we have plotted the time evolution of the fraction of oscillators in the two clusters , and in panel ( b ) for the two values of ; ( dashed line ) and ( solid line ) . 
here is the total number of oscillators .since there are only two clusters in the system of sizes and , we choose a random oscillator ( say , the first oscillator whose phase is denoted by ) , and compare the phases of all the other oscillators in the system with that of oscillator 1 .now , if the phase of any oscillator is equal to , then that oscillator is in the same cluster as that of the 1st oscillator and let us assume that this cluster has oscillators . on the other hand , if the phase of the oscillator is not equal to , then that oscillator is in the other cluster whose size is .this is how we count the number of oscillators in the two clusters numerically and have plotted the corresponding fractions and in panel ( b ) . in panels ( c ) and ( d )we have shown the number of oscillators in each of the two clusters as a function of time in the asymptotic regime for and , respectively .the number difference between the two clusters increases for increasing and the value of matches the value of for the corresponding , in panel ( a ) .thus the coupling time scale affects the size of the clusters in the two - cluster state .we now introduce phase asymmetry parameters and in ( [ cho03 ] ) . the transmission interlude or delay of the couplingcan be represented by the phase difference . on the other hand ,the characteristic of plasticity can be continuously changed or controlled by varying a second asymmetry parameter .thus this parameter , which is the plasticity delay , enables one to investigate the coevolving dynamics . with the introduction of the asymmetry parameters ,the coupling function and the plasticity function in ( [ cho01 ] ) and ( [ cho02 ] ) become and . with these asymmetry parameters , the evolution equations of system ( [ cho01 ] ) and ( [ cho02 ] ) become and we choose and . in order to study the influence of the plasticity asymmetry parameter on the occurrence of multi - stable states in the absence of the coupling asymmetry parameter ,we have plotted the time averaged value of the order parameters , k=1,2 , given as against for different values of in fig .[ fig5 ] . in panel( a ) we have plotted and we see that as increases , the two - clustered state exists until approaches a critical value of . at the same timewe see that decreases with increasing . in this window, we see that the corresponding value of ( plotted in panel ( b ) ) stays at 1 implying that the size difference between the two clusters decreases .after the transition point the two - cluster state loses its stability and only the desynchronized state is stable .this can be confirmed by the decreasing after the transition point .when takes a value in the window only the desynchronized state is stable ; this is evident from the values of which is zero and which takes a value less than 1 . when , the two clustered state becomes stable again leading to the occurrence of multi - stability . in this window ( )as increases increases and takes a value close to 1 indicating that the size difference between the two clusters increases .thus we find that in the absence of , increasing for a given causes the system to go from a two - cluster state ( region i ) to desynchronization ( region ii ) and then again to a two - cluster state ( region iii ) .it is also evident that the occurrence of multi - stable states due to the asymmetry parameter is unaffected by . 
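the counting procedure described above translates directly into code ; the only ingredient added in this sketch is a small numerical tolerance , since phases belonging to the same cluster agree only up to the integration accuracy .

```python
import numpy as np

def cluster_fractions(theta, tol=0.1):
    """Compare every phase with that of oscillator 1 (index 0): those within `tol`
    of it (modulo 2*pi) form one cluster of size N1, the rest form the other
    cluster of size N2.  Returns the fractions N1/N and N2/N."""
    d = np.angle(np.exp(1j * (theta - theta[0])))   # wrapped difference in (-pi, pi]
    n1 = int(np.count_nonzero(np.abs(d) < tol))
    return n1 / theta.size, (theta.size - n1) / theta.size

# example: a 60/40 two-cluster state in anti-phase
theta = np.concatenate([np.zeros(60), np.full(40, np.pi)]) + 0.25
print(cluster_fractions(theta))   # -> (0.6, 0.4)
```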
in fig .[ figphase ] we have plotted the time evolution of phases for two different values of , namely ( in panel ( a ) ) and ( in panel ( b ) ) for a fixed value of . in both the cases the time averaged order parameters takes the value zero and takes some finite value other than zero , =0.81 ( ) and ) , respectively .these figures clearly show the phase desynchronized nature of the oscillators , irrespective of the non zero values of . similarly , the corresponding phase portraits in panels ( a ) and ( b ) respectively , for both the values of clearly show the uniformly distributed nature of phases between 0 and 2 .we have also confirmed similar features for and for the same value of .one may also observe that the autocorrelation function , , asymptotically takes a unit value , confirming a fixed phase relationship between the phases , and excluding any chaotic behaviour ( see below ) . due to this fact ,we call the corresponding state ( region ii ) as a desynchronized state even though takes some finite value other than zero . in order to explain the above mentioned states that occur due to the effect of more clearly ,let us refer to fig .[ fig6 ] . in panels ( a)-(c )we have plotted the time evolution of ( dashed black line ) , ( solid blue line ) , ( dotted red line ) , ( solid grey line ) and the autocorrelation ( dot - dashed green line ) .the asymptotic time evolution of the oscillator phases are plotted in panels ( d)-(f ) .the panels ( a ) and ( d ) correspond to , panels ( b ) and ( e ) correspond to and panels ( c ) , and ( f ) correspond to .when the two cluster state is stable ; this state corresponds to region i in fig .[ fig5 ] and the two clusters are of different sizes ( panel ( d ) ) . in this state and .when , the two cluster state loses its stability leaving behind only the desynchronized state ; this state corresponds to region ii in fig .since the oscillators are desynchronized and since there are no clusters in the system , we can not define and in this state as shown in panel ( e ) . in this state and . when , near the ii - iii transition point , the two cluster state becomes stable again and this state corresponds to region iii in fig .note that in all these cases the autocorrelation function takes a unit value asymptotically , confirming the fixed phase relationship nature of the oscillators .now , in order to study the effect of the coupling asymmetry parameter in the presence of we have plotted the strengths of ( panel ( a ) ) and ( panel ( b ) ) , by varying both and in fig .[ fig7 ] . while varying from 0 to , we see that the regions i and iii ( corresponding to fig .[ fig5 ] ) shrink while region ii expands .further , as increases we find that the size of the clusters in regions i and iii keeps varying which is evident from the varying strengths ( colors ) of in panel ( a ) while the strength of remains the same in panel ( b ) .thus we find that the coupling asymmetry affects the influence of on the two cluster and desynchronization states ( regions i , ii and iii ) .in this section we wish to investigate analytically the linear stability of the desynchronized state and the two clustered synchronized state .numerically , we find that the oscillator phases maintain a fixed relationship among themselves and the coupling weights remain almost stable when ( see insets in fig .2 ( b ) , ( e ) ) .hence in the limit , the coupling weights can be regarded as invariant and remain at a constant value . 
under this condition ,the dynamics of system ( [ asy01a])-([asy01b ] ) with the asymmetry parameters and is given by where .since the coupling weights are fixed , they satisfy the following relation now we consider two different state configurations separately , ( i ) uniform distribution of phases and ( ii ) a two cluster state and analyze the linear stability of the underlying states .now let us assume that the phases are uniformly distributed in ] , the integral in eq .( [ ana05 ] ) vanishes and hence ( when the plasticity is absent ) . in this case , let us rewrite eq .( [ ana01 ] ) as where and is the collective frequency of oscillation of the population after the system approaches a stationary state . using the order parameter relation ( [ cho01a ] ) into eq .( [ ana01a ] ) we get where and is as defined in eq .( [ cho01a ] ) .now , we consider the thermodynamic limit . in this casethe order parameter can be written as where is the density distribution of the individual phases with coupling . in the stationary state , and are time independent constants and hence eq . ( [ ana01c ] ) becomes where we have taken without loss of generality .when , eq .( [ ana01a ] ) has a stable fixed point at with .now one can consider the relation into eq .( [ ana01d ] ) to arrive at the self - consistency equation and from the above relations ( [ ana01f ] ) and ( [ ana01 g ] ) , the synchronization threshold is defined by the condition that the average coupling strength as shown in ref. and the synchronized state exists when . here in our case in the absence of coupling plasticity , andthese are distributed uniformly between -1 and + 1 .consequently and that is the average coupling strength vanishes and therefore the synchronization state does not exist as it violates the threshold condition ( [ kc ] ) and the phases remain desynchronized showing the stable nature of the desynchronized state . on the other hand , in the presence of coupling plasticity, in order to study the stability of the two cluster state , let us assume that the the size of the two clusters are and whose phases are and , respectively . in the two cluster state , eq .( [ ana01 ] ) is written as and the coupling weights ( eq .( [ ana02 ] ) ) become the phase difference between and can be written as now , the stability condtion for the two cluster state is determined from ( [ ana08 ] ) as on using ( [ ana07 ] ) in ( [ ana08 ] ) , we obtain numerically we find that , in the two cluster state , the phase difference between the clusters is ( for instance , see figs . 4 and 6 ) .hence , for the case the stability condition becomes this condition represents the stability of the two cluster state .when ] and $ ] .the two cluster state loses stability and only the desynchronized state becomes stable in the window .for increasing the plasticity asymmetry parameter the two cluster - desynchronization - two cluster transition occurs ( as shown in fig .[ fig5 ] ) .we see that our numerical observations ( fig .[ fig7 ] ) are in good agreement with the analytical results in the region of phase asymmetry , approximately , beyond which there arises disagreement where a nonlinear stability theory will be required . 
in order to explain the above disagreement, we have performed numerical simulations of eq .( [ ana01 ] ) obeying eqs .( [ ana06])-([ana07 ] ) , that is initial phases are chosen to be in a two cluster state .[ fig11](b ) shows the existence of a two cluster state of the system in the asymmetry region = 1.5 and = 1.0 , which does not exist in fig .[ fig7 ] as well as in panel ( a ) of fig .[ fig11 ] , where we have chosen random initial phases .thus we find that the two cluster state arises only for the special choice of initial conditions near to the above state , while for an arbitrary initial distribution of phases one obtains a partially synchronized state , indicating the multistable nature of the underlying system .in the previous sections , we have studied the effect of plastic coupling between the group of identical oscillators( = constant ) . in the present sectionwe briefly consider the case of nonidentical oscillators whose frequencies ( ) are distributed in lorentzian form given by ^{-1},\end{aligned}\ ] ] where is the half width at half maximum and is the central frequency .hence the equation of motion for the nonidentical oscillators can be written as fig .[ fignon ] shows the presence of a multistable state for the case of nonidentical oscillators with a very low half width ( ) for oscillators and . in the absence of adaptive couplingthe system is asymptotically stable in the desynchronized state ( ) which is evident in panels ( a ) and ( c ) .however , in the presence of adaptive coupling ( ) , for the uniform distribution of initial phases the system remains in desynchronized state ( ) see panel ( b ) , whereas for a random distribution the system sets into a different multicluster state ( and ) . in fig .[ fig8 ] , for a higher value of half width ( ) we find that the multicluster state loses its stability .the full details of the classification of this multicluster state for the case of nonidentical oscillators will be presented elsewhere .in this paper , we have demonstrated the occurrence of multi - stable states in a system of phase oscillators that are dynamically coupled .we find that the presence of coupling plasticity induces the occurrence of such multi - stable states ; that is , the existence of a desynchronized state and a two - clustered state where the clusters are in anti - phase relationship with each other .the multi - stable states occur for randomly distributed initial phases while for uniformly distributed initial phases only the desynchronization state exists .we also find that the phase relationship between the oscillators is asymptotically stable irrespective of whether there is synchronization or desynchronization in the system .we find that the effect of coupling time scale ( ) is not only to introduce a two clustered state but also to change the number of oscillators in the two clusters .more precisely , we see that the difference between the number of oscillators in the two clusters increases with increasing . in short ,the coupling time scale is found to affect the size of the clusters in the two cluster state .we have also investigated the effect of coupling asymmetry and plasticity asymmetry on the multi - stable states .we find that , in the absence of coupling asymmetry , increasing plasticity asymmetry takes the system to transit from a multi - stable state through a desynchronized state to a multi - stable transition . 
if the coupling asymmetry is present , we find that the regions corresponding to two cluster states shrink and the region corresponding to desynchronization state expands . in our model , for a uniform distribution of initial phases ,the desynchronized state is always stable . for random initial conditions ,the system goes from a two cluster synchronization state ( the desynchronization state exists also in this state ) to a desynchronized state and then again to a two cluster state , upon increasing the plasticity asymmetry parameter .thus the desynchronization state is always stable for the uniform distribution initial condition .thus the two cluster - desynchronization - two cluster transition can also be termed as multi - stable desynchronization multi stable state .we have also analytically investigated the linear stability of the desynchronized and the two cluster states in the limit and have found the occurrence of multi - stable desynchronization multi - stable state transition .our analytical results are in good agreement with our numerical observations .we strongly believe that the model discussed here and the results therein will help fill the gap in the understanding of the dynamical aspects of adaptively coupled systems which are found to be most common in real world complex networks .this understanding will hopefully be helpful in elucidating the mechanism underlying the self - organizational nature of real world complex systems .the work is supported by the department of science and technology ( dst)ramanna program ( ml ) , dst irhpa research project ( ml , vkc and bs ) . ml is also supported by a dae raja ramanna fellowship .his work has also been supported by the alexander von humboldt foundation , germany under their renewed visit program to visit potsdam where the work was completed .jhs is supported by a dst fast track young scientist research project .jk acknowledges the support from linc(eu itn ) .w. singer , neuronal synchrony : a versatile code for the definition of relations ?, _ neuron _ * 24 * ( 1999 ) 49 ; p. fries , a mechanism for cognitive dynamics : neuronal communication through neuronal coherence , trends . cogn* 9 * ( 2005 ) 474 ; y. yamaguchi , n. sato , h. wagatsuma , z. wu , c. molter and y. aota , a unified view of theta - phase coding in the entorhinal - hippocampal system , curr .. neurobiol .* 17 * ( 2007 ) 197 .l. timmermann , j. gross , m. dirks , j. volkmann , h. freund and a. schnitzler , the cerebral oscillatory network of parkinsonian resting tremor , brain * 126 * ( 2003 ) 199 : j. a. goldberg , t. boraud , s. maraton , s. n. haber , e. vaadia , and h. bergman , enhanced synchrony among primary motor cortex neurons in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine primate model of parkinson s disease , j. neurosci .* 22 * ( 2002 ) 4639 .b. percha , r. dzakpasu , and m. zochowski , transition from local to global phase synchrony in small world neural network and its possible implications for epilepsy , phys .e. * 72 * ( 2005 ) 031909 ; m. zucconi , m. manconi , d. bizzozero , f. rundo , c. j. stam , l. ferini - strambi , r. ferri , eeg synchronisation during sleep - related epileptic seizures as a new tool to discriminate confusional arousals from paroxysmal arousals : preliminary findings , neurol .* 26 * ( 2005 ) 199 .g. pfurtscheller and c. neuper , event - related synchronization of mu rhythm in the eeg over the cortical hand area in man , neurosci . lett .* 174 * ( 1994 ) 93 ; c. m. krause , h. lang , m. laine and b. 
prn , event - related .eeg desynchronization and synchronization during an auditory memory task , electroencephalogr .. neurophysiol .* 98 * ( 1996 ) 319 ; l. leocani , c. toro , p. manganotti , p. zhuang and m. hallet , event - related coherence and event - related desynchronization / synchronization in the 10 hz and 20 hz eeg during self - paced movements , electroencephalogr .. neurophysiol . * 104 * ( 1997 ) 199 - 206 ; g. pfurtschelle and f. h. lopes da silva , event - related eeg / meg synchronization and desynchronization : basic principles , clin . neurophysiol .* 110 * ( 1999 ) 1842 .m. gilson , burkitt an , grayden db , thomas da and van hemmen jl , emergence of network structure due to spike - timing - dependent plasticity in recurrent neuronal networks iii : partially connected neurons driven by spontaneous activity , biol . cybern . * 101 * ( 2009 ) 411 .h. hong and s. h. strogatz , kuramoto model of coupled oscillators with positive and negative coupling parameters : an example of conformist and contrarian oscillators , phys .* 106 * ( 2011 ) 054102 .h. hong and s. h. strogatz , conformists and contrarians in a kuramoto model with identical natural frequencies phys .e * 84 * ( 2011 ) 046202 .daniel m. abrams , rennie mirollo , steven h. strogatz and daniel a. wiley , solvable model for chimera states of coupled oscillators phys .* 101 * ( 2008 ) 084103 .
|
adaptive coupling , where the coupling is dynamical and depends on the behaviour of the oscillators in a complex system , is one of the most crucial factors controlling the dynamics and streamlining various processes in complex networks . in this paper , we demonstrate the occurrence of multi - stable states in a system of identical phase oscillators that are dynamically coupled . we find that the multi - stable state comprises a two - cluster synchronized state , in which the clusters are in anti - phase with each other , and a desynchronized state . we also find that the phase relationship between the oscillators is asymptotically stable irrespective of whether there is synchronization or desynchronization in the system . the time scale of the coupling affects the size of the clusters in the two - cluster state . we also investigate the effect of both the coupling asymmetry and the plasticity asymmetry on the multi - stable states . in the absence of coupling asymmetry , increasing the plasticity asymmetry causes the system to go from a two - cluster state to a desynchronized state and then back to a two - cluster state . the coupling asymmetry , if present , also affects this transition . we further establish this multi - stable desynchronized multi - stable transition analytically . a brief discussion on the phase evolution of nonidentical oscillators is also provided . our analytical results are in good agreement with our numerical observations .
|
with the expansion of the internet services , people are becoming increasingly dependent on the internet with an information overload .consequently , how to efficiently help people find information that they truly need is a challenging task nowadays .being an effective tool to address this problem , the recommender system has caught increasing attention and become an essential issue in internet applications such as e - commerce system and digital library system .motivated by the practical significance to the e - commerce and society , the design of an efficient recommendation algorithm becomes a joint focus from engineering science to mathematical and physical community .various kinds of algorithms have been proposed , such as correlation - based methods , content - based methods , spectral analysis , iteratively self - consistent refinement , principle component analysis , network - based methods , and so on . for a review of current progress ,see ref . and the references therein .one of the most successful recommendation algorithms , called _ collaborative filtering _ ( cf ) , has been developed and extensively investigated over the past decade .when predicting the potential interests of a given user , such approach firstly identifies a set of similar users from the past records and then makes a prediction based on the weighted combination of those similar users opinions . despite its wide applications , collaborative filtering suffers from several major limitations including system scalability and accuracy , some physical dynamics , including mass diffusion ( md ) and heat conduction ( hc ) , have found their applications in personalized recommendations . based on md and hc ,several effective network - based recommendation algorithms have been proposed .these algorithms have been demonstrated to be of both high accuracy and low computational complexity .however , the algorithmic accuracy and computational complexity may be very sensitive to the statistics of data sets .for example , the algorithm presented in ref . runs much faster than the standard cf if the number of users is much larger than that of objects , while when the number of objects is huge , the advantage of this algorithm vanishes because its complexity is mainly determined by the number of objects ( see ref . for details ) .since the cf algorithm has been extensively applied in the real e - commerce systems , it s meaningful to find some ways to increase the algorithmic accuracy of cf .we therefore present a modified collaborative filtering ( mcf ) method , in which the user correlation is defined based on the diffusion process .recently , liu _ studied the user and object degree correlation effect to cf , they found that the algorithm accuracy could be remarkably improved by adjusting the user and object degree correlation . in this paper , we argue that the high - order correlations should be taken into account to depress the influence of mainstream preferences and the accuracy could be improved in this way .the correlation between two users is , in principle , an integration of many underlying similar tastes .for two arbitrary users , the very specific yet common tastes shall contribute more to the similarity measure than those mainstream tastes .figure 1 shows an illustration of how to find the specific tastes by eliminating the mainstream preference . 
to the users and , the commonly selected objects1 and 2 could reflect their tastes , where 1 denotes the mainstream preference shared by all , and , and 2 is the specific taste of and .both 1 and 2 contribute to the correlation between and .since 1 is the mainstream preference , it also contributes to the correlations between and , as well as and .tracking the path , the mainstream preference 1 could be identified by considering the second - order correlation between and .statistically speaking , two users sharing many mainstream preferences should have high second - order correlation , therefore we can depress the influence of mainstream preferences by taking into account the second - order correlation .the numerical results show that the algorithm involving high - order correlations is much more accurate and provides more diverse recommendations .[ 0.5 ] , and are correlated because they have collected some common objects , where object has been collected by all of the three users , while object 2 is only collected by user and ., title="fig : " ]denote the object set as and the user set as = , a recommender system can be fully described by an adjacent matrix , where if is collected by , and otherwise . for a given user , a recommendation algorithm generates an ordered list of all the objects he / she has not collected before . to test the recommendation algorithmic accuracy, we divide the data set into two parts : one is the training set used as known information for prediction , and the other one is the probe set , whose information is not allowed to be used .many metrics have been proposed to judge the algorithmic accuracy , including _ precision _ , _ recall _ , _ f - measure _ , _ average ranking score _ , and so on .since the average ranking score does not depend on the length of recommendation list , we adopt it in this paper .indeed , a recommendation algorithm should provide each user with an ordered list of all his / her uncollected objects .for an arbitrary user , if the entry - is in the probe set ( according to the training set , is an uncollected object for ) , we measure the position of in the ordered list .for example , if there are uncollected objects for , and is the 10th from the top , we say the position of is , denoted by .since the probe entries are actually collected by users , a good algorithm is expected to give high recommendations , leading to small .therefore , the mean value of the position , ( called _ average ranking score _ ) , averaged over all the entries in the probe , can be used to evaluate the algorithmic accuracy : the smaller the ranking score , the higher the algorithmic accuracy , and vice verse . for a null model with randomly generated recommendations , .besides accuracy , the average degree of all recommended objects , , and the mean value of hamming distance , , are taken into account to measure the algorithmic popularity and diversity . 
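the three measures just introduced can be computed directly from the training matrix , the probe matrix and the predicted scores . the sketch below is one straightforward implementation ; the variable names and the toy data at the end are illustrative and are not taken from the movielens or netflix sets used later .

```python
import numpy as np

def ranking_score(scores, train, probe):
    """Average ranking score <r>: for each probe entry (u, o), the rank of o among
    u's uncollected objects (sorted by descending predicted score) divided by the
    number of uncollected objects, averaged over all probe entries."""
    r = []
    for u, o in zip(*np.nonzero(probe)):
        uncollected = np.nonzero(train[u] == 0)[0]
        order = uncollected[np.argsort(-scores[u, uncollected])]
        r.append((np.where(order == o)[0][0] + 1) / uncollected.size)
    return float(np.mean(r))

def diversity_and_popularity(scores, train, L):
    """Mean Hamming distance S = 1 - Q/L over all user pairs, and the mean degree
    <k> of the objects appearing in the length-L recommendation lists."""
    masked = np.where(train == 0, scores, -np.inf)        # never recommend collected objects
    rec = np.argsort(-masked, axis=1)[:, :L]
    deg = train.sum(axis=0)
    n = train.shape[0]
    overlap = [len(set(rec[i]) & set(rec[j])) for i in range(n) for j in range(i + 1, n)]
    return 1.0 - np.mean(overlap) / L, float(np.mean(deg[rec]))

# toy usage on random data
rng = np.random.default_rng(4)
train = (rng.random((20, 50)) < 0.2).astype(int)
probe = ((rng.random((20, 50)) < 0.05) & (train == 0)).astype(int)
scores = rng.random((20, 50))
print(ranking_score(scores, train, probe))
print(diversity_and_popularity(scores, train, L=10))
```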
the smaller average degree ,corresponding to the less popular objects , are preferred since those lower - degree objects are hard to be found by users themselves .in addition , the personal recommendation algorithm should present different recommendations to different users according to their tastes and habits .the diversity can be quantified by the average hamming distance , , where , is the length of recommendation list , and is the overlapped number of objects in s and s recommendation lists .the higher indicates a more diverse and thus more personalized recommendations .in the standard cf , the correlation between and can be evaluated directly by the well - known cosine similarity index where is the degree of user .inspired by the diffusion process presented by zhou _ , the user correlation network can be obtained by projecting the user - object bipartite network .how to determine the edge weight is the key issue in this process .we assume a certain amount of resource ( e.g. , recommendation power ) is associated with each user , and the weight represents the proportion of the resource would like to distribute to .this process could be implemented by applying the network - based resource - allocation process on a user - object bipartite network where each user distributes his / her initial resource equally to all the objects he / she has collected , and then each object sends back what it has received to all the users who collected it , the weight ( the fraction of initial resource eventually gives to ) can be expressed as : where denotes the degree of object . for the user - object pair ,if has not yet collected ( i.e. , ) , the predicted score , , is given as based on the definitions of and , given a target user , the mcf algorithm is given as following ( i ) : : calculating the user correlation matrix based on the diffusion process , as shown in eq .( 2 ) ; ( ii ) : : for each user , based on eq . ( 3 ) , calculating the predicted scores for his / her uncollected objects ; ( iii ) : : sorting the uncollected objects in descending order of the predicted scores , and those objects in the top will be recommended .the standard cf and the mcf have similar process , and their only difference is that they adopt different measures of user - user correlation ( i.e. , for the standard cf and for mcf ) . [ 0.4 ] and the improvement ( ip ) vs. the sparsity of the training sets .all the data points are averaged over ten independent runs with different data - set divisions .the results corresponding to netflix data are marked.,title="fig : " ]we use two benchmark data sets , one is _ _ movielens _ _ , which consists of 1682 movies ( objects ) and 943 users .the other one is __ netflix _ _ , which consists of 3000 movies and 3000 users ( we use a random sample of the whole netflix dataset ) . the users vote movies by discrete ratings from one to five .here we applied a coarse - graining method : a movie is set to be collected by a user only if the giving rating is larger than 2 . in this way , the _ movielens _ data has 85250 edges , and the _ netflix _ data has 567456 edges .the data sets are randomly divided into two parts : the training set contains percent of the data , and the remaining part constitutes the probe . 
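a compact implementation of the two ingredients above is sketched below : the two - step mass - diffusion weight and the resulting predicted scores . the diffusion weight follows the verbal description directly ; for the prediction step the exact normalization was lost in extraction , so a plain similarity - weighted vote over the other users is used here as a stand - in , and the toy data are illustrative .

```python
import numpy as np

def diffusion_similarity(A):
    """Two-step mass diffusion on the user-object bipartite network: user j spreads
    one unit of resource equally over its collected objects, each object returns what
    it received equally to its collectors; w[i, j] is the fraction of j's resource
    that ends up on user i, i.e. w_ij = (1/k_j) * sum_a a_ia * a_ja / k_a."""
    ku = A.sum(axis=1)                                   # user degrees
    ko = A.sum(axis=0)                                   # object degrees
    return (A / np.where(ko > 0, ko, 1)) @ A.T / np.where(ku > 0, ku, 1)[None, :]

def mcf_scores(A, W):
    """Predicted score of user i for object a: a similarity-weighted vote of the other
    users who collected a (normalization assumed; see the note above)."""
    W = W.copy()
    np.fill_diagonal(W, 0.0)                             # a user does not vote for itself
    return W @ A

# toy run: 6 users x 8 objects; recommend the top-3 uncollected objects for user 0
rng = np.random.default_rng(5)
A = (rng.random((6, 8)) < 0.4).astype(float)
scores = mcf_scores(A, diffusion_similarity(A))
uncollected = np.nonzero(A[0] == 0)[0]
print("recommend objects:", uncollected[np.argsort(-scores[0, uncollected])][:3])
```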
implementing the standard cf and the mcf when , the average ranking scores on _ movielens _ and _ netflix _ data are improved from 0.1168 to 0.1038 and from 0.2323 to 0.2151 , respectively . clearly , using the simple diffusion - based similarity , in terms of algorithmic accuracy the mcf outperforms the standard cf . the corresponding average object degree and diversity are also improved ( see fig . [ fig22 ] and fig . [ fig23 ] below ) . ( figure : vs. when . squares , circles and triangles represent lengths and , respectively . the black point corresponds to the average degree obtained by the standard cf with . all the data points are averaged over ten independent runs with different data - set divisions . ) ( figure : vs. when . squares , circles and triangles represent the lengths and , respectively . the black point corresponds to the diversity obtained by the standard cf with . all the data points are averaged over ten independent runs with different data - set divisions . ) to investigate the effect of the second - order user correlation on algorithmic performance , we use a linear form in the mcf , where the user similarity matrix can be written as where is the newly defined correlation matrix , is the first - order correlation defined in eq . ( 2 ) , and is a tunable parameter . as discussed before , we expect the algorithmic accuracy can be improved at some negative . when , the algorithmic accuracy curves of _ movielens _ and _ netflix _ have clear minima around and , which strongly support the above discussion . compared with the routine case ( ) , the average ranking scores can be further reduced to 0.0826 ( improved by 20.45% ) and 0.1436 ( improved by 33.25% ) at the optimal values . it is indeed a great improvement for recommendation algorithms . since the data sparsity can be tuned by changing , we investigate the effect of the sparsity on the two data sets respectively , and find that although we test the algorithm on two different data sets , the optimal values are strongly correlated with the sparsity in a uniform way for both _ movielens _ and _ netflix _ . figure [ fig24 ] shows that when the sparsity increases , will decrease , and the improvement of the average ranking scores will increase . these results can be treated as a good guideline for selecting the optimal for different data sets . figure [ fig22 ] reports the average degree of all recommended objects as a function of . one can see from fig . [ fig22 ] that when the average object degree is positively correlated with ; thus depressing the influence of mainstream interests gives more opportunity to the less popular objects , which could bring more information to the users than the popular ones . when the list length , , is equal to 20 , at the optimal point the average degree is reduced by 29.3% compared with the standard cf . when , fig . [ fig23 ] exhibits a negative correlation between and , indicating that considering the second - order correlations makes the recommendation lists more diverse .
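reading the linear form above as a mixture of the first - order correlation matrix with its square , the modification amounts to a single extra matrix product ; in the sketch below the tunable parameter is named lam ( our stand - in for the stripped symbol ) , and a negative value depresses the second - order , mainstream - driven part of the similarity :

```python
import numpy as np

def second_order_correlation(W, lam=-0.8):
    """Mix the first-order diffusion correlation W with its second-order
    counterpart W @ W.  A negative lam penalizes user pairs whose similarity
    is mostly explained by shared mainstream preferences.  The value -0.8 is
    only a placeholder; the optimal value depends on the data sparsity."""
    return W + lam * (W @ W)
```

the resulting matrix simply replaces the first - order correlation in the prediction step of the mcf .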
when , the diversity is increased from 0.592 ( corresponding to the standard cf ) to 0.880 ( corresponding to the case in the improved algorithm ) . figure [ fig22 ] and figure [ fig23 ] show how the parameter affects the average object degree and diversity , respectively . clearly , a smaller leads to lower popularity and higher diversity , and thus the present algorithm can find its advantage in recommending novel objects with diverse topics to users , compared with the standard cf . generally speaking , the popular objects must have some attributes fitting the tastes of the masses of the people . the standard cf may repeatedly count those attributes and assign more power to the popular objects , which increases the average object degree and reduces the diversity . the present algorithm with negative to some extent eliminates the redundant correlations and gives higher chances to less popular objects and to objects with diverse topics different from the mainstream .

algorithms    ranking score    diversity    average degree
grm           0.1390           0.398        259
cf            0.1168           0.549        246
nbi           0.1060           0.617        233
heter - nbi   0.1010           0.682        220
cb - cf       0.0998           0.692        218
imcf          0.0877           0.826        175

in this paper , a modified collaborative filtering algorithm is presented to improve the algorithmic performance . the numerical results indicate that the usage of diffusion - based correlation could enhance the algorithmic accuracy . furthermore , by considering the second - order correlations , , we presented an effective algorithm that has remarkably higher accuracy . indeed , when the simulation results show that the algorithmic accuracy can be further improved by 20.45% and 33.25% on _ movielens _ and _ netflix _ data . interestingly , we found that even for different data sets , the optimal value of exhibits a uniform tendency versus sparsity . therefore , if we know the sparsity of the training set , the corresponding optimal could be approximately determined . in addition , when the sparsity gets less than 1% , the improved algorithm would not be effective any more , while as the sparsity increases , the improvement of the presented algorithm is enlarged . ignoring the degree - degree correlation in user - object entries , the algorithmic complexity of the mcf is , where and denote the average degrees of users and objects . the first term accounts for the calculation of the user correlation , and the second term accounts for that of the predictions . it approximates to for . clearly , the computational complexity of the mcf is much less than that of the standard cf , especially for systems consisting of a huge number of objects . in the improved algorithm , in order to calculate the second - order correlations , the diffusion process must flow from the users to the objects twice ; therefore , the algorithmic complexity of the improved algorithm is . since the order of magnitude of the object number is always much larger than the ones of and , the improved algorithm is also about as fast as the standard cf .
besides the algorithmic accuracy , two significant criteria of algorithmic performance , the average degree of recommended objects and the diversity , are taken into account . a good recommendation algorithm should help the users uncover the hidden ( even dark ) information , corresponding to those objects with very low degrees . therefore , the average degree is a meaningful measure for a recommendation algorithm . in addition , since a personalized recommendation system should provide different recommendation lists according to the user s tastes and habits , diversity plays a crucial role in quantifying the personalization . the numerical results show that the present algorithm outperforms the standard cf in all three criteria . how to automatically find out relevant information for diverse users is a long - standing challenge in modern information science . the presented algorithm could also be used to find relevant reviewers for scientific papers or funding applications , and for link prediction in social and biological networks . we believe the current work can enlighten readers in this promising direction . we acknowledge the grouplens research group for providing us the data . this work is partially supported by the national basic research program of china ( no . 2006cb705500 ) , the national natural science foundation of china ( nos . 10905052 , 70901010 , 60744003 ) , the swiss national science foundation ( project 205120 - 113842 ) , and the shanghai leading discipline project ( no . s30501 ) . t.z . acknowledges the national natural science foundation of china under grant nos . 10635040 and 60973069 .
|
in this paper , we introduce a modified collaborative filtering ( mcf ) algorithm , which has remarkably higher accuracy than the standard collaborative filtering . in the mcf , instead of the cosine similarity index , the user - user correlations are obtained by a diffusion process . furthermore , by considering the second - order correlations , we design an effective algorithm that depresses the influence of mainstream preferences . simulation results show that the algorithmic accuracy , measured by the average ranking score , is further improved by 20.45% and 33.25% in the optimal cases of the movielens and netflix data . more importantly , the optimal value depends approximately monotonically on the sparsity of the training set . given a real system , we could estimate the optimal parameter according to the data sparsity , which makes this algorithm easy to apply . in addition , two significant criteria of algorithmic performance , diversity and popularity , are also taken into account . numerical results show that as the sparsity increases , the algorithm considering the second - order correlation can outperform the mcf simultaneously in all three criteria . recommender systems , bipartite networks , collaborative filtering .
|
in spite of substantial effort to improve the efficiency of markov chain monte carlo ( mcmc ) methods , spatial correlations remain a major impediment .these correlations can severely restrict the possible configurations of a system by imposing complicated relationships between variables .it is well known that judicious elimination of variables by renormalization can reduce long range correlations ( see ) .the remaining variables are distributed according to the marginal distribution , where is the full distribution . given the values of the variables and the marginal distribution the variables are distributed according to the conditional distribution for systems exhibiting critical phenomena , the path through the space of distributions taken by marginal distributions under repeated renormalization can yield essential information about critical indices and the location of critical points ( see ) . more generally , because these marginal distributions exhibit shorter correlation lengths and weaker local correlations , they are useful in the acceleration of markov chain monte carlo methods . as explained in the next section , parallel marginalization takes advantage of the shorter correlation lengths present in marginal distributions of the target density .the use of monte carlo updates on lower dimensional spaces is not a new concept .in fact this is a necessary procedure in high dimensions .one simply constructs a chain with steps that preserve the conditional probability density of the full measure .this is usually accomplished by perturbing a few components of the chain while holding all other components of the chain constant . in other wordsthe chain takes steps of the form where and the move preserves there have been many important attempts to use proposals in more general sets of projected coordinates .the multi - grid monte carlo method presented in is one such method .these techniques do not incorporate marginal densities . in ,brandt and ron propose a multi - grid method which approximates successive marginal distributions of the ising model and then uses these approximations to generate large scale movements of the markov chain sampling the full joint distribution of all variables .their method , while demonstrating the efficacy of incorporating information from successive marginal distributions , suffers from two limitations .first , the method used to approximate the marginal distributions is specific to a small class of problems .for example , it can not be easily generalized to systems in continuous spaces . second , information from the approximate marginal distributions is adopted by the markov chain in a way which does not preserve the target distribution of all variables .the design of a generally applicable method which approximates the marginal distributions was addressed in by chorin , and in by stinis .both authors approximate the renormalized hamiltonian of the system given by the formula , thus is the marginal distribution of the variables .chorin determines the coefficients in an expansion of by first expanding the derivatives , which can be expressed as conditional expectations with respect to the full distribution .stinis shows that a maximum likelihood approximation to the renormalized hamiltonian can be found by minimizing the error in the expectations of the basis functions in an expansion of . 
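for concreteness , the marginal and conditional densities referred to above can be restated in generic notation ( the grouping of the variables into a retained block x and an eliminated block y , and the symbols themselves , are our placeholders for the stripped ones ) :

```latex
\bar{p}(x) = \int p(x,y)\, dy , \qquad
p(y \mid x) = \frac{p(x,y)}{\bar{p}(x)} , \qquad
e^{-\bar{H}(x)} \propto \int e^{-H(x,y)}\, dy .
```

the last relation is the standard definition of a renormalized hamiltonian as the negative logarithm of a marginal density , which is the object approximated in the works of chorin and stinis cited above .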
for applications of related ideas to mcmc simulationssee and .two parallel marginalization algorithms are developed in the next section along with propositions that guarantee that the resulting markov chains satisfy the detailed balance condition . in the final sectionthe conditional path sampling problem is described and numerical results are presented for the bridge sampling and smoothing / filtering problems . a brief introduction to parallel marginalization can be found in .in this section , it is assumed that appropriate approximate marginal distributions are available .how to find these marginal distributions depends on the application and will be discussed here only in the context of the examples presented in this paper .a new markov chain monte carlo method is introduced which uses approximate marginal distributions of the target distribution to accelerate sampling .auxiliary markov chains that sample approximate marginal distributions are evolved simultaneously with the markov chain that samples the distribution of interest . by swapping their configurations , these auxiliary chains pass information between themselves and with the chain sampling the original distribution .assume that the system of interest has a probability density , , where lies in some space suppose further that , by the metropolis - hastings or any other method ( see ) , one can construct a markov chain , , which has as its stationary measure .that is , for two points where is the probability density of a move to given that . here, is the algorithmic step . in order to take advantage of the shorter spatial correlations exhibited by marginal distributions of , a collection of lower dimensional markov chains which approximately sample marginal distributions of is considered .suppose the random variable has components .divide these into two subsets , where has components and has components .recall that the variables are distributed according to the marginal density , and that given the value of the variables , the variables are distributed according to the conditional density , label the domain of the variables .suppose further that an approximation to the marginal distribution of the variables , is available .the sense in which approximates is intentionally left vague . in applications of parallel marginalizationthe accuracy of the approximation manifests itself through an acceptance rate .now let be independent of the random variables and drawn from .notice that represents the same physical variables as though its probability density is not the exact marginal density .continue in this way to remove variables from the system by decomposing into proper subsets as and defining to be independent of the random variables and drawn from an approximation to .clearly each represents fewer physical variables than .just as one can construct a markov chain to sample , one can also construct markov chains to sample .in other words , for each choose a transition probability density such that for all the chains can be arranged in parallel to yield a larger markov chain , the probability density of a move to given that for is given by since the stationary distribution of is the next step in the construction is to allow interactions between the chains and to thereby pass information from the rapidly equilibrating chains on the lower dimensional spaces ( large ) down to the chain on the original space ( ) .this is accomplished by swap moves . 
in a swap move between levels and , a subset , , of the variables is exchanged with the variables .the remaining variables are resampled from the conditional distribution . for the full chain, this swap takes the form of a move from to where and the variables are drawn from and the ellipses represent components of that remain unchanged in the transition .if these swaps are undertaken unconditionally , the resulting chain may equilibrate rapidly , but will not , in general , preserve the product distribution . to remedy this the swap acceptance probability is introduced . recall that is the function resulting from the integration of over the variables as in equation .given that , the probability density of , after the proposal and either acceptance with probability or rejection with probability , of a swap move , is given by for . is the dirac delta function .we have the following proposition .the transition probabilities satisfy the detailed balance condition for the measure i.e. where fix such that . when and are both zero unless for all except and and .therefore it is enough to check that the function is symmetric in and when .plugging in the definition of rearranging terms gives , recall from , that therefore , since the final formula is clearly symmetric in and .the detailed balance condition stipulates that the probability of observing a transition is equal to that of observing a transition and guarantees that the resulting markov chain preserves the distribution .if the chain is also harris recurrent then averages over a trajectory of will converge to averages over .in fact , chains generated by swaps as described above can not be recurrent and must be combined with another transition rule to generate a convergent markov chain . since if is harrisrecurrent with invariant distribution , averages over can be calculated by taking averages over the trajectories of the first components of .notice that the formula for requires the evaluation of at the points while the approximation of by functions on is in general a very difficult problem , its evaluation at a single point is often not terribly demanding .in fact , in many cases , including the examples in chapter 3 , the variables can be chosen so that the remaining variables are conditionally independent given despite this mitigating factor , the requirement that be evaluated before acceptance of any swap is inconvenient . fortunately , and somewhat surprisingly , this requirement is not necessary .in fact , standard strategies for approximating the point values of the marginals yield markov chains that also preserve the target measure . thus even a poor estimate of the ratio appearing in can give rise to a method that is exact in the sense that the resulting markov chain will asymptotically sample the target measure . before moving on to the description of the resulting markov chain monte carlo algorithmsconsider briefly the general problem of evaluating marginal densities .let and be the densities of two equivalent measures with marginal densities , and respectively . 
for any integrable function & = \int \gamma(x , y )p_2(x , y ) p_1(y\vert x ) dy\\ & = \frac{\overline{p}_2(x)}{\overline{p}_1(x ) } \int \gamma(x , y ) p_2(y\vert x ) p_1(x , y ) dy\\ & = \frac{\overline{p}_2(x)}{\overline{p}_1(x ) } \mathbf{e}_{p_2}\left [ \gamma\left(x , y\right ) p_1\left(x , y\right ) \vert \left\{x = x\right\}\right]\end{aligned}\ ] ] thus given the value of at can be obtained through the formula , } { \mathbf{e}_{p_1}\left [ \gamma\left(x , y\right ) p_2\left(x , y\right ) \vert \left\{x = x\right\}\right]}\ ] ] of course , the usual importance sampling concerns apply here .in particular , the approximation of the conditional expectations in will be much easier when lives in a lower dimensional space .similar approximations can be inserted into our acceptance probabilities in place of the ratio for example , if is a reference density approximating then the choice yields where the are samples from thus if are samples from then {a.s . }\frac{\mathbf{e}_{p_l}\left[\frac{\pi_l\left(x_{l+1},\widetilde{x}_l\right ) } { p_l\left(\widetilde{x}_l\vert x_{l+1}\right)}\ \vert \left\{\widehat{x}_l = x_{l+1}\right\}\right ] } { \mathbf{e}_{p_l}\left[\frac{\pi_l\left(\hat{x}_l,\widetilde{x}_l\right ) } { p_l\left(\widetilde{x}_l\vert \hat{x}_l\right)}\ \vert \left\{\widehat{x}_l=\hat{x}_l\right\}\right ] } = \frac{\overline{\pi}_l(x_{l+1})}{\overline{\pi}_l(\hat{x}_l)}\ ] ] in the numerical examples presented here , is a gaussian approximation of .how is chosen depends on the problem at hand ( see numerical examples below ) . in general should be easily evaluated and independently sampled , and it should `` cover '' in the sense that regions where is not negligible should be contained in regions where is not negligible . in the case mentioned above that the variables can be chosen so that the remaining variables are conditionally independent given the conditional density can be written as a product of many low dimensional densities . as mentioned above ,the problem of finding a reference density for importance sampling is much simpler in low dimensional spaces . the following algorithm results from replacing in with approximation of the form .assume that the current position of the chain is where algorithm [ pm1 ] will result in either or where and is approximately drawn from [ pm1 ] the chain moves from to as follows : 1 .let for be independent random variables sampled from ( recall that the swap is between and which are both in ) .2 . evaluate the weights the choice of made above affects the variance of these weights , and therefore the variance of the acceptance probability below .3 . draw the random index according to the probabilities = \frac{w_u^j}{\sum_{m=1}^{m } w_u^m}.\ ] ] set notice that is an approximate sample from 4 .let and draw for independently from notice that the variables depend on while the variables depend on .define the weights 6 . set with probability and with probability . 
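the identities above justify replacing an intractable marginal value by an importance - sampling average over a reference density . the sketch below is a schematic python rendering of that idea for a single proposed swap ; the function names , the generic reference - density interface and the simple min(1 , ratio) acceptance test are our simplifications , and the acceptance probability actually analyzed in the text carries additional factors that are omitted here .

```python
import numpy as np

def estimate_marginal(log_pi, x, sample_ref, log_ref, m=32):
    """Importance-sampling estimate of the marginal of pi at x:
    average of pi(x, u) / q(u | x) over m draws u ~ q(. | x)."""
    draws = [sample_ref(x) for _ in range(m)]
    weights = [np.exp(log_pi(x, u) - log_ref(u, x)) for u in draws]
    return float(np.mean(weights))

def schematic_swap_accept(log_pi, x_proposed, x_current, sample_ref, log_ref, rng, m=32):
    """Schematic Metropolis-style test for a swap move, with estimated marginals
    standing in for the exact ones; returns True if the exchange is accepted."""
    num = estimate_marginal(log_pi, x_proposed, sample_ref, log_ref, m)
    den = estimate_marginal(log_pi, x_current, sample_ref, log_ref, m)
    return rng.random() < min(1.0, num / den)
```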
the transition probability density for the above swap move from to for given by \ \prod \delta_{\left\{y_j = x_j\right\}}\\ + \mathbf{p}\left[\left\{\text{swap is accepted}\right\}\cap \left\{\widetilde{y}'=\tilde{y}_l\right\ } \right]\\ \times \delta_{\left\{\left(\hat{y}_l , y_{l+1}\right ) = \left(x_{l+1},\hat{x}_l\right)\right\ } } \prod_{j\notin\left\{i , i+1\right\}}\delta_{\left\{y_j = x_j\right\}},\end{gathered}\ ] ] where is again the dirac delta function .notice that to find the probability density ] can not and need not be evaluated .algorithm [ pm1 ] can be derived from algorithm [ pm2 ] by setting and notice also that if where , for each is a conditional density satisfying then = \int \pi_l(u^j , x_{l+1 } ) q^j\left((u_1,\dots , u_{j-1})\vert \hat{x}_l , x_{l+1}\right)\prod_{i=1}^j du^i = \overline{\pi}_l(x_{l+1}).\ ] ] thus , if the generate a sequence that satisfies a law of large numbers , then the same holds for the so that more general choices of lead to which converge to correpondingly more general acceptance probabilities than of course , expression points the way to even more general algorithms .algorithms [ pm1 ] and [ pm2 ] correspond to choices of in that make the conditional expectation on the bottom of equal to one .other choices of may improve the variance of the resulting weights .the transition probabilities satisfy the detailed balance condition for the measure fix such that .for such that , \\ \times \delta_{\left\{\left(\hat{y}_l , y_{l+1}\right ) = \left(x_{l+1},\hat{x}_l\right)\right\ } } \prod_{j\notin\left\{i , i+1\right\}}\delta_{\left\{y_j = x_j\right\}},\end{gathered}\ ] ] as in the previous two proofs it can be assumed that for all except and and . since in this case for all it remains to show that if then \end{gathered}\ ] ] is symmetric in and .summing over disjoint events , = \\ \sum_{j=1}^{m}\mathbf{p}\left[\left\{\text{swap is accepted}\right\}\cap \left\{\tilde{y}_l = u^j \right\ } \cap \left\{j = j\right\ } \right]\end{gathered}\ ] ] thus will be symmetric if for each the function \end{gathered}\ ] ] is symmetric . recall the definition of the weights and the fact that and for thus , definition implies that for all , thus , which can be rewritten , plugging and into this expression yields , by the symmetry property of this expression is symmetric in and .clearly a markov chain that evolves only by swap moves can not sample all configurations , ie .the chain generated by is not -irreducible for any non trivial measure these swap moves must therefore be used in conjunction with a transition rule that can reach any region of space .more precisely , let from expression be harris recurrent with stationary distribution ( see ) .the the transition rule for parallel marginalization is where and is the probability that a swap move occurs . dictates that , with probability , the chain attempts a swap move between levels and where is a random variable chosen uniformly from .next , the chain evolves according to . 
with probability the chain moves only according to and does not attempt a swap . the next result guarantees the invariance of under evolution by . it is not difficult to verify that the chain generated by has invariant measure and is harris recurrent if the chain generated by has these properties . thus by combining standard mcmc steps on each component , governed by the transition probability , with swap steps between the components governed by , an mcmc method results that not only uses information from rapidly equilibrating lower dimensional chains , but is also convergent . in this section i consider applications of parallel marginalization to two conditional path sampling problems for a one dimensional stochastic differential equation , where and are real valued functions of . one must first approximate by a discrete process for which the path density is readily available . let be a mesh on which one wishes to calculate path averages . one such approximate process is given by the linearly implicit euler scheme ( a balanced implicit method , see ) , here is an approximation to at time . the reader should note that the rate of convergence of the above scheme to the solution of would not be affected by the insertion in of a non - negative constant in front of the term . the choice of made here seemed to improve the stability of the resulting scheme for large values of . the are independent gaussian random variables with mean 0 and variance 1 , and is assumed to be a power of 2 . the choice of this scheme over the euler scheme ( see ) is due to its favorable stability properties as explained later . it is henceforth assumed that instead of is the process of interest . the first of the conditional sampling problems discussed here is the bridge sampling problem in which one generates samples of transition paths between two states . this problem arises , for example , in financial volatility estimation where , given a sequence of observations with , the goal is to estimate the diffusion term ( assumed here to be constant ) appearing in the stochastic differential equation . since in general one can not easily evaluate the transition probability between times and ( and thus the likelihood of the observations ) it is necessary to generate samples between the observations , where ( assumed to be an integer ) and denotes the value of the process at time . it is then easy to evaluate the likelihood of a path given a particular value of the volatility . the filtering / smoothing problem is similar to the financial volatility example of the previous paragraph except that now it is assumed that the observations are noisy functions of the underlying process . for example , one may wish to sample possible trajectories taken by a rocket given somewhat unreliable gps observations of its position . if the conditional density of the observations given the position of the rocket is known , it is possible to generate conditional samples of the trajectories . in the bridge path sampling problem one seeks to approximate conditional expectations of the form where is a real valued function , and is a solution to . without the condition above , generating an approximate sample path is a relatively straightforward endeavor . one simply generates a sample of , then evolves with this initial condition . however , the presence of information about complicates the task .
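as an illustration of the type of discretization being used , the sketch below implements one common linearly implicit ( drift - semi - implicit ) euler step for a scalar sde dx = b(x) dt + sigma(x) dw ; this is our generic rendering rather than the exact balanced implicit scheme of the paper , and the callables b , db ( the derivative of the drift ) and sigma are placeholders :

```python
import numpy as np

def linearly_implicit_euler_step(x, dt, b, db, sigma, rng):
    """One drift-semi-implicit Euler step: the drift is linearized about x so
    that the increment is damped by (1 - dt * db(x)), which improves stability
    for stiff drifts while keeping the update explicit to evaluate."""
    xi = rng.standard_normal()
    increment = dt * b(x) + np.sqrt(dt) * sigma(x) * xi
    return x + increment / (1.0 - dt * db(x))
```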
in general , some sampling method which requires only knowledge of a function proportional to the conditional density of must be applied . the approximate path density associated with discretization is a product of gaussian one - step transition densities , each contributing a factor of the form $\exp\!\left( - ( \,\cdot\, )^{2} / \left( 2\sigma^{2}(x)\,\triangle \right) \right)$ , where the squared increment in the numerator is determined by the scheme . at this point the parallel marginalization sampling procedure is applied to the density . however , as indicated above , a prerequisite for the use of parallel marginalization is the ability to estimate marginal densities . in some important problems homogeneities in the underlying system yield simplifications in the calculation of these densities by the methods in . these calculations can be carried out before implementation of parallel marginalization , or they can be integrated into the sampling procedure . in some cases , computer generation of the can be completely avoided . the examples presented here are two such cases . let ( recall is a power of 2 ) . decompose as where and in the notation of the previous sections , where and . in words , the hat and tilde variables represent alternating time slices of the path . for all fix and . we choose the approximate marginal densities where for each , is defined by successive coarsenings of . that is , . since will be sampled using a metropolis - hastings method with and fixed , knowledge of the normalization constants is unnecessary . notice from that , conditioned on the values of and , the variance of is of order . thus any perturbation of which leaves fixed for and which is compatible with the joint distribution must be of the order . this suggests that distributions defined by coarser discretizations of will allow larger perturbations , and consequently will be easier to sample . however , it is important to choose a discretization that remains stable for large values of . for example , while the linearly implicit euler method performs well in the experiments below , similar tests using the euler method were less successful due to limitations on the largest allowable values of . in this numerical example bridge paths are sampled between time 0 and time 10 for a diffusion in a double well potential . the left and right end points are chosen as . is the level of the parallel marginalization markov chain at algorithmic time . there are 10 chains ( ) . the observed swap acceptance rates are reported in table . notice that the swap rates are highest at the lower levels but seem to stabilize at the higher levels .

levels            0/1    1/2    2/3    3/4    4/5    5/6    6/7    7/8    8/9
acceptance rate   0.86   0.83   0.75   0.69   0.54   0.45   0.30   0.22   0.26

swaps between levels and . [ swaprates1 ]

let denote the midpoint of the path defined by ( i.e. an approximate sample of the path at time 5 ) . in figure [ fig1 ] the autocorrelation of is compared to that of a standard metropolis - hastings rule using 1 - dimensional gaussian random walk proposals . in the figure , the time scale of the autocorrelation for the metropolis - hastings method has been scaled by a factor of 1/10 to more than account for the extra computational time required per iteration of parallel marginalization . the relaxation time of the parallel chain is clearly reduced .
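to make the role of the path density concrete , the sketch below evaluates an unnormalized log - density of a discrete path under simple gaussian one - step transitions , which is the quantity a metropolis - hastings bridge sampler needs up to an additive constant ; the explicit - euler increments used here are our simplification of the linearly implicit scheme described above :

```python
import numpy as np

def log_path_density(path, dt, b, sigma):
    """Unnormalized log-density of a discrete path x_0, ..., x_N under Gaussian
    one-step transitions x_{n+1} ~ N(x_n + dt * b(x_n), dt * sigma(x_n)**2).
    b and sigma are assumed to act elementwise on numpy arrays.  For bridge
    sampling the endpoints are held fixed, so only interior points are perturbed
    and constants cancel in acceptance ratios."""
    x = np.asarray(path, dtype=float)
    mean = x[:-1] + dt * b(x[:-1])
    var = dt * sigma(x[:-1]) ** 2
    return float(np.sum(-0.5 * (x[1:] - mean) ** 2 / var - 0.5 * np.log(var)))
```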
( figure : autocorrelation for the metropolis - hastings method with 1-d gaussian random walk proposals ( solid ) and parallel marginalization ( dotted ) . the x - axis runs from 0 to 10000 iterations of the metropolis - hastings method and from 0 to 1000 iterations of parallel marginalization . this rescaling more than compensates for the extra work for parallel marginalization per iteration . ) in these numerical examples , parallel marginalization is applied with a slight simplification as detailed in the following algorithm . the chain moves from to as follows : 1 . generate m independent gaussian random paths with independent components of mean 0 and variance . 2 . for each and let 3 . define the weights where is defined by the choice in step 1 as 4 . choose according to the probabilities $w_u^j \big/ \sum_{k=1}^{m} w_u^k$ . set 5 . set and for set 6 . define the weights 7 . set with probability and with probability . this simplification reduces by half the number of gaussian random variables needed to evaluate the acceptance probability but may not be appropriate in all settings . for this problem , the choice of in , the number of samples and , seems to have little effect on the swap acceptance rates . in the numerical experiment for swaps between levels and . the results of the metropolis - hastings and parallel marginalization methods applied to the above bridge sampling problem after a run time of 10 minutes on a standard workstation are presented in figures [ bridgesample ] and [ parbridge ] . apparently the sample generated by parallel marginalization is a reasonable bridge path while the metropolis - hastings method has clearly not converged . in the non - linear smoothing and filtering problem one seeks to approximate conditional expectations of the form where the real valued processes and are given by the system , , , and are real valued functions of . the are real valued independent random variables drawn from the density and are independent of the brownian motion . the process is a hidden signal and the are noisy observations . the idea of computing the above conditional expectation by conditional path sampling has been suggested in . popular alternatives include particle filters ( see ) and ensemble kalman filters ( see ) . again , begin by discretizing the system . assume that is an integer and let . the linearly implicit euler scheme gives where represents the discrete time approximation to for . the are independent gaussian random variables with mean 0 and variance 1 . the are independent of the . is again assumed to be a power of 2 . the approximate path measure for this problem is , and the approximate marginals are chosen as , where , and are as defined in the previous section . in this example , samples of the smoothed path are generated between time 0 and time 10 for the same diffusion in a double well potential . the densities and are chosen as . the function in is the identity function . the observation times are with for and for . there are 8 chains ( ) . the observed swap acceptance rates are reported in table . notice that the swap rates are again highest at the lower levels but , for this problem , become unreasonably small at the highest level .

levels            0/1    1/2    2/3    3/4    4/5    5/6    6/7
acceptance rate   0.86   0.83   0.74   0.65   0.46   0.23   0.04

swaps between levels and . [ swaprates2 ]

again , denotes the midpoint of the path defined by ( i.e. an approximate sample of the path at time 5 ) .
in figure [ fig2 ] the autocorrelation of is compared to that of a standard metropolis - hastings rule . the figure has been adjusted as in the previous example . the relaxation time of the parallel chain is again clearly reduced . ( figure : autocorrelation for the metropolis - hastings method with 1-d gaussian random walk proposals ( solid ) and parallel marginalization ( dotted ) . the x - axis runs from 0 to 10000 iterations of the metropolis - hastings method and from 0 to 1000 iterations of parallel marginalization . this rescaling more than compensates for the extra work for parallel marginalization per iteration . ) the algorithm is modified as in the previous example . for this problem , acceptable swap rates require a higher choice of in than needed in the bridge sampling problem . in this numerical experiment for swaps between levels and . the results of the metropolis - hastings and parallel marginalization methods applied to the smoothing problem above after a run time of 10 minutes on a standard workstation are presented in figures [ pathsmooth ] and [ parsmooth ] . apparently the sample generated by parallel marginalization is a reasonable bridge path while the metropolis - hastings method has clearly not converged . a markov chain monte carlo method has been proposed and applied to two conditional path sampling problems for stochastic differential equations . numerical results indicate that this method , parallel marginalization , can have a dramatically reduced equilibration time when compared to standard mcmc methods . note that parallel marginalization should not be viewed as a stand - alone method . other acceleration techniques such as hybrid monte carlo can and should be implemented at each level within the parallel marginalization framework . as the smoothing problem indicates , the acceptance probabilities at coarser levels can become small . the remedy for this is the development of more accurate approximate marginal distributions by , for example , the methods in and . i would like to thank prof . a. chorin for his guidance during this research , which was carried out while i was a ph.d . student at u. c. berkeley . i would also like to thank prof . o. hald , dr . p. okunev , dr . p. stinis , and dr . xuemin tu , for their very helpful comments . this work was supported by the director , office of science , office of advanced scientific computing research , of the u. s. department of energy under contract no . de - ac03 - 76sf00098 and national science foundation grant dms0410110 .
|
monte carlo sampling methods often suffer from long correlation times . consequently , these methods must be run for many steps to generate an independent sample . in this paper a method is proposed to overcome this difficulty . the method utilizes information from rapidly equilibrating coarse markov chains that sample marginal distributions of the full system . this is accomplished through exchanges between the full chain and the auxiliary coarse chains . results of numerical tests on the bridge sampling and filtering / smoothing problems for a stochastic differential equation are presented .
|
mathematical information retrieval ( mir ) is an important emerging area of information retrieval research .technical documents often include a substantial amount of mathematics , but math is difficult to use directly in queries . for the most part , large - scale search engines do not support formula search other than indirectly , e.g. , through matching latex strings .formula queries allow documents with similar expressions or mathematical models to be discovered automatically , providing a new way to search and browse technical literature . for mathematical non - experts, querying based on the appearance of expressions may also be useful , for example when students try to interpret unfamiliar notation .many have had the experience of wishing they could search through technical documents for similar formulae rather than find words to describe them .figure [ fig : q14 ] shows the top of a results page from the new _ tangent _ formula retrieval engine .the 17 hits shown are grouped by their structure ( exact match , variable substitution , operator substitution ) , and groups are ordered by the similarity of the contained formulae to the query .efficient and effective retrieval becomes more difficult when the best matches are even less similar to the query formula ( e.g. , the repository includes larger expressions that include pieces similar to one or more parts of the query formula ) or when wildcards that can match arbitrary symbols or subexpressions are included in the query . for scalability , tangent now employs a two - level cascading search system that provides both query runtime efficiency and ranking effectiveness for formula search .the first level is the _ core engine _ , which uses an uncompressed inverted index over tuples representing pairs of symbols in an expression .this level provides limited support for wildcard symbols and can quickly produce an ordered list of candidate results using a simple ranking algorithm .the second level re - ranks the top candidate results using _ maximum subtree similarity _ ( mss ) , a new metric for computing the similarity of mathematical formulae based on their appearance .the system architecture is summarized in figure [ fig : arch ] .* contributions .* this paper includes three primary contributions .our first is the incorporation of substantially smaller indices than those used previously ( section [ sec : coreengine ] ) , which can obtain strong retrieval results in a scalable system .the second contribution is the mss metric ( section [ sec : similarity ] ) , which produces an intuitive ordering for retrieved formula based on the visual structure of expressions , taking unifiable symbol types into account .the third is a new symbol pair retrieval model ( section [ sec : structure ] ) that incorporates the first two contributions in an efficient and effective two - stage cascaded implementation , as demonstrated experimentally ( section [ sec : experiments ] ) .in addition , we believe that the form of output adopted , namely grouping results by similarity and match structure , is an improvement over existing mir interfaces .interest in mathematical information retrieval ( mir ) has been increasing in recent years , as witnessed by the ntcir-10 and ntcir-11 math retrieval tasks held in 2013 and 2014 , respectively .math representations are naturally hierarchical , and often represented by trees that may be encoded as text strings . 
as a result , approaches to query - by - expression may be categorized as _ tree - based _ or _ text - based _ , as determined by the structures used to represent formulae . the encoded hierarchies commonly represent either the arrangement of symbols on writing lines ( as in latex or presentation mathml ) or the underlying mathematical semantics as nested applications of operations to arguments ( as in openmath or content mathml ) . both appearance and semantic representations have been used for retrieval . * text - based approaches . * in text - based approaches , math expression trees are linearized , and often normalized , before indexing and retrieval . common normalizations include defining synonyms for symbols ( e.g. , function names ) , using canonical orderings for commutative operators and spatial relationships ( e.g. , to group ` + b ` with ` b+ ` and ` _ i^2 ` with ` ^2_i ` ) , enumerating variables , and replacing symbols by their mathematical type ( e.g. , numbers , variables , and classes of operators ) . although linearization masks significant amounts of structural information , it allows text and math retrieval to be carried out efficiently by a single search engine ( commonly lucene ) . as a result , most text - based formula retrieval methods use tf - idf ( term frequency - inverse document frequency ) retrieval after linearizing expressions . in an alternative approach , the largest common substring between the query formula and each indexed expression is used to retrieve latex strings . this captures more structural information , but also requires evaluating all expressions in the index using a quadratic algorithm . * tree - based approaches . * tree - based formula retrieval approaches use explicit trees to represent expression appearance or semantics directly . these approaches index complete formula trees , often along with their subexpressions to support partial matching . methods have been developed to compress tree indices by storing identical subtrees uniquely and to match expressions using tree - edit distances with early stopping for fast retrieval . the _ substitution tree _ data structure , first designed for unification of predicates , has been used to create tree - structured indices for formulae . descendants of an index tree node contain expressions that unify with the parameterized expression stored at that node . a recent tree - based technique adapts tf - idf retrieval for vectors of subexpressions and generalized subexpressions in which arguments are represented by untyped placeholders . in this method a symbol layout tree is modified to capture some semantic properties , normalizing the order of arguments for commutative operators and representing operator precedences explicitly . * ` spectral ' tree - based approaches . * an emerging sub - class of the tree - based approach uses paths or small subtrees rather than complete subtrees for retrieval . one system converts sub - expressions in operator trees to words representing individual arguments and operator - argument triples . a lattice over the sets of generated words is used to define similarity , and a breadth - first search constructs a neighbor graph traversed during retrieval . another system employs an inverted index over paths in operator trees from the root to each operator and operand , using exact matching of paths for retrieval . the large number of possible unique paths combined with exact matching makes this technique brittle .
rather than indexing paths from the root of the tree , the tangent math retrieval system stores _ relative _ positions of symbol pairs in symbol layout trees to create a `` bag of symbol pairs '' representation . this symbol pair representation supports partial matches in a flexible way , while preserving enough structural information to return exact matches for queries . set agreement metrics are applied to the bags of symbol pairs to compute formula similarities . for example , the harmonic mean for the percentage of matched pairs in the query and a candidate ( i.e. , _ dice s coefficient _ for set similarity : for a query and a candidate tree , let and , respectively , denote a set of their features ( such as a set of node and edge labels ) and let denote the set of features they have in common ; dice s coefficient of similarity can then serve as the _ score _ ) prefers large matches of the query with few additional symbols in the candidate . tangent ( starting with version 2 ) also accommodates matrices , isolated symbols , and wildcard symbols and augments formula search with text search . formula retrieval based on bags of symbol pairs combined with keyword retrieval using lucene allowed tangent to produce the highest precision result for the ntcir-11 math-2 main retrieval task with combined text and formula queries ( 92% ) . in this paper , we address needed improvements for tangent , described in the next section . the math retrieval task we address is to search a corpus to produce a ranked list of formulae ( and the pages on which those formulae are located ) that match a query formula expressed in latex or presentation mathml , with or without the inclusion of wildcard symbols . formulae ranked highly should match the query formula exactly or , failing that , closely resemble it . the system should be scalable in terms of index size , indexing speed , and querying speed . * scalability and retrieval effectiveness . * as originally implemented , tangent is not scalable : indexing speed is less than 200 formulae per second , producing indices of over 1 gb for the ntcir-11 wikipedia corpus and 30 gb for the ntcir-11 arxiv corpus . retrieval time is also slow , averaging 5 seconds per query for the wikipedia task ( under 400 thousand distinct formulae ) and averaging 3 _ minutes _ per query for the ntcir main task ( 3 million distinct formulae ) . furthermore , while retrieval effectiveness is very good , there is substantial room for improvement . * symbols and containers . * tangent uses a symbol layout tree ( slt ) to represent the appearance of a mathematical formula . tree nodes represent individual symbols and visually explicit aggregates , such as fractions , matrices , function arguments , and parenthesized expressions . in tangent version 3 , all symbols except those representing operators or separators ( e.g. , commas ) are prefixed with their type , represented by a single character followed by an exclamation point . more specifically , slt nodes represent : typed mathematical symbols : numbers ( n! ) , variable names ( v! ) , and text fragments such as _ lim _ , _ otherwise _ , and _ such that _ ( t! ) ; fractions ( f! ) ; container objects : radicals ( r! ) and matrices , tabular structures , and parenthesized expressions ( m! ) ; explicitly specified whitespace ( w! ) ; wildcard symbols ( ? ) ; and mathematical operators . because of their visual similarity , all tabular structures , including matrices , binomial coefficients , and piecewise defined functions , are encoded using the matrix indicator m! .
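as a small illustration of the set - agreement scoring described above , the sketch below computes dice s coefficient ( equivalently , the harmonic mean of the matched - pair percentages of query and candidate ) over two bags of symbol - pair features ; the tuple encoding shown in the comment is only a schematic stand - in for tangent s actual tuples :

```python
def dice_coefficient(query_pairs, candidate_pairs):
    """query_pairs, candidate_pairs: sets of symbol-pair features, for example
    ('v!x', 'n!2', 'above') for a variable x with a superscript 2.
    Returns 2 * |matched| / (|Q| + |C|), i.e. the harmonic mean of the fraction
    of query pairs matched and the fraction of candidate pairs matched."""
    q, c = set(query_pairs), set(candidate_pairs)
    if not q and not c:
        return 1.0
    matched = len(q & c)
    return 2.0 * matched / (len(q) + len(c))
```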
if a matrix - like structure is surrounded by fence characters , then those symbols are indicated after the exclamation mark . finally , the indicator includes a pair of numbers separated by an , indicating the number of rows and the number of columns in the structure . for example , m!2x3 represents a 2x3 table with no surrounding delimiters and m!()1x5 represents a 1x5 table surrounded by parentheses . importantly , _ all _ parenthesized subexpressions are treated as if they were 1x1 matrices surrounded by parentheses , and , in particular , the arguments for any -ary function are represented as if they were a 1x matrix surrounded by parentheses . as well as associating a label ( e.g. , v!x ) with every slt node , every node has an associated type ( _ number _ , _ variable _ , _ operator _ , etc . ) . a node s type is reflected in its label , usually represented by the part of the label up to an exclamation point ( e.g. , v! ) , but node labels preceded by a question mark ( ? ) have type _ wildcard _ ; a matrix node s type includes the matrix dimensions , but not its fence characters ( e.g. , m!2x3 ) ; and other node labels without exclamation marks have type _ operator _ . * spatial relationships . * labeled edges in the slt capture the spatial relationships between objects represented by the nodes : * next * ( ) references the adjacent object that appears to the right on the same line ; * within * ( ) references the radicand of a root or the first element appearing in row - major order in a structure represented by m! ; * element * ( ) references the next element appearing in row - major order in a structure represented by m! ; * above * ( ) references the leftmost object on a higher line ( e.g. , superscript , over symbol , fraction numerator , or index for a radical ) ; * below * ( ) references the leftmost object on a lower line ( e.g. , subscript , under symbol , fraction denominator ) ; * pre - above * ( ) references the leftmost object of a prescripted superscript ; * pre - below * ( ) references the leftmost object of a prescripted subscript . an slt is rooted at the leftmost object on the main baseline ( writing line ) of the formula it represents . figure [ fig : slt ] shows an example of an slt , where for simplicity , unlabeled edges represent the _ next _ relationship and types other than _ wildcard _ are not displayed . ( figure : ( a ) query formula and symbol layout tree ( slt ) . ) we now consider how well mss - based rankings correspond to human perceptions of formula similarity , through evaluating top-10 results . to first select which combinations of parameters to use for our human evaluation , we examined the mss scores of formulae returned by the core before re - ranking . in figure [ fig : goldstandard]a , from the top-100 hits returned by the core for the ntcir-11 wikipedia task , we compute normalized discounted cumulative gain ( ndcg ) distributions for the maximum subtree similarity scores in each of the top-100 hits compared to an mss ` gold standard . ' in the gold standard all formulae in the wikipedia collection have been scored for each of the 100 wikipedia queries , and the top-100 formulae for each query are used for normalization . we again consider a number of different window size and eol parameters ( , with and without eol tuples ) . the first five columns show increasing values without eol tuples , followed by the same range of values with eol tuples .
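for reference , the ndcg statistic used above can be computed as follows ; this is the standard definition with a log2 discount , written by us as a generic utility rather than taken from the tangent code base :

```python
import math

def ndcg(gains, ideal_gains, k=100):
    """Normalized discounted cumulative gain at rank k.
    gains: relevance values (here, MSS scores) of the returned hits in ranked order.
    ideal_gains: relevance values of the best possible ranking, e.g. the
    gold-standard scores of the top-k formulae for the query."""
    def dcg(values):
        return sum(v / math.log2(i + 2) for i, v in enumerate(values[:k]))
    ideal = dcg(sorted(ideal_gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0
```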
adding eol tuples shifts values around the median down .we took this as evidence that including eol tuples was not helping return more similar formulae as measured by ndcg over the mss scores .further , as moving from to reduces the variance most dramatically , to keep the number of hits for individual participants to evaluate reasonable , we chose to consider only and . * data . * a set of 10 queries were selected using random sampling from the wikipedia query set .five of the queries contained wildcards , and the other five did not .some queries were manually rejected and then randomly replaced to insure that a diverse set of expression sizes and structures were collected . using the wikipedia collection , for the three versions of the core compared ( , no eol tuples ) , we applied reranking to the top-100 hits , and then collected the top-10 hits returned by each query for rating . * evaluation protocol .* participants completed the study alone in a private , quiet room with a desktop computer running the evaluation interface in a web browser .the web pages first provided an overview , followed by a demographic questionnaire , instructions on evaluating hits , and then familiarization trials ( 10 hits ; 5 for each of two queries ) .after familiarization participants evaluated hits for the 10 queries , and finally completed a brief exit questionnaire .participants were paid $ 10 at the end of their session .participants rated the similarity of queries to results using a five - point likert scale ( very dissimilar , dissimilar , neutral , similar , very similar ) .it has been shown that presenting search results in an ordered list affects the likelihood of hits being identified as relevant .instead we presented queries along with each hit in isolation . to avoid other presentation order effects , the order of query presentation was randomized , followed by the order in which hits for each query were presented .* demographics and exit questionnaire . *21 participants ( 5 female , 16 male ) were recruited from the computing and science colleges at our institution .their age distribution was : 18 - 24 ( 8) , 25 - 34 ( 9 ) , 35 - 44 ( 1 ) , 45 - 54 ( 1 ) , 55 - 64 ( 1 ) and 65 - 74 ( 1 ) .their highest levels of education completed were : bachelor s degree ( 9 ) , master s degree ( 9 ) , phd ( 2 ) , and professional degree ( 1 ) .their reported areas of specialization were : computer science ( 13 ) , electrical engineering ( 2 ) , psychology ( 1 ) , sociology ( 1 ) , mechanical engineering ( 1 ) , computer engineering ( 1 ) , math ( 1 ) and professional studies ( 1 ) . 
in the post - questionnaire , participants rated the evaluation task as very difficult ( 3 ) , somewhat difficult ( 10 ) , neutral ( 6 ) , somewhat easy ( 2 ) or very easy ( 0 ) . they reported different approaches to assessing similarity . many considered whether operations and operands were of the same type or if two expressions would evaluate to the same result . others reported considering similarity primarily based on similar symbols , and shared structure between expressions .

average similarity rating ( standard deviation ) by hit rank , for the three core versions ( rows ) :

hit rank    1              2              3              4              5
            4.54 ( 0.78 )  3.79 ( 1.16 )  3.48 ( 1.31 )  3.20 ( 1.30 )  2.83 ( 1.24 )
            4.54 ( 0.78 )  3.71 ( 1.22 )  3.48 ( 1.30 )  3.16 ( 1.28 )  2.90 ( 1.25 )
            4.54 ( 0.78 )  3.78 ( 1.18 )  3.59 ( 1.22 )  3.27 ( 1.15 )  2.98 ( 1.24 )

hit rank    6              7              8              9              10
            2.94 ( 1.22 )  2.65 ( 1.19 )  2.78 ( 1.20 )  2.78 ( 1.20 )  2.85 ( 1.24 )
            2.93 ( 1.19 )  2.85 ( 1.25 )  2.57 ( 1.18 )  2.74 ( 1.22 )  2.80 ( 1.13 )
            2.92 ( 1.23 )  2.80 ( 1.17 )  2.98 ( 1.23 )  2.92 ( 1.21 )  2.87 ( 1.17 )

[ likert ]

* similarity ratings . * as seen in table [ likert ] , the likert - based similarity rating distributions are very similar , and identical in a number of places . in all three conditions , average ratings increase consistently from the 5th to 1st hits . the top 4 formula hits all have an average rating higher than 3 , suggesting that a number of participants felt these formulae had some similarity with the query expression . after this the ratings are less than ` 3 ' and sometimes shift . perhaps because matches were not highlighted , in a number of cases exact matches were rated as ` 4 ' rather than ` 5 . ' as was found for the ntcir-11 wikipedia benchmark , it appears that a window size of 1 is able to obtain strong results . this is appealing , because this requires the smallest index size and has the fastest retrieval times . we have presented a new technique for ranking appearance - based formula retrieval results , using the candidate formula subtree with the harmonic mean for matched symbols and edges after greedy unification of symbols by type . this maximum subtree similarity ( mss ) metric prefers large connected matches of the query within the formula . in an experiment we found that for the top-10 hits , the human ratings of similarity were consistent with the ranking produced by our metric . we have also described an efficient two - stage implementation of our retrieval model that produces state - of - the - art results for the ntcir-11 wikipedia formula retrieval task , using a much smaller index . in the future we plan to explore using end - of - line symbols , but only for small expressions . this will not require much additional space in the index , while greatly reducing the cost of wildcard end - of - line tuples . we also plan to support multiple copies of a formula in a document , devise new methods for ranking documents based on multiple matches and/or query expressions , and integrate our formula retrieval system with keyword search . this material is based upon work supported by the national science foundation ( usa ) under grant nos . iis-1016815 and hcc-1218801 . financial support from the natural sciences and engineering research council of canada under grant no . 9292/2010 , mitacs , and the university of waterloo is gratefully acknowledged .
zanibbi . combining tf - idf text retrieval with an inverted index over symbol pairs in math expressions : the tangent math search engine at ntcir 2014 . in _ ntcir _ , pages 135 - 142 , 2014 .
|
with the ever - increasing quantity and variety of data worldwide , the web has become a rich repository of mathematical formulae . this necessitates the creation of robust and scalable systems for mathematical information retrieval , where users search for mathematical information using individual formulae ( query - by - expression ) or a combination of keywords and formulae . often , the pages that best satisfy users information needs contain expressions that only approximately match the query formulae . for users trying to locate or re - find a specific expression , browse for similar formulae , or who are mathematical non - experts , the similarity of formulae depends more on the relative positions of symbols than on deep mathematical semantics . we propose the maximum subtree similarity ( mss ) metric for query - by - expression that produces intuitive rankings of formulae based on their appearance , as represented by the types and relative positions of symbols . because it is too expensive to apply the metric against all formulae in large collections , we first retrieve expressions using an inverted index over tuples that encode relationships between pairs of symbols , ranking hits using the dice coefficient . the top- formulae are then re - ranked using mss . our approach obtains state - of - the - art performance on the ntcir-11 wikipedia formula retrieval benchmark and is efficient in terms of both index space and overall retrieval time . retrieval systems for other graphical forms , including chemical diagrams , flowcharts , figures , and tables , may also benefit from adopting our approach . [ query processing ]
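To make the two-stage retrieval described in this abstract concrete, the sketch below (illustrative only) scores candidates with the Dice coefficient over sets of symbol-pair tuples and then re-scores the top-k with a stand-in for MSS, taken here as the harmonic mean of the fractions of query symbols and query edges covered by the best connected match. The actual tuple encoding, greedy unification procedure and normalization are more involved, so the second stage should be read as an assumption-laden placeholder rather than the authors' metric.

```python
def dice(query_tuples, candidate_tuples):
    q, c = set(query_tuples), set(candidate_tuples)
    return 2 * len(q & c) / (len(q) + len(c)) if (q or c) else 0.0

def harmonic_mean(a, b):
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0

def mss_like(matched_symbols, matched_edges, n_query_symbols, n_query_edges):
    # placeholder for Maximum Subtree Similarity: harmonic mean of the
    # matched-symbol and matched-edge fractions of the query; the counts
    # would come from greedy unification of symbols by type (not shown here)
    return harmonic_mean(matched_symbols / max(n_query_symbols, 1),
                         matched_edges / max(n_query_edges, 1))

def first_stage(query_tuples, index, k=100):
    # index: {formula_id: set of symbol-pair tuples}; rank by Dice, keep top-k
    ranked = sorted(index, key=lambda fid: dice(query_tuples, index[fid]),
                    reverse=True)
    return ranked[:k]
```

The top-k identifiers returned by `first_stage` would then be re-ranked by the MSS-style score before presentation.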
|
controlling chaotic transport is a key challenge in many branches of physics like for instance , in particle accelerators , free electron lasers or in magnetically confined fusion plasmas . for these systems , it is essential to control transport properties without significantly altering the original system under investigation nor its overall chaotic structure . herewe review a control strategy for hamiltonian systems which is based on building barriers by adding a small apt perturbation which is localized in phase space , hence confining all the trajectories . for more details on the methods ( including exact results and numerical implementations , we refer to refs . ) .we consider the class of hamiltonian systems that can be written in the form i.e. an integrable hamiltonian ( with action - angle variables ) plus a small perturbation . generically , it is expected that the phase space of is a mixture between regular and chaotic behaviors .the transition to hamiltonian chaos occurs by successive break - ups of invariant kam tori which foliate the phase space of . for two degrees of freedom ,it is worthwhile noticing that this transition is very similar to phase transitions in statistical mechanics , in the sense that it can be described by hyperbolic invariant sets of a renormalization operator .more than 25 years ago , chirikov stated an empirical criterion based on the overlap of primary resonances in order to get an idea on how chaos arises in hamiltonian systems and to get rough estimates of the threshold of large scale diffusion .the idea of chirikov is that chaos is obtained when the primary resonant islands overlap , i.e. , when the distance between two of them is smaller than the sum to their half - widths .we now consider a small modification of the original system ( where is smaller than with an appropriate norm of functions ) .the perturbation introduces generically additional resonances .therefore from chirikov s criterion , it is expected that there will be more overlaps of resonant islands and hence more chaos .the aim of our control strategy is to tailor appropriate perturbations acting in the opposite direction , namely such that the hamiltonian has more smooth invariant tori and hence is more regular than contrary to the conventional wisdom inherited from chirikov s criterion . for practical purposes ,the control term should be small with respect to the perturbation , and localized in phase space ( i.e. the subset of phase space where is non - zero is finite and small ) . in this article, we highlight a method of control on a specific and paradigmatic example , the forced pendulum .we compute explicitly the formula of the control term which is able to create an isolated barrier of transport .we show that not only our method of control provides a control term which is much smaller than the perturbation but it also provides the explicit location of the created invariant torus which is a crucial step in order to localize the action of the control .the localized control method has been extensively described in ref . where the corresponding rigorous mathematical results were proved .we state here the main result of this paper . for a hamiltonian system written in action - angle variables with degrees of freedom ,the perturbed hamiltonian is where and is a non - resonant vector of . 
without loss of generality, we consider a region near ( by translation of the actions ) and , since the hamiltonian is nearly integrable , the perturbation has constant and linear parts in actions of order , i.e. where is of order .we notice that for , the hamiltonian has an invariant torus with frequency vector at for any not necessarily small .the controlled hamiltonian we construct is where is a smooth characteristic function of a region around a targeted invariant torus .we notice that the control term we construct only depends on the angle variables and is given by where is a linear operator defined as a pseudo - inverse of , i.e. acting on as note that is of order .this can be seen from eq .( [ eqn : e4v ] ) since can be rewritten as and is quadratic in the actions . herethe dependence of the control in the actions is in the function . for any perturbation ,hamiltonian ( [ eqn : gene ] ) has an invariant torus with frequency vector close to .the equation of the torus which is constructed by the addition of is which is of order since is of order .there is a significantly large freedom in choosing the function .it is sufficient to have for .for instance , would be a possible and simpler choice , however representing a long - range control since the control term would be applied on all phase space . on the opposite way, we can design a function such that the control is localized around the created invariant torus .the support of the function is reduced to a small neighborhood of the torus .the main advantage of this step function is that it needs fewer energy ( only in the part of phase space where the control is localized ) and also it does not change the other part of phase space , not altering the overall chaotic structure of the system . in particular , the dynamics is not changed in the region of the dominant resonances .therefore a clever choice for the localization is where the parameters and are small and is sufficiently smooth on ( for instance of class ) .one example of is depicted on fig .[ fig : omegaloc ] . _remark : _ the amplitude of the control term is proportional to . however , if the aim of the control is to create a given invariant torus ( e.g. , specified by its frequency ) , the control only makes sense if the invariant torus with frequency does not already exist in the original ( uncontrolled ) system . in other words , for where is the threshold of break - up of the invariant torus , there is no need of control since the uncontrolled hamiltonian has an invariant torus acting as a barrier ( or an effective barrier ) to diffusion .however , we point out that , below the critical threshold , the specific location of the invariant torus in the uncontrolled system is not known exactly in general ( this is in particular the case for the forced pendulum described below ) . with a small modification of the potential ( adding the proposed control term ) , there is an exact formula ( [ eqn : eto ] ) giving the equation of the torus . in the next section, we will see on a specific example that the amplitude of the control term is small compared with the perturbation .we notice that it was the case for drift motion which was considered in refs . and that it was also the case for the experimental control in the traveling wave tube ( twt ) .we consider the following forced pendulum model . 
\label{hpend}\ ] ] if we consider the chirikov s criterion , it leads to an estimate of the threshold of large scale chaos .in fact , the mechanism of the transition to chaos is much more complex than what it is hinted by this empirical criterion .several methods have been developed since to get much deeper insight into the mechanism and also to get more accurate values of the transition to chaos ( e.g. , greene s residue criterion , frequency map analysis and renormalization method ) .the large scale diffusion is due to the break - up of the last kam torus . for this model, the last invariant torus is the one with frequency and its critical threshold is .a poincar surface of section is depicted on fig .[ fig1 ] for .as expected , a large chaotic zone takes place between the two primary resonances since all the kam tori are broken for this value of . a first and simple idea to control the system would be to reduce the amplitude of one of the two primary resonances by considering the following control term which is of order : where is an additional control parameter . in this casethe control term becomes .\ ] ] figure [ fig4 ] shows a poincar section of hamiltonian ( [ eqn : hcdumb ] ) for and . as expected the size of the upper resonance decreases but this significant reduction of a primary resonance ( which results in a decrease of the chirikov parameter ) does not succeed in creating an invariant torus in between the two resonances . in order to create an invariant torus , one needs to choose which requires too much energy for the control .in addition , even if satisfies the required condition , there is no explicit formula for the equation of the created invariant torus which is crucial in order to localize the control term and hence to reduce drastically the amount of energy necessary for the control .therefore one has to design another strategy which has two main goals : * find a small control term which creates an invariant torus between the two resonances , * know explicitly the location of the created invariant torus in order to localize the control .in what follows , we show that the method proposed in ref . gives a much smaller control term ( of order ) such that it creates an invariant torus whose equation is explicitly known . in order to compute this apt control term, this hamiltonian with 1.5 degrees of freedom is mapped in the usual way into an autonomous hamiltonian with two degrees of freedom by considering that is an additional angle variable .we denote its conjugate action .the autonomous hamiltonian is .\ ] ] the aim of the localized control is to modify locally hamiltonian ( [ eqn : h2dof ] ) and therefore ( [ eqn : fp ] ) , in order to reconstruct an invariant torus with frequency .we assume that is sufficiently irrational ( diophantine ) in order to fulfill the hypotheses of the kam theorem .first , the momentum is shifted by in order to define a localized control in the region , i.e. 
to get the invariant torus located near for hamiltonian ( [ eqn : h2dof ] ) for sufficiently small .the operator is defined from the integrable part of the hamiltonian which is linear in the actions : and hamiltonian ( [ eqn : h2dof ] ) is where +\frac{p^2}{2}.\ ] ] the action of on a function of , and defined as is given by for the forced pendulum ( [ eqn : fp ] ) , we have .\ ] ] the control term given by eq .( [ eqn : exf ] ) is equal to with the addition of the control term given by eq .( [ eqn : fpa ] ) , the controlled hamiltonian has an invariant torus whose equation is .\ ] ] the controlled hamiltonian is given by -\frac{\varepsilon^2}{2}\left ( \frac{\cos x}{\omega}+\frac{\cos(x - t)}{\omega-1}\right)^2 \omega(p , x , t),\ ] ] where we will consider two cases for : and ] ) , it leads to the following control term : if we fix the degree of chaoticity constant and eliminate the parameter , the amplitude of the control term is proportional to .therefore the control term is of order for the model ( [ eqn : fpd ] ) .the fact that is also proportional to which is in general small , makes the control term for hamiltonian ( [ eqn : fpd ] ) very small compared with the size of the perturbation . + we consider the following norm of a function as the norm of the perturbation is and the norm of the control term given by eq .( [ eqn : fpa ] ) is therefore the ratio between the norm of the control term and the perturbation is given by the ratio is smaller when is close to 1/2 and large when it is close to the primary resonances or 1 .for instance , for and close to , this ratio is approximately 7% . for the numerical computations we have chosen and . for these values of the parameters ,the ratio is about 15% . a poincar section of the controlled hamiltonian ( [ eqn : hc ] ) for shows that a lot of invariant tori are created with the addition of the control term precisely in the lower region of phase space where the localization has been done ( see fig . [ figcg ] ) .+ the next step is to localize given by eq . ( [ eqn : fpa ] ) around the invariant torus created by whose equation is given by eq .( [ eqn : torefp ] ) .more precisely , we have chosen for , for and a third order polynomial for \alpha,\beta[$ ] for which is a -function , i.e. .we have chosen and .the support in momentum of the localized control is of order compared with the support of the global control which is of order 1 . therefore the localized control term requires less than 0.5% of the energy of the perturbation .( 15,6.3)(0,0 ) ( 0,0 ) ( 8,0 ) figure [ fig2 ] shows that the phase space of the controlled hamiltonian is very similar to the one of the uncontrolled hamiltonian .we notice that there is in addition an isolated invariant torus . a complete barrier to diffusionis then created .since the control is robust , we notice that there is also the possibility of reducing the amplitude of the control ( by a few percent , see ref . ) and still get an invariant torus of the desired frequency for a perturbation parameter significantly greater than the critical value in the absence of control . 
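For readers who want to reproduce figures of this kind, the following sketch integrates the controlled forced pendulum with the global control (the case Ω ≡ 1 above) and collects a stroboscopic Poincaré section. The Hamiltonian is taken as H = p²/2 + ε[cos x + cos(x−t)] plus the quadratic control term quoted above; the numerical values of ε and ω are not legible in the extracted text, so the ones below are purely illustrative assumptions.

```python
import numpy as np

eps   = 0.034                         # illustrative value, not taken from the text
omega = (3.0 - np.sqrt(5.0)) / 2.0    # an irrational target frequency (assumed)

def force(x, t, controlled=True):
    """dp/dt = -dH/dx for H = p^2/2 + eps*(cos x + cos(x - t)) + f(x, t),
    with the global (Omega = 1) control term
    f(x, t) = -(eps^2 / 2) * (cos x / omega + cos(x - t) / (omega - 1))**2."""
    dpdt = eps * (np.sin(x) + np.sin(x - t))
    if controlled:
        g  = np.cos(x) / omega + np.cos(x - t) / (omega - 1.0)
        gx = -np.sin(x) / omega - np.sin(x - t) / (omega - 1.0)
        dpdt += eps**2 * g * gx       # contribution -df/dx of the control term
    return dpdt

def rk4_step(x, p, t, dt, controlled):
    def deriv(x_, p_, t_):
        return p_, force(x_, t_, controlled)
    k1x, k1p = deriv(x, p, t)
    k2x, k2p = deriv(x + 0.5 * dt * k1x, p + 0.5 * dt * k1p, t + 0.5 * dt)
    k3x, k3p = deriv(x + 0.5 * dt * k2x, p + 0.5 * dt * k2p, t + 0.5 * dt)
    k4x, k4p = deriv(x + dt * k3x, p + dt * k3p, t + dt)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0)

def poincare_section(x0, p0, n_periods=500, steps=200, controlled=True):
    """Sample (x mod 2*pi, p) stroboscopically at t = 2*pi*n."""
    dt, x, p, t, pts = 2.0 * np.pi / steps, x0, p0, 0.0, []
    for _ in range(n_periods):
        for _ in range(steps):
            x, p = rk4_step(x, p, t, dt, controlled)
            t += dt
        pts.append((x % (2.0 * np.pi), p))
    return np.array(pts)
```

Plotting the points returned with `controlled=False` against `controlled=True` for a few initial conditions with p near ω should reproduce, qualitatively, the regularized neighborhood described around fig. [figcgloc]; the localized version would additionally multiply the control term by the C¹ step function Ω(p) described above.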
also , there is the possibility in selecting appropriate fourier coefficients of the control term and still get the selected invariant torus .( 15,6.3)(0,0 ) ( 0,0 ) ( 8,0 ) figure [ figcgloc ] shows a poincar surface of section of the controlled hamiltonian ( [ eqn : hc ] ) in a small region around the invariant torus given by eq .( [ eqn : torefp ] ) .in addition to the creation of the invariant torus , we notice that the effect of the control term is also to regularize a small neighborhood of the invariant torus .we have represented this local dynamics for two different values of the parameter : for on the left panel and for on the right panel .we notice that this regularized neighborhood can be chosen arbitrarily small ( by choosing arbitrarily small values for ) .moreover , we notice that this barrier persists to arbitrarily large values of the coupling parameter .we acknowledge useful discussions with j r cary , f doveil , ph ghendrih , j laskar , x leoncini , a macor , m pettini and the nonlinear dynamics group at cpt .this work is supported by euratom / cea ( contract eur 344 - 88 - 1 fua f ) .9 ciraolo g , chandre c , lima r , vittot m , pettini m , figarella c and ghendrih ph 2004 controlling chaotic transport in a hamiltonian model of interest to magnetized plasmas_ j. phys . a : math .gen . _ * 37 * 3589 ( _ preprint _nlin.cd/0402048 ) ciraolo g , briolle f , chandre c , floriani e , lima r , vittot m , pettini m , figarella c and ghendrih ph 2004 control of hamiltonian chaos as a possible tool to control anomalous transport in fusion plasmas _ phys .e _ * 69 * 056213 ( _ preprint _nlin.cd/0312037 ) vittot m 2004 perturbation theory and control in classical or quantum mechanics by an inversion formula _ j. phys . a : math .gen . _ * 37 * 6337 ( _ preprint _nlin.cd/0303051 ) ciraolo g , chandre c , lima r , vittot m and pettini m 2004 control of chaos in hamiltonian systems _ celest .( in press , _ preprint _nlin.cd/0311009 ) vittot m , chandre c , ciraolo g and lima r 2005 localized control for non - resonant hamiltonian systems _ nonlinearity _ * 18 * 423 ( _ preprint _nlin.cd/0405056 ) mackay r s 1993 _ renormalisation in area - preserving maps _( singapore : world scientific ) escande d f 1985 stochasticity in classical hamiltonian systems : universal aspects _ phys .* 121 * 165 chandre c and jauslin h r 2002 renormalization - group analysis for the transition to chaos in hamiltonian systems _ phys .* 365 * 1 chirikov b v 1979 a universal instability of many - dimensional oscillator systems _ phys .rep . _ * 52 * 263 chandre c , ciraolo g , doveil f , lima r , macor a and vittot m 2004 channelling chaos by building barriers ( _ preprint _ccsd-00002248 ) zaslavsky g m and filonenko n n 1968 stochastic instability of trapped particles and conditions of applicability of the quasi - linear approximation _soviet physics jetp _ * 25 * 851 greene j m 1979 a method for determining a stochastic transition _ j. math .phys . _ * 20 * 1183 mackay r s 1992 greene s residue criterion _ nonlinearity _ * 5 * 161 laskar j , froeschl c and celletti a 1992 the measure of chaos by numerical analysis of the fundamental frequencies .application to the standard mapping _physica d _ * 56 * 253 laskar j 1999 introduction to frequency map analysis in _hamiltonian systems with three or more degrees of freedom _ ed c sim ( dordrecht : kluwer )
|
we review a method of control for hamiltonian systems which is able to create smooth invariant tori . this method of control is based on an apt modification of the perturbation which is _ small _ and _ localized _ in phase space .
|
one central theme in multi - user information theory ( it ) is the pursuit of single - letter characterizations of the capacity regions for channel coding problems , or the achievable rate regions ( possibly under certain distortion constraints ) for source coding problems .however , some useful properties of these regions can be identified , e.g. , convexity , even when a single - letter characterization is not available . an immediate question to askis whether there exist other properties of the capacity region that do not rely on a single letter characterization .the following question is of interest in this regard : in a particular multi - user it problem , can the achievability of a rate vector imply the achievability of any rate vector in some region is implied in a channel coding problem , but this trivial case is not interesting .note here we do not take the subscript of rate to have any specific meaning associated with the user indices , but merely as an integer label to enumerate the rates in question . ] , regardless of the exact probabilistic channel model ?we show that indeed this is true for the symmetric broadcast problem , and this region can be rather non - trivial .we denote the largest of such regions as in a channel coding problem , and call it the _ latent capacity region _ implied by ; the _ latent achievable rate region _ can be similarly defined , possibly under certain distortion constraints , for a source coding problem though it is not our main focus . for broadcast and multiple access channels ,a precise problem formulation was given in a recent work by grokop and tse , called _multicast region _ , which provides a framework to answer the above question .complete solutions were found in for broadcast and multiple access channels with * two * and * three * users , but the problem remains open for more than three users .we believe this problem formulation reveals a more general concept not limited to only these two channels , and thus rename it as the latent capacity ( or latent achievable rate ) region problem to make explicit this generality .our perspective is different from in that we wish to highlight the importance of the latent capacity region concept in its maximum implication " meaning , and thus we shall define the region in an alternative ( but equivalent ) manner to emphasize this perspective ; our interest in this problem is partially due to an observation made during an earlier work , as we shall discuss shortly .one may wonder how a single achievable rate vector can imply the achievability of a certain region . in some cases ,it is perhaps best explained by the familiar rate transfer argument , that the rate to transmit common messages can be used to transmit individual messages instead , and vice versa .for example , for a two user broadcast channel , if a common message rate , and individual message rates and are achievable , respectively , then it is not difficult to see that the region of given below is achievable by transferring between common and individual rates ( see also ) however , for more than two users , such a naive rate transfer argument is not sufficient , and additional processing is needed , as observed in for the three user case .in fact , this was exactly the perspective taken in , where the goal is to exhaust all such rate transfer operations .the perspective taken in and that taken here are complementary to each other , and one may suit certain problems better than the other . 
because of this relation , it is not surprising that the achievability proof of our result also relies on a generalized version of rate transfer operations .we shall show that when more users are involved , such generalized rate transfer operation requires strategic application of erasure correction codes , which reveals an inherent connection between erasure correction codes and broadcast with common messages . more specifically , in this work , we shall largely stay in the framework of , and provide a complete solution to the -user broadcast channel latent capacity region problem under an additional symmetry constraint , whereas only cases with two and three users were solved in without such a constraint ., width=321 ] the characterization of latent capacity / rate region is important in multi - user it for two reasons .first , it may facilitate finding a single - letter characterization or an approximate characterization .for example , a rate - distortion region characterization for the problem of multi - stage successive refinement with degraded decoder side information was given in the form of bounds on sum - rates as on the other hand , it seems impossible to establish directly the converse for a characterization in the form of bounds on each individual incremental rate , despite the fact that the two characterizations are equivalent .this is not a coincidence , and it is not difficult to show that the latent rate region for this problem has exactly the following form , assuming non - negativity of the rates intuitively , when a rate vector is achievable , its latent capacity / rate region gives the largest achievable region thus implied , i.e. , maximally utilizes it , which may help simplify the representation of the region when taking the union over auxiliary random variables . similarly , when an approximate characterization is needed , a good inner bound may be found by choosing one or several good ( auxiliary coding ) distributions in an information theoretic coding scheme which lead to one or several rate vectors , and then taking the convex hull of their latent rate regions .one simple example is in , where an approximate characterization for the side - information scalable source coding problem was given for general sources under the squared error distortion measure , and the inner bound approximation is exactly the latent capacity region implied by a single rate pair . the second reason making this concept important is even if it does not lead to a single - letter characterization or an approximate characterization , it can still provide insights into the problem .one such example is that the capacity region can always be written as the ( possibly uncountable ) union of latent capacity regions , which places certain constraints on the geometry of the achievable region . for the above example of successive refinement source coding , we show in fig .[ fig : twoshapes ] a possible rate region on the left , and an impossible rate region on the right .the one on the right is impossible because the black dot is in the achievable region , thus the latent capacity region implied by it ( given by the thin line ) must be also in the region , which is not satisfied by the region depicted on the right .this important observation was also discussed in ( see corollary 4.3 ) , and we do not elaborate it further . 
nevertheless , it is rather clear that the latent capacity region indeed provides fundamental and useful property of the rate region , in addition to the well - known convexity .we first define the symmetric broadcast problem , and then introduce the notion of latent capacity region in this context . in a general -user broadcast channel , the conditional probability distributionis given as ,y_2[1,2,\ldots], ... ,y_k[1,2,\ldots]\big{|}x[1,2,\ldots])\end{aligned}\ ] ] where the index in the bracket ] is sometimes written as ; for a dimensional vector , we sometimes write it simply as .let be mutually independent and uniformly distributed messages , where is the message intended for all the receivers in the set ; for notational convenience , we include but will assume it to be a constant .for each , define the set of random variables thus is the collection of messages that the -th receiver should decode .we also define the following set of random variables more specifically for , we have where we have slightly abused the notation by writing , e.g. , as .the sets and are defined similarly for length- random vectors . in this work ,we only consider the case that the rates of messages are the same for all such messages where the set has the same cardinality .more formally , the problem is defined as follows .an symmetric broadcast code consists of an encoder where and decoders , resulting in the decoded messages at the -th receiver , and the decoding error probability of at least one message at one receiver a rate vector is symmetrically achievable if there exists a sequence of codes with . the closure of the set of symmetrically achievable rate vectors is called the symmetric broadcast capacity region , denoted as , or simply as . notethat secrecy constraint is not considered in the definition .next we define the latent capacity region for this problem .[ definition : lcr ] for a given rate vector , the collection of rate vectors is called the latent capacity region for symmetric broadcast implied by , denoted as , if the following two conditions are satisfied ( ) for any broadcast channel , implies ; ( ) there exists a set of channels , such that and . for the second condition, we essentially wish to find one particular channel such that .however this does not quite serve the purpose since this channel might be difficult to realize , however it can always be approximated by a sequence of channels .the above definition is slightly different from the one in , which is it can be easily verified that they are equivalent .the problem we wish to solve is the characterization of .it is clear that the region is uniquely defined for any , and thus the problem is meaningful. definition [ definition : lcr ] makes clear the maximal implication " meaning of the latent capacity region . in multi- user it , usually a coding scheme is given by fixing some auxiliary random variables , and then showing a single rate vector is achievable with certain random codes ; the task of maximizing the implication region of this single point is sometimes mingled with the conditions under which this single point is achievable . the concept of latent capacity region can be used to delineate them .the following lemma is needed in the converse proof .[ lemma : kwaysubmodularity ] let be a set of mutually independent random variables , and be a set of random variables jointly distributed with it . 
let , be subsets of .then where this lemma is a direct consequence of the sub - modularity of the conditional entropy function , when the random variables being conditioned on are independent ( a proof is given in appendix [ append : kwaysubmodularity ] ) , and the -way submodularity property of any submodular function given in .our main result is a complete characterization of the latent capacity region for the symmetric broadcast problem . to present this region ,a few more quantities need to be defined first .let us define the following up - exchange rate for and the down - exchange rate for and define .the up / down exchange rates essentially describe the ratio when converting certain type of messages into other types . for examplewhen , the common message can be used to convey individual information to the three users , and vice versa , but the conversion of such rates is not always ratio one .it will become clear in the achievable proof how such conversion can be done in a most efficient manner .define to be the set of rate vectors satisfying the following conditions with some non - negative quantities , , roughly speaking , the rate is that taken from level- rate but used to transmit level- messages .we have the following theorem . [theorem : maintheorem ] for any non - negative rate vectors , we have _ example : _ for , it is straightforward to see : , i.e. , the same amount of individual message rate for each user can be used to transmit a common message ; and , i.e. , to split a common message into two equal parts , each to transmit a separate individual message for one user ._ example : _ for , it can be verified using fourier - motzkin elimination that is given by the non - negative rates satisfying a typical shape is given in fig .[ fig : example122 ] with .the computation is tedious and thus omitted here .the same result can also be reduced from that given in for the asymmetric case .it is clear that this region is non - trivial , and it is not at all clear a priori why these rate combinations should be considered . in , the regionis characterized by investigating the distinct universal encoding / decoding operations , which leads to the concept of extremal rays . because the latent capacity region in question is a polytope , it can be characterized by its faces , edges , or vertices .the extremal rays are essentially the edges of this polytope . howeverthis proof approach in appears rather difficult to generalize for more than three users since the number of edges quickly becomes very large , and thus we introduce the parametric characterization ( [ eqn : condition1 ] ) and ( [ eqn : condition2 ] ) to avoid this difficulty ..[fig : example122],width=321 ] notice that the exchange rate is pairwise , suggesting in this symmetric setting there is no need to convert rates jointly , e.g. , use and to send the same message . in the rest of the paper , we shall prove theorem [ theorem : maintheorem ] .the naive approach of finding the planes of the rate region and derive its upper and lower bounds is not appropriate for general , particularly for the purpose of converse .instead , we utilize the structure of the region to give a proof .the proof of the forward part of theorem [ theorem : maintheorem ] , i.e. , the fact that satisfies the first condition in definition [ definition : lcr ] is relatively straightforward . 
since is achievable on any channel , there exists a sequence of codes with such rates with , and we will use these codes to construct a set of codes to approach any rate vectors in .this is done by essentially relabeling and adding erasure correction codes on the messages .observe that the messages can also be used to transmit common messages to the subsets with cardinality smaller or larger than .moreover , we can use part of the rate , denoted as , for this purpose , to transmit some messages , thus increasing .such an operation will cause a conversion of rate for -user subset messages into rate for -user subset messages , with an exchange rate .the region is precisely the result of allowing this kind of pairwise exchange on the rate vector .thus we only need to show that the exchange rates given before theorem [ theorem : maintheorem ] is indeed valid , then the existing sequence of channel codes can be used directly .it is clear that we only need to consider the following problem : on a channel with and for , how do we transmit messages , and how much rate can be supported ? we will only need to distinguish two cases or , since it is clear .we first consider the case .for a subset of where , there are a total of subset of with cardinality ; denote the collection of such subsets as . for a particular user , it can decode ( with high probability ) the messages , i.e. , such messages . to transmit the common message ,if we can guarantee that when receiving any messages out of the messages in the set , the message is decodable , then it is clear that indeed any receivers in the set can decode the message .this is an erasure correction problem and a maximum distance separable ( mds ) code can satisfy this requirement , which indeed exists when the codeword length is sufficiently large .furthermore , since each subset of cardinality is a subset of sets of cardinality , only of the rate can be used for each mds code .this yields next consider the case .let be a subset of where .the common message can be shared uniformly between its subsets of cardinality , for transmitting their individual " message . sinceeach subset of cardinality is a subset of distinct sets of cardinality , it can take part in such sharing times .this yields taking into account the existence of good mds code , and the fact that is a closed set , the proof is complete . in , it was observed that in order to efficiently transfer rates , sometimes a modulo two addition is needed , similar to that seen in butterfly network of network coding .the mds codes we use in the above proof can be understood as a generalization of the modulo two addition , which itself is essentially a mds code .it is worth noting that other coding / processing may also be useful for converting rates , however , mds codes are sufficient in solving the symmetric broadcast problem .the converse proof of theorem [ theorem : maintheorem ] requires more work . for simplicitywe shall assume s are all integers ; if this is not the case , a sequence of channels need to be considered , and we shall return to this technical point after the proof .we only need to provide one particular channel that and .the channel is the deterministic one considered in , extended to the -user case ; see fig .[ fig : deterministic ] for the case .more precisely , let the channel input be the collection of .the alphabet of where is .the -th channel output is given by denote this deterministic channel as . 
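Returning to the erasure-coding step of the forward proof: the smallest non-trivial instance is up-converting the three pairwise messages into the common message when K = 3. Receiver i decodes the two pairwise messages whose index set contains i, so it suffices that any 2 of the 3 pairwise "slots" determine the common message, i.e. a (3,2) MDS code, realized here by a single XOR parity (the "modulo two addition" mentioned above). The toy sketch below is only an illustration of that step, not part of the proof.

```python
def encode_common_as_pairs(a: int, b: int):
    """Split the common message W_{123} = (a, b) into three shares, one per
    2-subset of {1, 2, 3}; any two shares recover (a, b) -- a (3,2) MDS code."""
    return {(1, 2): a, (1, 3): b, (2, 3): a ^ b}

def decode_at_receiver(i: int, shares: dict):
    """Receiver i only sees the shares indexed by the 2-subsets containing i."""
    seen = {s: v for s, v in shares.items() if i in s}
    if (1, 2) in seen and (1, 3) in seen:                 # receiver 1
        return seen[(1, 2)], seen[(1, 3)]
    if (1, 2) in seen and (2, 3) in seen:                 # receiver 2
        return seen[(1, 2)], seen[(1, 2)] ^ seen[(2, 3)]
    return seen[(1, 3)] ^ seen[(2, 3)], seen[(1, 3)]      # receiver 3

shares = encode_common_as_pairs(0b1011, 0b0110)
assert all(decode_at_receiver(i, shares) == (0b1011, 0b0110) for i in (1, 2, 3))
```

For larger K and intermediate message levels the same idea applies with a longer MDS code over a sufficiently large field (any C(k-1, j-1) of the C(k, j) level-j slots inside the subset must suffice), exactly as invoked in the proof.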
in order to prove the converse part for theorem [ theorem : maintheorem ] , we need to establish for this channel . for any , define the following quantity and similarly it is clear that both and are convex regions , and thus if we can prove the following theorem , then the converse of theorem [ theorem : maintheorem ] directly follows . [ theorem : boundingplane ] for any where , this is indeed our proof approach , however before giving the rather long proof for the general case , we first prove a few rate combinations for , which illustrates the basic techniques as well as facilitates better understanding . though the proof of the case for can also be found in , our proof given here is different and in fact more structured , which is geared toward the general case .after this example , a few necessary tools and intermediate results are provided , and finally we give the converse proof of theorem [ theorem : maintheorem ] . from .[fig: deterministic],width=321 ] we give an outline of the proof for the first two inequalities in the example given after theorem [ theorem : maintheorem ] .\notag\\ & \qquad+\frac{1}{3}\sum_{i=1}^3h(\mathcal{x}^n_i|x^n_{123},w_{123})-n\delta\notag\\ & \stackrel{(b)}{=}2nr_1 + 4nr_2 + 2nr_3+\frac{2}{3}\sum_{i=1}^3h(\mathcal{x}^n_i|\mathcal{w}_i)\notag\\ & \qquad+\frac{1}{3}\sum_{i=1}^3h(\mathcal{x}^n_i|w_{123})-h(x^n_{123}|w_{123})-n\delta\notag\\ & \stackrel{(c)}{\geq}2nr_1 + 4nr_2 + 2nr_3+\frac{2}{3}h(x^n_{123}|w_{123})\notag\\ & \qquad+\frac{1}{3}\sum_{i=1}^3[h(\mathcal{w}_i|w_{123})+h(\mathcal{x}^n_i|\mathcal{w}_i)]\nonumber\\ & \qquad - h(x^n_{123}|w_{123})-n\delta'\notag\\ & = 3nr_1 + 6nr_2 + 2nr_3-n\delta'\notag\\ & \qquad+\left[\frac{1}{3}\sum_{i=1}^3h(\mathcal{x}^n_i|\mathcal{w}_i)-\frac{1}{3}h(x^n_{123}|w_{123})\right]\notag\\ & \stackrel{(d)}{\geq } 3nr_1 + 6nr_2 + 2nr_3-n\delta',\end{aligned}\ ] ] where ( a ) is by fano s inequality , ( b ) is by adding and subtracting the same term , ( c ) is by applying fano s inequality on the third term , and noticing that lemma [ lemma : kwaysubmodularity ] together with the fact of the channel being discrete implies that , and ( d ) is again by the inequalities in ( [ eqn : example1 ] ) .this completes the proof for the first rate combination . for the second rate combination ,we have \notag\\ & \stackrel{(b)}{\geq}6nr_1 + 6nr_2 + 3nr_3-n\delta',\label{eqn : examplesecond}\end{aligned}\ ] ] where ( a ) is because of ( [ eqn : example1 ] ) , and in ( b ) we applied lemma [ lemma : kwaysubmodularity ] , and then omit the first term since the channel is discrete ; the rest of the inequalities in ( [ eqn : examplesecond ] ) are by fano s inequality .this proof illustrates several main components of the proof for the general case .firstly , the rate combination needs to be written as summations under appropriate proportions , secondly the -way submodularity lemma needs to strategically used , and thirdly there are connections between different layers of messages and thus terms may be canceled among them . for the general -user problem ,the bounding becomes much more complicated , and we will rely on the optimal solution to provide necessary structure and guidance . we begin with a few properties on the exchange rate .[ lemma : increasingijk ] for any integers such that , we have .[ lemma : decreasingijk ] for any integers such that , we have . 
[lemma : circleij ] for any integers such that , we have .[ lemma : twostep ] for any integers , we have , with equality only when the sequence is monotonic .[ lemma : consequtivedown ] for any , we have . for any , . [ lemma : alternativephijk ] the above lemmas ( particularly lemma [ lemma : increasingijk]-[lemma : twostep ] )may be best understood as a currency exchange system where up - converting ( or down - converting ) many times results in the same final exchange rate as a single step conversion , but up - converting mixed with down - converting to the original currency results in a loss .the proofs of these lemmas are given in appendix [ appendix : lemmas ] . to prove the converse part of theorem [ theorem : boundingplane ], we proceed in two steps : first we identify some special optimal solutions for the maximization problem ( [ eqn : maximizingbstar ] ) with certain desired properties , then show that is an upper bound to the quantity . in this subsectionwe discuss the first step .a non - negative setting of satisfying ( [ eqn : condition1 ] ) is called extremal if the following conditions hold ( ) for each , there exists a unique such that and for .( ) if , then .( ) if , then for any such that , .[ lemma : extremal ] the solutions to the maximization problem ( [ eqn : maximizingbstar ] ) include one that is extremal .the lemma is intuitively true since a linear optimization problem has an optimal solution at its corner point .the concept of extremal solution makes the definition of corner point in the problem context more precise . a proof is given in appendix [ appendix : lemmas ] . in an optimal extremal solution ,the effective rate set is defined as .the elements of in an increasing order are denoted as .lemma [ lemma : extremal ] implies there exists a specific structure of rate exchange in the optimal extremal solutions .[ lemma : quantization ] for an optimal extremal solution : * there exist a partition of the sequence , labeled as , each consisting a consecutive sequence of integers , and . * for , we have .this structure is analogous to scalar quantization to some extent , as illustrated in fig .[ fig : quantization ] ..[fig : quantization],width=321 ] for a fixed vector , let be an optimal extremal solution for the maximization problem ( [ eqn : maximizingbstar ] ) , and let be its effective rate set and let be the partition sets ; for convenience , denote the smallest element in the set as and the largest element as . assuming a sequence of length- codes is given with diminishing error probability .let and be defined similarly as and .the proof consists of two layers of inductions .we start from the inner layer , and then put the pieces together in the outer layer . define the following quantity for , for which lower bounds will be derived where and for convenience we have defined and .note that all the coefficients in front of the entropy functions are non - negative : those in the first summation are straightforward to verify by using the definition of , and for the last term we only need to observe that by the optimality of the extremal solution . for conveniencelet us also define which are clearly non - negative quantities .we are interested in these quantities s because they are directly related with the rate combination being considered , as we shall see shortly .we start by writing the following where ( a ) is by fano s inequality , and ( b ) is by applying lemma [ lemma : kwaysubmodularity ] on the second term . 
for notational simplicity , we shall ignore the small quantity in the sequel . slightly further expanding the first term in ( [ eqn : decomposedlk ] ) and substituting it in give us ( [ eqn : lkfirst ] ) .more generally , we claim that for such that , ( [ eqn : lkgenerala ] ) holds , which we prove by induction .clearly it holds for since it is exactly ( [ eqn : lkfirst ] ) in this case .suppose it holds for , we shall prove it also holds for . putting ( [ eqn : decomposedlk ] ) into ( [ eqn : lkgenerala ] ) , we have ( [ eqn : lkinduction1 ] ) given on the next page . in order to simplify ( [ eqn : lkinduction1 ] ) , first notice that , and where ( a ) is by lemma [ lemma : alternativephijk ] and ( b ) is by lemma [ lemma : increasingijk ] .it follows that furthermore , notice that where the last step is due to where the last equality is by lemma [ lemma : consequtivedown ] . combining ( [ eqn : lkinduction1 ] ) , ( [ eqn : lkinduction2 ] ) and ( [ eqn : lkinduction3 ] ) , we have ( [ eqn : lkinduction4 ] ) , proving that the claim ( [ eqn : lkgenerala ] ) is indeed true . letting , we can write ( [ eqn : newlabel1 ] ) on this page . by breaking the second term as given in ( [ eqn : difference2 ] ) , where in the last step we apply lemma [ lemma : alternativephijk ] and lemma [ lemma : increasingijk ] , and noticing that for the third term implied by the discrete nature of the channel , we can further write ( [ eqn : lkgeneral ] ) on the next page .this concludes the inner layer induction , and next we turn to the outer layer .first notice that the optimality of extremal solution and lemma [ lemma : quantization ] together imply that ^*_j+\frac{a_{e_{|\mathcal{e}|}}}{nk}\sum_{j = e_{|\mathcal{e}|}}^k \left(\frac{\phi_{j , e_{|\mathcal{e}|}}}{\binom{k-1}{j-1}}-\frac{\phi_{j+1,e_{|\mathcal{e}|}}}{\binom{k-1}{j}}\right)\sum_{i=1}^kh(\mathcal{x}^n_i|\overline{\mathcal{x}}^n_{j+1}\overline{\mathcal{w}}_{j+1})\notag\\ = & \sum_{i=1}^{|\mathcal{e}|-1}\sum_{j\in \mathcal{s}_i}\left[a_{e_i}\phi_{j , e_i}-a_{e_{|\mathcal{e}|}}\phi_{j , e_{|\mathcal{e}|}}\right]r^*_j+\frac{a_{e_{|\mathcal{e}|}}}{nk}l_{|\mathcal{e}|},\label{eqn : expansion}\end{aligned}\ ] ] we first write ( [ eqn : expansion ] ) where the inequality can be justified as follows .observe that in the second summation , for any , the random variables with appear only in the last terms in the outer summation .each inner summation has a total of such terms , which implies such random variables are counted a total of times .thus by the cardinality of the alphabets , the normalized entropy is upper bounded by . 
through a similar argument, it is not difficult to verify that for , all the terms are accounted for .furthermore , notice that by the optimality of the extremal solution , for any , we have , and thus the first summation is non - negative .^*_j+a_{e_{|\mathcal{e}|}}\sum_{j = l_{|\mathcal{e}|}}^{u_{|\mathcal{e}|}}\left(\phi_{j , e_{|\mathcal{e}|}}-\frac{a_{e_{|\mathcal{e}|+1}}\phi_{j , e_{|\mathcal{e}|+1}}}{a_{e_{|\mathcal{e}|}}}\right)r_{j}\notag\\ & + \frac{a_{e_{|\mathcal{e}|}}}{nk}b_{{|\mathcal{e}|},e_{|\mathcal{e}|}}\left(\sum_{i=1}^kh(\mathcal{w}_i|\overline{\mathcal{w}}_{l_{|\mathcal{e}|}})+\sum_{i=1}^{l_{|\mathcal{e}|}}h(\overline{\mathcal{x}}^n_{i}|\overline{\mathcal{w}}_{i})\right)-\frac{a_{e_{|\mathcal{e}|}}}{nk}u_{|\mathcal{e}|}a_{{|\mathcal{e}|},u_{|\mathcal{e}|}}h(\overline{\mathcal{x}}^n_{u_{|\mathcal{e}|}+1}|\overline{\mathcal{w}}_{u_{|\mathcal{e}|}+1})\notag\\ & -\frac{a_{e_{|\mathcal{e}|}}}{nk}\frac{a_{e_{{|\mathcal{e}|}+1}}\phi_{u_{|\mathcal{e}|},e_{{|\mathcal{e}|}+1}}}{a_{e_{|\mathcal{e}|}}\binom{k-1}{u_{|\mathcal{e}|}-1}}\sum_{i = e_{|\mathcal{e}|}}^{u_{|\mathcal{e}|}-1}h(\overline{\mathcal{x}}^n_{i+1}|\overline{\mathcal{w}}_{i+1})\notag\\ = & \sum_{i=1}^{|\mathcal{e}|-1}\sum_{j\in \mathcal{s}_i}\left[a_{e_i}\phi_{j , e_i}-a_{e_{|\mathcal{e}|}}\phi_{j , e_{|\mathcal{e}|}}\right]r^*_j+a_{e_{|\mathcal{e}|}}\sum_{j = l_{|\mathcal{e}|}}^k\phi_{j , e_{|\mathcal{e}|}}r_j+\frac{a_{e_{|\mathcal{e}|}}}{nk\binom{k-1}{e_{|\mathcal{e}|}-1}}\sum_{i=1}^kh(\mathcal{w}_i|\overline{\mathcal{w}}_{l_{|\mathcal{e}|}})\notag\\ & + \frac{a_{e_{|\mathcal{e}|}}}{nk\binom{k-1}{e_{|\mathcal{e}|}-1}}\sum_{i=1}^{l_{|\mathcal{e}|}}h(\overline{\mathcal{x}}^n_{i}|\overline{\mathcal{w}}_{i } ) , \label{eqn : outerinduction1}\end{aligned}\ ] ] we next apply ( [ eqn : lkgeneral ] ) with in ( [ eqn : expansion ] ) , and write ( [ eqn : outerinduction1 ] ) on the next page , because we have , by definition , and more generally , we claim for , the following inequality holds ^*_j\notag\\ & \,+\sum_{i = k+1}^{|\mathcal{e}|}a_{e_i}\sum_{j = l_i}^{u_i}\phi_{j , e_{i}}r_j+\frac{a_{e_{k+1}}}{nk\binom{k-1}{e_{k+1}-1}}\sum_{i=1}^kh(\mathcal{w}_i|\overline{\mathcal{w}}_{l_{k+1}})\notag\\ & \qquad+ \frac{a_{e_{k+1}}}{nk\binom{k-1}{e_{k+1}-1}}\sum_{i=1}^{l_{k+1}}h(\overline{\mathcal{x}}^n_{i}|\overline{\mathcal{w}}_{i})\label{eqn : outergeneral}\end{aligned}\ ] ] we again take an induction approach to prove this claim .the claim is clearly true for .now suppose ( [ eqn : outergeneral ] ) is true for , and we seek to show it is also true for . for notational simplicity ,let us define we first prove the following inequality to do this , we need to count in the second term the number of appearance of random variables for all , for all fixed , such that .this is similar to ( [ eqn : expansion ] ) , but slightly more involved .for such that , it is easily seen that there are a total of such random variables in , implying the following amount of is accounted for where we have used ( [ eqn : difference1 ] ) .this indeed is the difference between the left hand side of ( [ eqn : accountrate ] ) and the first term on the right hand side , in terms of . 
for the case , the following amount of is accounted for where we have used the derivation in ( [ eqn : difference2 ] ) .this is again precisely the difference between the left hand side of ( [ eqn : accountrate ] ) and the first term on the right hand side , in terms of .thus ( [ eqn : accountrate ] ) is indeed true .now we proceed with the proof of ( [ eqn : outergeneral ] ) through induction by assuming it holds for , and write ( [ eqn : outerind1 ] ) on the top of this page by applying ( [ eqn : lkgeneral ] ) . in order to simplify ( [ eqn : outerind1 ] ), similar terms need to be combined , for which we write ( [ eqn : outerind2 ] ) , where in ( a ) we used lemma [ lemma : alternativephijk ] , and ( b ) is because where the last step is again by lemma [ lemma : alternativephijk ] . next consider the summation ( [ eqn : outerind3 ] ) where we have split the last term and combined it with the other terms , and used ( [ eqn : summationmiddle ] ) ; moreover , some terms for are ignored because they are non - negative by the discrete nature of the channel .observe that for the last term in the right hand side of ( [ eqn : outerind3 ] ) for the second term in the right hand side of ( [ eqn : outerind3 ] ) , notice that , thus by the optimality of the extremal solution , we have and thus ( [ eqn : outerind5 ] ) follows , where ( a ) is by lemma [ lemma : circleij ] , and the final inequality is by ( [ eqn : changepoint ] ) .now combining ( [ eqn : outerind1 ] ) , ( [ eqn : outerind2 ] ) , ( [ eqn : outerind3 ] ) , ( [ eqn : outerind4 ] ) and ( [ eqn : outerind5 ] ) completes the induction proof of ( [ eqn : outergeneral ] ) for . writing ( [ eqn : outergeneral ] ) for , we have where the second inequality is because the first term in the parenthesis degenerates to zero , and the second is non - negative . notice that for any , by the optimality of the given extremal solution , , thus by the non - negativeness of rate s , we arrive at this completes the proof . for the case that s are not integers, we can instead consider a sequence of channels with memory , for which the alphabet sizes are , however , for each channel use , the channel erases of them .this channel is not a memoryless channel anymore , however , our definition is sufficiently general to include such a case , and the converse proof can be used without any change .we consider the latent capacity region of the symmetric broadcast problem , which gives the maximum implication region for a specific achievable rate vector .a complete characterization is provided , for which the converse proof relies on a deterministic channel model , and deriving upper bounds for any bounding plane of the rate region .the forward proof reveals an inherent connection between broadcast with common messages and erasure correction codes .we believe the latent capacity region ( or latent rate region ) is a general concept , and can be applied to other problem . in , the multiple access channel is also considered for two and three - user case .it is conceivable that the technique used in this work can be used to generalize their results for the multiple access channel .another interesting case may be the interference channel , where the well - known han - kobayashi region is indeed the projection of a rate region for the coding problem with common messages . 
a careful analysis of the latent capacity region for the general interference channel may yield further insight into the problem .the author wishes to thank the anonymous reviewers for their comments which help improve the presentation of this paper .[ lemma : submodular ] let be a set of mutually independent random variables , and let be random variables jointly distributed with them . let be a subset of , i.e. , .the conditional entropy function is a submodular function , i.e. , for any , notice that and the mutual independence among s gives the submodularity of unconditioned entropy function of random variables is well - known , which gives and the proof is thus complete . the case is exactly lemma [ lemma : circleij ] , thus we only need to consider the case ; we may also assume and since these cases are trivial .the order of can be arbitrary , but since the proof only relies on lemma [ lemma : increasingijk ] , [ lemma : decreasingijk ] and [ lemma : circleij ] , we may assume without loss of generality .thus we have the only three cases .( 1 ) : by lemma [ lemma : decreasingijk ] and [ lemma : circleij ] , we have . ( 2 ) : by lemma [ lemma : increasingijk ] and [ lemma : circleij ] , we have . ( 3 ) : the equality is implied by lemma [ lemma : increasingijk ] . for condition ( ) , we may assume because otherwise the statement is trivial . observe that for any optimal solution , the second inequality in ( [ eqn : condition2 ] ) must hold with equality , because otherwise the quantity being maximized can strictly increase .first suppose for certain , there exist distinct such that and , then we must have because otherwise , e.g. , if held , then letting and strictly increases the quantity being maximized in ( [ eqn : maximizingbstar ] ) . however , if ( [ eqn : mustbeequal ] ) is true , the new solution given above does not decrease the quantity being maximized , thus a new solution can be found such that there exist no such two distinct .given this is true , it is clear that for each , letting the unique for which be is an optimal choice .thus condition ( ) is satisfied by some optimal solution , and from here on , we shall only consider such solutions . for condition ( ), we may assume since otherwise the statement is trivial .suppose condition ( ) is not true , i.e. , for some , , then the optimality of the solution implies now we claim that the new solution with and ( with other values unchanged ) can not decrease the quantity being maximized . to see this, we only need to observe that which is by lemma [ lemma : twostep ] and ( [ eqn : jtok ] ) .thus the conditions ( ) and ( ) are indeed satisfied by some optimal solution , and from here on we shall only consider such solutions . for condition ( ) , we only discuss the case , because the other case is similar . the fact that implies .we may assume because otherwise the statement is trivial .take an arbitrary , such that , we may have for some , and the value of may be , or ; note that we can assume since condition ( ) afore - proved .it is easy to see that we must have and .the fact that and imply that the three cases are now discussed individually next .case ( 1 ) , from ( [ eqn : twoorders ] ) and lemma [ lemma : increasingijk ] and [ lemma : decreasingijk ] , we have which lead to , contradicting lemma [ lemma : circleij ] , thus this is an impossible case .case ( 2 ) , from ( [ eqn : twoorders ] ) and lemma [ lemma : increasingijk ] , we have which lead to , thus this is another impossible case . 
case ( 3 ) , from ( [ eqn : twoorders ] ) and lemma [ lemma : increasingijk ] , we have that for this case thus the new solution that and does not decrease the quantity being optimized .thus the conditions ( ) , ( ) and ( ) are indeed satisfied simultaneously by some optimal solution .the lemma is proved .
|
we consider the problem of broadcast with common messages , and focus on the case that the common message rate , i.e. , the rate of the message intended for all the receivers in the set , is the same for all the set of the same cardinality . instead of attempting to characterize the capacity region of general broadcast channels , we only consider the structure of the capacity region that any broadcast channel should bear . the concept of latent capacity region is useful in capturing these underlying constraints , and we provide a complete characterization of the latent capacity region for the symmetric broadcast problem . the converse proof of this tight characterization relies on a deterministic broadcast channel model . the achievability proof generalizes the familiar rate transfer argument to include more involved erasure correction coding among messages , thus revealing an inherent connection between broadcast with common message and erasure correction codes . broadcast channel , common message , individual message .
|
consider the polynomial regression model of degree ( ) : , where and are column vectors in .the errors are independently distributed according to the normal distribution with mean 0 and variance . is the region of the explanatory variable where the model ( [ regression ] ) is defined .typically , is a bounded interval in .we assume that sufficient statistics of the model ( [ regression ] ) , that is , the ordinary least squares estimator of , and when is unknown , the unbiased variance estimator of , distributed independently of , are available . in this paper, we deal with the hypothesis of positivity , or of superiority : to state its statistical meaning , it is natural to consider a two - sample problem .let ( ) be the polynomial regression curves of two groups .the hypothesis that the polynomial curve of group is always bounded below by , or superior to , group is expressed as for all .taking the difference , we see that ( [ positivity ] ) represents the hypothesis of superiority .it is also possible to model the difference of two profiles ( mean vectors ) by a polynomial without modeling the profile of each group ( see section [ subsec : example ] ) .this notion of superiority is particularly important in statistical tests for assessing new drugs ( ) .the set of coefficients satisfying ( [ positivity ] ) forms a closed convex cone : this is referred to as the cone of positive polynomials ( ) .we use instead of when we emphasize that is defined in . is closed , since is the intersection of closed sets .the hypothesis ( [ positivity ] ) is rewritten as . including this hypothesis, we consider the following hierarchical hypotheses : we then formalize the test for positivity as the likelihood ratio test ( lrt ) for testing against . in addition, we define an lrt for testing against . in the context of the two - sample problem , this is the test for the equality of two regression curves against the hypothesis of superiority . as we see later , it is mathematically convenient to treat the two lrts at a time .the theory of lrts for convex cone hypotheses has been developed under the name of order restricted inference ( ) .a general theorem states that the null distribution of lrt statistics is a finite mixture of chi - square distributions ( ) .when the cone has piecewise smooth boundaries , proved that the weights ( mixing probabilities ) are expressed in terms of curvature measures on boundaries .these results arise out of a geometric approach referred to as the volume - of - tubes method . using this method, gave the weights associated with the cone of nonnegative definite matrices .however , the weights of few cones are obtained explicitly .the main result of the present paper is the derivation of the weights associated with the cone of positive polynomials , that is , the null distribution of the lrt for positivity . by applying the representation ( parameterization ) theorem for the positive polynomial cone and its dual cone developed in the framework of tchebycheff systems ( ) ,we evaluate the weights of the two highest degrees ( ) and the two lowest degrees ( ) . in terms of these weights ,the null distributions of the lrts are expressed when the degree of the polynomial regression is less than or equal to . when the degree is more than 4 , upper and lower bounds for the null distributions are provided .the outline of the paper is as follows . in section [ sec : lrt ] , we present the expressions of the lrt statistics in both cases where the variance is known and unknown . 
as in most statistical tests , we can also propose simultaneous confidence bands associated with the lrt for positivity . in section [ sec : null ] , we first briefly summarize the volume - of - tubes method . in order to apply this method ,we need the volumes of the cone , its dual cone , and their boundaries . modifying the representation theorems for the positive polynomials in tchebycheff systems, we obtain explicit formulas for the weights . in section[ sec : computation ] , we discuss computation . to construct our lrt statistics , we need the maximum likelihood estimate ( mle ) , say , under the hypothesis of positivity . the coefficient is calculated as the orthogonal projection of onto the positive polynomial cone . we show that this calculation can be conducted by symmetric cone programming , which is extensively studied in the optimization community .we also demonstrate an example of growth curve data analysis . throughout the paper , we treat only the polynomial regression . however , a polynomial is just one example of tchebycheff systems .the approach developed here is applicable to other systems .another typical example is trigonometric regression and we can consider the testing problem for the positivity once more . in this case , by changing a variable , all results in the polynomial regression are translated into the trigonometric regression .throughout the paper , we need to deal with a metric linear space and its dual space simultaneously .we write the inner product and the norm as where is a positive definite matrix .the orthogonal projection of onto the set with respect to the distance is denoted by this is well defined when is a closed convex set .the subscript in , , and will be omitted when it does not cause any confusion . in the regression model ( [ regression ] ) with known , the least squares statistic is sufficient , and we can restrict attention to inference based on . the distribution of is the -dimensional normal distribution with mean vector and covariance matrix , where with , the inverse of the design matrix . when is unknown , the sufficient statistic is the pair , where is the unbiased estimator of calculated from the residuals , and whose distribution is proportional to that of a chi - square random variable with degrees of freedom .given the data distributed as the normal distribution with known , the mle of under the hypothesis of positivity is the orthogonal projection of onto the cone under the metric .when is unknown , the mle is the orthogonal projection onto under the metric , .this mle is the same as that with known , because the orthogonal projection onto a cone is invariant with respect to the scale change of metric ( ) .the mles of under and are given as and , respectively .acknowledging these facts , we obtain the lrt statistics as follows .[ prop : lrt ] when the variance is known , the lrt statistics for against , and for against are given by respectively , where .when the variance is unknown and an independent and unbiased estimator of with degrees of freedom is available , the lrt statistics for against , and for against are given by respectively , where , .the null hypotheses are rejected when the lrt statistics are sufficiently large .the hypothesis of positivity is a composite hypothesis . 
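As a computational sketch of this projection (cvxpy is assumed available), the positivity constraint is enforced below only on a finite grid of points in T, i.e. the exact cone constraint is relaxed; the exact treatment via symmetric cone programming is the one discussed in section [ sec : computation ].

```python
import numpy as np
import cvxpy as cp

def project_onto_positive_cone(c_hat, Sigma_inv, a, b, n_grid=400):
    """Orthogonal projection of c_hat onto (a grid relaxation of) the positive
    polynomial cone on [a, b] under the metric <x, y> = x' Sigma^{-1} y."""
    q = c_hat.size - 1
    grid = np.linspace(a, b, n_grid)
    V = np.vander(grid, q + 1, increasing=True)   # rows are (1, t, ..., t^q)
    R = np.linalg.cholesky(Sigma_inv)             # Sigma^{-1} = R R', so x'Sigma^{-1}x = |R'x|^2

    c = cp.Variable(q + 1)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(R.T @ (c - c_hat))),
                         [V @ c >= 0])
    problem.solve()
    return c.value

# Example call with the identity metric and an arbitrary degree-3 estimate:
# project_onto_positive_cone(np.array([0.1, -0.4, 0.2, 0.05]), np.eye(4), -1.0, 1.0)
```

Refining the grid tightens the relaxation; the semidefinite formulation discussed later removes the discretization altogether.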
to obtain the critical points for testing such a hypothesis, we need to know the least favorable configuration .the proof of the following proposition is essentially given in section 2.3 of .[ prop : lfc ] in both cases where is known or unknown , the least favorable configurations of the lrts for testing ( the hypothesis of positivity ) against ( the no - restriction hypothesis ) are given by the case where holds , that is , . in the case where is known , the acceptance region is of the form we first prove the monotonicity of the set : this is because , for , the last inclusion follows from , because is a convex cone .therefore , for , and follows . in the case where is unknown ,the lrt statistic in ( [ beta ] ) is rewritten as which is monotone in in ( [ lambda ] ) .the monotonicity of the acceptance region can be proved similarly . in regression analysis ,simultaneous confidence bounds for the estimated regression curve are often provided to assess the reliability of the estimated regression curve .the construction of confidence bands is still an active research topic because of its practical importance ( ) . in this subsection, we propose simultaneous confidence bands that are naturally linked to our proposed lrts . in general ,when we want to construct simultaneous confidence bands for the regression curve , we need to bound above by a pivotal statistic whose distribution is independent of the true parameters .the most standard tool to obtain the upper bound is the cauchy - schwarz inequality .however , in this inequality , strict equality is attained when and only when ( the closure of ) the set of undirected rays spanned by the explanatory variable vectors forms the whole space . in our polynomial regression model ( [ regression ] ) , this becomes the whole space only when and ( ) .the cases where or is a proper subset of ( ) are not easy problems and have been solved in limited cases ( e.g. , , ) . in our proposal , we relax the set of estimands from the regression curve itself .let be a nonnegative measure on , and write =\int_t \psi(t ) \, \mu(dt) ] , where \mid \mu(dt)\ge 0 \bigr\}}\ ] ] is the closure of the conic hull of the trajectory .this cone is the dual cone of the positive polynomial cone in ( [ k ] ) , and is referred to as the moment cone ( ) .we construct confidence bands on the basis of the inequality - \mu[f_c ] = \int_t ( \widehat c - c)^\top \psi(t ) \,\mu(dt ) & = \langle \sigma^{-1}(\widehat c - c),\mu[\psi]\rangle_{\sigma } \nonumber \\ & \le \bigl\vert \mu[\psi ] \bigr\vert_{\sigma } \cdot \bigl\vert \pi_{\sigma}(\sigma^{-1}(\widehat c - c)|k^ * ) \bigr\vert_{\sigma } , \label{inequality}\end{aligned}\ ] ] where =\int f(t;c)\,\mu(dt) \mu t ] ( bounded ) , ( ii ) , and ( iii ) .the two propositions below give representations for the moment cone and its boundary .let let , and let formally .[ prop : ks - map ] the moment cone on ( i ) ]should read as .we use this convention in propositions [ prop : ks - map][prop : bk - map ] .the representations of the right - hand sides of ( [ ks - map ] ) for ] or , + 1 } \times \delta_{\left[\frac{n}{2}\right ] } \bigr ) \nonumber \\ & \sqcup \phi^{(u)}_{n , n-1 } \bigl ( { \mathbb{r}}_+^{\left[\frac{n}{2}\right]+1 } \times \delta_{\left[\frac{n-1}{2}\right ] } \bigr ) , \label{bks - map}\end{aligned}\ ] ] ( iii ) when and is even , almost everywhere with respect to the -dimensional hausdorff measure , where means a disjoint union . the maps and in ( [ bks - map ] ) and ( [ bks - map ] ) are diffeomorphic . 
the general forms of the one - to - one representations for ] , ( ii ) when , ( iii ) when and , the map is a diffeomorphism . here, we use the convention . the representations of the positive polynomials on ] , ( ii ) when , ( iii ) when and is even , almost everywhere with respect to the -dimensional hausdorff measure , where means a disjoint union .the maps , , , and are diffeomorphisms .\(i ) the case of t=[a , b] [ a,\infty) or } ) \nonumber \\ & + \int_{(0,\frac{\pi}{2})\times\delta_{n-2 } } \det \left\ { \biggl(\frac{\partial \bar\varphi^{(1)}_{n}(\alpha,\gamma , b)}{\partial\zeta}\biggr)^\top \sigma^{-1 } \biggl(\frac{\partial \bar\varphi^{(1)}_{n}(\alpha,\gamma , b)}{\partial\zeta}\biggr ) \right\}^{\frac{1}{2 } } d\zeta \nonumber \\ & \hspace*{17.5em } ( \mbox{if\,\ } ) \nonumber \\ & + \int_{(0,\frac{\pi}{2})\times\delta_{n-2 } } \det \left\ { \biggl(\frac{\partial \bar\varphi_{n-1}(\alpha,\gamma)}{\partial\zeta}\biggr)^\top \sigma^{-1 } \biggl(\frac{\partial \bar\varphi_{n-1}(\alpha,\gamma)}{\partial\zeta}\biggr ) \right\}^{\frac{1}{2 } } d\zeta \nonumber \\ & \hspace*{17.5em } ( \mbox{if\,\ } ) .\label{wn}\end{aligned}\ ] ] in the right - hand side of ( [ wn ] ) , the second term is not needed for , the third term is not needed for and , the fourth term is not needed for ] with finite .however , the technique explained here is easily extended to the other cases .the positive polynomial of degree on the set is characterized in proposition [ prop : k - map ] .this is a unique representation . admitting the redundancy of the parameters ,this polynomial is rewritten as where and are symmetric positive semi - definite matrices .this polynomial ( [ markov - lukacs ] ) is obviously nonnegative on ] . under the metric with , the orthogonal projection of onto given by . in figure[ fig : proj ] , is depicted as a dashed line ( - - - ) , and the projection is depicted as a solid line ( ) . in our study , let be the age minus 11 for stabilizing numerical calculations .the measurements of the individual at the age in the girl and boy groups are denoted by and , respectively . for modeling the difference of the profiles ( mean vectors ) of two groups , we assume the multivariate normal model : with where ( ) are independent gaussian error vectors with mean zero . for the covariance matrices , we assume the intraclass correlation structure where is the matrix with all entries 1 , and and are unknown parameters .the model ( [ intraclass ] ) is widely used covariance structure in the analysis of growth curves and repeated measurements ( , ) . under the model ( [ model ] ) with ( [ intraclass ] ) , the mles are calculated as and , , , . 
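Since the null distributions involved are chi-bar-square mixtures, the tail probability of an observed statistic reduces, once the mixing weights are available, to a weighted sum of chi-square tail probabilities. The sketch below uses placeholder weights rather than the ones derived above (scipy assumed).

```python
from scipy.stats import chi2

def chi_bar_square_pvalue(t_obs, weights):
    """P(T >= t_obs) = sum_k weights[k] * P(chi^2_k >= t_obs),
    with the chi-square of 0 degrees of freedom read as a point mass at 0."""
    p = 0.0
    for k, w in enumerate(weights):
        if k == 0:
            p += w * (1.0 if t_obs <= 0 else 0.0)
        else:
            p += w * chi2.sf(t_obs, df=k)
    return p

# Placeholder weights (they must sum to one) and an arbitrary observed statistic.
print(chi_bar_square_pvalue(4.2, [0.25, 0.5, 0.25]))
```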
if and are known , is distributed as the normal distribution with covariance matrix , where and is the design matrix .the mle of is obtained as in the following , we treat as the true value , and suppose the statistic to be a gaussian vector with mean and covariance matrix as an approximating analysis .let us focus on the whole period from ages 8 to 14 years , that is , ] .the orthogonal projection of onto under the metric is the lrt statistics for testing against , and for testing against are obtained as and , respectively .the weights are computed as using these weights , the -values for and are calculated as and , respectively .thus , the hypothesis that is a positive polynomial is accepted , and the hypothesis that is rejected at the 1% significance level .we conclude that the growth rate of the boy group is always greater than that of the girl group between the age 8 and 14 .federer , h. ( 1996 ) ._ geometric measure theory_. springer , berlin .johnstone , i. and siegmund , d. ( 1989 ) . on hotelling s formula for the volume of tubes and naiman s inequality ._ , * 18 * ( 1 ) , 652684 . kato , n. , yamada , t. , and fujikoshi , y. ( 2010 ) . high - dimensional asymptotic expansion of lr statistic for testing intraclass correlation structure and its error bound . _ j. multivariate anal ._ , * 101 * ( 1 ) , 101112 .knowles , m. and siegmund , d. ( 1989 ) . on hotelling s approach to testing for a nonlinear parameter in regression ._ internat .statist ._ , * 57 * ( 3 ) , 205220 .kuriki , s. and takemura , a. ( 2009 ) .volume of tubes and the distribution of the maximum of a gaussian random field ._ selected papers on probability and statistics _ , ams translations series 2 , * 227 * , no . 2 , 2548 . liu , w. , bretz , f. , hayter , a.j . , and wynn , h.p .( 2009 ) . assessing non - superiority , non - inferiority or equivalence when comparing two regression models over a restricted covariate region . _biometrics _ , * 65 * ( 4 ) , 12791287 .nesterov , y. ( 2000 ) . squared functional systems and optimization problems . in _ high performance optimization _ ( eds .h.frenk , k.roos , t.terlaky and s.zhang ) , 405440 , kluwer , dordrecht .potthoff , r. and roy , s. ( 1964 ) . a generalized multivariate analysis of variance model useful especially for growth curve problems . _ biometrika _ , * 51 * ( 3 - 4 ) , 313326 .takemura , a. and kuriki , s. ( 2002 ) . on the equivalence of the tube and euler characteristic methods for the distribution of the maximum of gaussian fields over piecewise smooth domains .appl . probab ._ , * 12 * ( 2 ) , 768796 .
|
A polynomial that is nonnegative over a given interval is called a positive polynomial. The set of such positive polynomials forms a closed convex cone. In this paper, we consider the likelihood ratio test for the hypothesis of positivity, namely that the estimand polynomial regression curve is a positive polynomial. By considering hierarchical hypotheses including the hypothesis of positivity, we define nested likelihood ratio tests and derive their null distributions as mixtures of chi-square distributions by means of the volume-of-tubes method. The mixing probabilities are obtained by utilizing the parameterizations for the cone and its dual provided in the framework of Tchebycheff systems for polynomials of degree at most 4. For polynomials of degree greater than 4, upper and lower bounds for the null distributions are provided. Moreover, we propose associated simultaneous confidence bounds for polynomial regression curves. Regarding computation, we demonstrate that symmetric cone programming is useful for obtaining the test statistics. As an illustrative example, we conduct a data analysis on growth curves of two groups and examine the hypothesis that the growth rate (the derivative of the growth curve) of one group is always higher than that of the other. Key words: chi-bar-square distribution, cone of positive polynomials, moment cone, symmetric cone programming, Tchebycheff system, volume-of-tubes method.
|
the use of the multiscale analysis transform in the field of image processing efficiently analyses the visual information of an image allowing the localization of the information in the time and frequency domains .the classical schemes of the wavelet decomposition are not specially defined for a color image and they are usually marginalized : it consists in applying the transform to each component separately . but that could lead to undesirable effects on the color .we can still find in the literature several frameworks that propose multiscale vectorial techniques .for example in our team we have developed an approach that uses the analytic wavelet which is principally based on the principle of the monogenic signal .this allows the non - marginalized processing of color and provides an analysis of the separate orientations of the information but this representation is based on the signal principle .recently many fundamental based - graph tools in the field of signal processing have emerged and have allowed to extend the classical multiscale transform to an irregular domain , . in the diffusion wavelets are based on the definition of the diffusion operator and the localization at each scale depends on the dyadic powers of this operator .the graph wavelet transform proposed by crovella and kolaczyk in is an example of the design in the vertex domain , when the construction of the wavelet function is designed using the geodesic distance and is localized according to the source and the scale . in narang and ortega proposed a local two channel critically sampled filter bank on graphs that requires the combination of different processing blocks such as upsampling on graphs ( select of vertex and graph reduction ) and filtering . for this, the authors introduced the technique of graph partition based on the max cut on the graph and the as a graph reduction method . in order to ensure the recovery, they proposed in a perfect reconstruction of the two channel wavelet filter banks .so they take advantages of the properties of the bipartite graphs to define the low pass and high pass filters .they also need to decompose the original graph into a series of bipartite subgraphs . in shuman_ et al _ extend the laplacian pyramid transform on the graph domain .for this , they suggest a general way to select vertices based on the polarity of the largest eigenvector , then they use the kron reduction technique followed by the step of sparsification graph .that provides a perfect reconstruction transform but the dimension of the output is different than the input signal .none of these methods takes into account the perceptual information in the based - graph wavelet decomposition . in spectral graph wavelet transform ( sgwt ) is based on the representation of the wavelet coefficients in the spectral domain .it turns out that it is more flexible and technically easier to implement .furthermore , graph construction is a significant milestone for the application of this transform . in our work ,we focus on the properties of the sgwt to define an approach for color analysis . in practice , it is increasingly necessary to adapt strategies with human vision and it is thus becoming desirable to take into consideration the psychovisual system in the design of the multi - scale analysis tools .the idea is to use the geodesic distance to compute the weight of pairwise pixels .this principle helps highlight the perceptual properties through the implementation of the color difference . 
to the best of our knowledge, it is the first suggestion of a wavelet transform based on perceptual graphs . in this paper , we review a few methods to construct graphs from image data and we give an overview of the spectral graph theory and the main characteristics of the sgwt .we then suggest a color representation by inserting the color dimension into the measured distance , and more generally we consider the possibility of a psychovisual representation by the insertion of the color difference in the graph computation . to interpret the influence of the structure of the graph on the wavelet scale coefficients , we propose a general way to understand the behavior of the decomposition using the notion of quadratic form .we use the highlight of the color perceptual features associated with this new representation through the application of image denoising and we develop a method of inpainting on a graph from the wavelet coefficients . in section [ sec:2 ] we recall some elements of the weighted graph and spectral graph theory . in section [ sec:33 ]we develop our method by inserting the color dimension into the distance measured and finally in section [ sec:5 ] we develop a color image denoising method and an inpainting strategy .this section provides the notations and the definition of a graph used throughout the paper , we also present how one can represent discrete data defined on regular or irregular domains with a weighted graph . to begin with , we will review some concepts that are introduced or presented for example in .any discrete domain can be represented by a weighted graph .we note a loopless , undirected and weighted graph where is a finite set comprised of elements called vertices and is a subset of edges .an edge connects two neighbor vertices and .the neighborhood of a vertex is noted : the weight of an edge can be defined with a function such that if and otherwise .let be a distance computed between the vertices and .in practice , it is difficult to establish a general rule to define this distance .the methods used are essentially based on local and non - local comparisons of features ( see more details in ) .obviously , the weight function is represented by a decreasing function as follows : where is the parameter that adjusts the influence on the pixel s neighborhood in the distance computation : the weight of edge increases when increases .see more examples of weight functions in .we note the weighted adjacency matrix of size with the total number of vertices : the degree of a vertex is defined as the sum of the weights of all the adjacent vertices to in the graph : the degree matrix is the diagonal matrix of the degrees of .one of the issues encountered when dealing with the graph is the graph construction .there are two steps to build the graph , the first one consists in the definition of the graph topology which focuses primarily on the link between all regions .then , in the second step the weights are computed to measure the similarities between these regions , as discussed below . depending on the structure of the data ,there are different ways to construct the graph .the constructed graph has to be adapted to represent the discrete domain of the data but there is no general rule of data representation and that basically depends on the application . 
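A small sketch of these definitions follows (numpy assumed); the Gaussian kernel used as the weight function is one common decreasing choice consistent with the description above, not necessarily the exact expression used in the paper.

```python
import numpy as np

def build_weighted_graph(features, edges, sigma):
    """Weighted adjacency matrix W and degree matrix D for a graph given
    per-vertex feature vectors and an edge list; the weight of an edge is a
    Gaussian kernel of the distance between the two feature vectors."""
    n = features.shape[0]
    W = np.zeros((n, n))
    for u, v in edges:
        d = np.linalg.norm(features[u] - features[v])
        W[u, v] = W[v, u] = np.exp(-(d ** 2) / (sigma ** 2))
    D = np.diag(W.sum(axis=1))   # degree = sum of weights of adjacent edges
    return W, D

# Toy example: 4 vertices on a path with scalar features.
features = np.array([[0.0], [0.2], [1.0], [1.1]])
W, D = build_weighted_graph(features, [(0, 1), (1, 2), (2, 3)], sigma=0.5)
```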
indeed , in the case of an irregular domain, one can use the -neighborhood graph noted .its principle is based on the notion of neighborhoods of a vertex that determine the set of vertices whenever the distance is lower than or equals the threshold parameter .the distance helps measure the similarities between the features vectors associated with the pixels .the simplest distance is the euclidean distance noted _ed_. in the case of a regular domain , the spatial organization of data is well - known , thus appropriate measurements are used in order to design a regular mesh such as the infinity norm and the unitary norm .an alternative approach used in image processing consists in representing data structure in the shape of regions which are represented thereafter by a region adjacency graph ( see examples of partitions in ) . in our approachwe work directly on the pixels and to analyze the constructed graph we propose to work in the spectral graph domain . in the following we give some definition and notation of the spectral graph theory . in this section ,we recall some basic definitions of the spectral graph theory , and we also provide some details about the spectral graph wavelet transform ( sgwt ) . here , we consider an undirected weighted graph with vertices .we can associate a vector in on any scalar valued function defined on the vertices of the graph . in the case of a color image ,this vector is the three components of the image . this amounts to adapting the fourier transform for the graph domain .let be a weighted graph , the non - normalized laplacian graph is : where and denote respectively the adjacency matrix and the degree matrix computed according to equations [ eq : a ] and [ eq : d ] .thanks to the spectral analysis , it is established that the fourier basis is associated with the eigenvectors of the laplacian operator . by analogy , the eigenvectors of noted by with constitute an orthonormal basis for the graph spectral domain : with the eigenvalue and eigenvector pairs of the graph laplacian .let be a based - graph signal .consequently , the based - graph fourier transform is defined by : with the number of vertices .we can also use a vector expression : where is the matrix whose columns are equal to the eigenvectors of the laplacian graph and the transpose of a matrix .the construction of the elements of the basis of spectral graph domain from the eigenvalues and the eigenvectors therefore leads to the definition of the wavelet transform on graphs .the sgwt is an approach of a multiscale graph - based analysis which is defined in the spectral domain .the scale notion for the graph domain is defined by analogy with the representation of the classical wavelet transform in the spectral domain .the graph wavelet operator at a given scale is defined by with stands for the wavelet kernel which is considered as a band pass function : , .the wavelet coefficient at the scale has the following expression : whereas the scaling function coefficient is given by : where represents a low pass function satisfying , and as .the wavelet and the scaling kernels have to satisfy some conditions . in order to simplify the calculations , we chose to implement the cubic spline kernel . 
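The quantities just introduced can be illustrated numerically as follows; the band-pass kernel below is only an illustrative stand-in for the cubic spline kernel, and the exact eigendecomposition replaces the Chebyshev approximation discussed next, which is what makes the transform practical on large graphs.

```python
import numpy as np

# A small weighted path graph, its non-normalized Laplacian L = D - W,
# and the graph Fourier transform of a toy signal.
W = np.array([[0.0, 0.9, 0.0, 0.0],
              [0.9, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 0.9],
              [0.0, 0.0, 0.9, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W

lam, chi = np.linalg.eigh(L)           # eigenvalues / eigenvectors of the Laplacian
f = np.array([1.0, 0.9, -0.5, -0.6])   # a signal on the vertices
f_hat = chi.T @ f                      # graph Fourier transform
f_rec = chi @ f_hat                    # inverse transform recovers f

# Naive spectral-graph wavelet coefficients at one scale s: multiply the spectrum
# by a band-pass kernel g(s * lambda) and transform back.
def g(x):
    return x * np.exp(1.0 - x)         # illustrative band-pass: g(0) = 0, decays at infinity

s = 2.0
wavelet_coeffs = chi @ (g(s * lam) * f_hat)

# Laplacian quadratic form f' L f, used below to quantify how the energy of the
# decomposition spreads across scales.
Q = float(f @ L @ f)
```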
to facilitate the use of this transform , the chebyshev polynomial approximation is suggested by the authors .one can notice that this transform is invertible .this transform provides a general framework for adapting the data representation .indeed one may observe that one of the central elements of the construction of this representation will be of course the graph and more specifically the structure that is associated with the input data .this transform is also well suited for introducing psychovisual information .we will now introduce our strategy for graph construction .the objective is to build a graph with a model organized in such a way as to measure the color similarities .in this section we develop an adaptive approach where the study is not restricted to spatial coordinates .the idea is to be able to compute the distances between a pairwise of vertices which can be used to further characterize the color structure . in low - dimensional embedding, the geodesic distance has proven its efficiency to estimate the intrinsic geometry of the data manifold used for example in nonlinear data analysis ( isomap technique ) .traditionally geodesics are defined through a parallel transport of a tangent vector in a linear connection .this distance is adequate for our definition in order to model the topological structure of the data rather well . in the next section ,we note the embedding of the graph of the image on 2-dimensional euclidean space with the vertex , the embedding on 3-color red - blue - green space and the embedding on cielab space .the computation of the geodesic distance is estimated by applying an isomap strategy which consists in finding first the -nearest neighbors ( -nn ) for each pixel having the coordinates . in order to find the shortest paths between two vertices and thus two color pixels we used the dijkstra algorithm .the goal is to find the shortest path from the source vertex to the other vertices in the graph . to properly identify the geodesic distances the choice of the number of the -nearest neighbors must not be too high . indeed some abnormal neighborhood edges deviate from the underlying manifold ( _ short circuit _moreover this value should be as low as possible in order to minimize the calculation time .ideally , we should compute all possible cases of the geodesic distance but for larger data , this requires significant computing and memory capacity . to achieve this ,we consider only a small percentage of the totality of the structure .the weighted graph defined by equation ( [ eq : we ] ) is thus obtained by applying the function , with and the geodesic distance between vertices and is defined as follows : with , the shortest path connecting two vertices according the path distance connecting two vertices . the choice of distance is discussed in [ ssec : ed ] and [ ssec : deltae ] . 
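The isomap-style construction just described (k nearest neighbours followed by shortest paths) can be sketched with scipy as follows; the point cloud is synthetic and, unlike in the paper, the feature vectors here are purely spatial.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distances(points, k):
    """Connect each point to its k nearest neighbours (Euclidean edge lengths),
    then run Dijkstra shortest paths on the resulting sparse graph.  Points in
    different connected components end up at infinite distance."""
    n = points.shape[0]
    dist, idx = cKDTree(points).query(points, k=k + 1)   # first neighbour is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    vals = dist[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(graph, directed=False)

pts = np.random.default_rng(1).random((50, 2))   # synthetic 2-D point cloud
G = geodesic_distances(pts, k=5)
```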
the weight function attempts to measure the difference between vertices and provides a clear link with the graph connection and the smoothness of the signal on the graph .if we choose an important value of this leads to add more edges to the graph and also increases the total variation because two different values of the signal on the graph will probably be connected as discussed below .we now wish to discuss how our proposed framework can yield a graph wavelet for a color image by inserting a color dimension into the measured distance .two approaches are put forward : in the first we are interested in simply representing the data in the classical red green blue ( rgb ) color space , in the second we have used the cielab space which can bring a visual aspect to the analysis . with this approach , we are interested in highlighting the visual characteristics in order to have a relevant representation of the color data through a weighted graph .such a strategy allows us to enhance the link between two similar colors and to clearly indicate the ruptures of the color regions .we denote the euclidian color distance between a pair of color values in rgb space and of both vertices and . the -nn distance between two vertices and also defined by : equation ( [ eq : we ] ) is then used to compute the weights of the edges .this way of building the graph allows us to consider all the color components of the image .transforms are applied on each color component which is represented as a signal on the graph . in figures [ fig : de5 ] and [ fig : de30 ] we show an example of the application on a color image using two different values of , the scale number is set to .the approximation result in figure [ fig : de30 ] with a high has more homogeneous areas than the approximation result in figure [ fig : de5 ] and also more details in the scales .it is to be noted that the choice of remains empirical .we indicate on these figures a value , the mean of quadratic form of wavelet coefficient on each scale for each color component and a value , the mean of quadratic form of wavelet coefficient for each color component .it should be noted that the quadratic form is introduced in the following paragraph .now we propose a model of the behavior of the information in the sgwt basis .parameter has some influence on the analysis revolving around two main observations : * the nature of details does not decrease according to the wavelet scale ; * when increases more information is contained in higher frequency bands . to clarify these issues , we focused on the distribution of the energy through the wavelet scale coefficients . possibly the simplest way to measurethis energy is to rely on the notion of the global smoothness by measuring the laplacian quadratic form .( a ) using the _ ed _ , , , : ( a ) approximation ( ) , ( b ) scale 1 ( ) , ( c ) scale 2 ( ) and ( d ) scale 3 ( ).,width=264 ] ( a ) using the _ ed _ , , , : ( a ) approximation ( ) , ( b ) scale 1 ( ) , ( c ) scale 2 ( ) and ( d ) scale 3 ( ).,width=264 ] the quadratic form is defined as follows : where is the signal on the graph .the quadratic form is small when is similar at the two extremes of the edge with a high weight .usually , the energy of information of the classical wavelet transform increases with the scale .consequently the quadratic form increases also with the scale .[ prop:1 ] the quadratic form of a given signal on a graph can be written as follows : let us consider the expression of the quadratic form shown in ( [ eq : q ] ) . 
is defined as follows : where is the matrix of eigenvalues of , substituting ( [ eq : ll ] ) in ( [ eq : q ] ) : according to equation ( [ eq : fou ] ) we have : writing the quadratic form in the spectral domain is a good representation of the variation of the signal on the graph , hence the quadratic form has a significant value when the based - graph fourier transform coefficients on the high frequencies are high .let be the wavelet kernel at the scale with the scaling function kernel .the based - graph fourier transform of the wavelet coefficients at each scale is written : with the original signal on the graph . according to proposition [ prop:1 ] , the quadratic form of the wavelet coefficients at each scale is : the localization of the information follows equation ( [ eq : pu ] ) defined in the spectral domain . a color image is decomposed in 3 scales using the numerical scheme in figure [ fig : flowchart ] . in figure[ fig : de5 ] one can observe clearly more details at scale 1 then scale 2 and in figure [ fig : de30 ] one can see clearly more details at scale 2 then scale 1 .this is verified by comparing their quadratic form values ( equals the average of the quadratic form of all color components ) as presented in figures [ fig : de5 ] and [ fig : de30 ] . as explained before, involves modifications in adjacencies in the graph structure . in particular , that has a considerable impact on the variation of the information content .this phenomenon is measured in our framework by the quadratic form which increases with the number of connections in the weighted graph .this involves more localization of the energy of the signal in the higher frequencies ( see more details in ) as one can see in figure [ fig : de30 ] .one can observe that the sgwt extracts various high level information or details to solve a specific problem .for example , in the case of image denoising application , the choice of and the color distance is adapted to obtain a significant separation of noise as will be see in section [ sec:5 ] .we now explore the specifications of the graph construction through the geodesic distance to introduce a psychovisual conception in the graph wavelet .the use of the visual features is becoming increasingly widespread in the field of image processing .several metrics developed to assess the perceptual qualities of color images falls within this context .the principle is thus based on the modeling of the human visual system .the cielab color space is considered as the most complete color space which describes all colors visible to the human eye .this representation is created to serve as a device independent model used as a reference .the three coordinates of cielab represent the lightness of the color ( ) , the position between red and green ( ) and the position between yellow and blue ( ) .intuitively , the employment of these characteristics in the analysis of the data provides a good localization of the specific means of the color which can be adapted to the human vision .the purpose of this work is to introduce this aspect on the multi - scale analysis , therefore we compare the features of a given pair of vertices by using the color difference .the difference allows us to measure the difference between two colors modeled on the psychovisual system .moreover , a high difference corresponds to an important colorimetric difference ( see ) .this notion can be included into the transform through the computation of the geodesic distances of the data represented in the cielab space 
.this amounts to combining spatial and color comparisons . as regard to the color difference, we opt for the difference .we denote the color difference between a pair of color values in cielab space and of both pixels and as follows : the -nn distance between two pixels and is also defined by : we illustrate the decomposition based on the difference and by using the same parameters as the previous section . for all of computations , differences are computed for an observer visualizing images 47 cm away from a lcd monitor displaying 72 dots per inch ( a standard configuration ) .so a greater than 4 corresponds to a high perceptual difference and it is thus not necessary to weight the perceptual difference and the spatial distance . the influence of parameter is illustrated in both figures [ fig : deltae1 ] ( ) and [ fig : deltae2 ] ( ) .indeed when increases , details shift to the high frequencies .if we compare the difference with the classical euclidean distance , at a fixed value of , the computation of the distance measured in the cielab space creates obviously more edges and enhances the links between neighbors .the quality of the real color distance has direct impact on the increasing quadratic form : the energy of the based - graph signal is more concentrated on the higher frequencies . from proposition [ prop:1 ], we deduce that the values of the coefficients of the based - graph fourier transform in the higher frequencies increase in relation to the weights and the number of edges . to illustrate this point, we obtain a larger value of the quadratic form for color distance ( figure [ fig : deltae2 ] , ) , than for euclidean distance ( figure [ fig : de30 ] , ) .consequently , the change of the distance implies a transfer of details to the higher frequencies , as one can see when comparing figures [ fig : de30 ] and [ fig : deltae2 ] ( or figures [ fig : de5 ] and [ fig : deltae1 ] ) . in figure[ fig : de30 ] the scaling function coefficient has the higher quadratic form ( ) while in figure [ fig : deltae2 ] the higher quadratic form corresponds to the wavelet coefficient on the scale ( ) .this phenomenon is explained in the equation [ eq : pu ] : the difference causes an increase of the fourier coefficients on the higher frequencies , and this increase is controlled by the wavelet kernel at each scale noted ( ) . to conclude , figure [ fig : deltae1](a ) highlights the importance of taking into account the human visual system .indeed color regions appear notably more contrasted than figures [ fig : de5 ] and [ fig : de30 ] and one can observe also a good separation with sharper edges . in figure[ fig : deltae2](a ) the colored regions become more homogeneous because of the effect of parameter . ( a ) using the distance , : ( a ) approximation ( ) , ( b ) scale 1 ( ) , ( c ) scale 2 ( ) and ( d ) scale 3 ( ).,width=264 ] ( a ) using the distance , : ( a ) approximation ( ) , ( b ) scale 1 ( ) , ( c ) scale 2 ( ) and ( d ) scale 3 ( ).,width=264 ] so far , in order to illustrate the importance of the phase information in the fourier transform , many authors have proposed to mix the phase of a first image with the magnitude of the fourier coefficients of a second image . 
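Before turning to that experiment, the perceptual distance used above can be computed with standard tools. The text does not fix which ΔE formula is intended, so the CIEDE2000 variant is taken here as a plausible stand-in, with scikit-image assumed available.

```python
import numpy as np
from skimage import color

def delta_e(rgb1, rgb2):
    """Perceptual colour difference between two RGB triples (values in [0, 1]),
    computed in CIELAB with the CIEDE2000 formula."""
    lab1 = color.rgb2lab(np.asarray(rgb1, dtype=float).reshape(1, 1, 3))
    lab2 = color.rgb2lab(np.asarray(rgb2, dtype=float).reshape(1, 1, 3))
    return float(color.deltaE_ciede2000(lab1, lab2)[0, 0])

print(delta_e([0.20, 0.40, 0.60], [0.25, 0.40, 0.60]))
```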
in this subsection , in order to illustrate how our scheme takes into account the geometry of the data and captures singularities of the image , we suggest a similar operation .we will construct the graph from a first image and compute the analysis with the sgwt on a second image .in our example we use the image [ fig : imagetest](a ) to construct the graph , then the sgwt is applied on the image [ fig : imagetest](b ) . figures [ fig : geomeuc ] , [ fig : geomlab ] illustrate respectively the representation in the rgb and cielab color space .it is to be noted that the geometry is recovered into the scaling function coefficients .we set the parameter of deviation at a large value in order to minimize the information content of image [ fig : imagetest](b ) at the scaling function coefficients .two observations can be made : * figure [ fig : geomeuc](a ) shows that the euclidean distance is in better accord with textures ( on the left ) because this distance designed in the rgb space recognizes well the data organization .indeed textures are characterized by energy measures : the perceptual approach is therefore not appropriate .* figure [ fig : geomlab](a ) shows that the color difference conserves edges well and keeps more color regions .we also see a preservation of the specular region on objects presented on the original image [ fig : imagetest](a ) .( a ) and applied on image [ fig : imagetest](b ) using the _ ed _ , : ( a ) approximation , ( b ) scale 1 , ( c ) scale 2 and ( d ) scale 3.,width=264 ] ( a ) and applied on image [ fig : imagetest](b ) using the difference , : ( a ) approximation , ( b ) scale 1 , ( c ) scale 2 and ( d ) scale 3.,width=264 ] the experiment above has shown the advantages of the implementation of perceptual features in the analysis of the color data : the color information is well encoded into the wavelet basis .image denoising is one of the fundamental applications of the multiscale analysis .the representation on a graph domain preserves the structure of the data and it ensures also the robustness of the denoising process .we assess the performance of our approach with the two representations of the color data . to compare our proposed denoising methods in the same context ( multiscale transform and thresholding ) , we compute the classical denoising method based on an undecimated d8 daubechies wavelet transform ( uwt ) with three scales .we have used the undecimated wavelet transform because this decomposition obtains better denoising results than the classical orthogonal wavelet thresholding and the swgt is also a redundant transform .the sgwt is computed with a cubic spline kernel as specified in [ ssec : sgwt ] .the denoising procedure by both wavelet transforms simply consists in thresholding the wavelet coefficients on each color component and computing the inverse transform .hard thresholding of a wavelet coefficient is defined by : with the threshold . 
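The hard-thresholding operator can be written compactly as the usual keep-or-kill rule; a minimal sketch:

```python
import numpy as np

def hard_threshold(w, thr):
    """Keep a wavelet coefficient unchanged when its magnitude exceeds the
    threshold, set it to zero otherwise."""
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) > thr, w, 0.0)

print(hard_threshold([0.1, -2.3, 0.8, 3.0], thr=1.0))   # -> [ 0.  -2.3  0.   3. ]
```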
in our application, is the empirical threshold defined as with variance which is estimated using the absolute median of the wavelet decomposition s first scale .the denoising processing with the perceptual graph wavelet transform consists in algorithm [ algo : denoising ] .a flowchart is proposed in figure [ fig : flowchart_denoising ] .we note that geometrical and color structuration is contained in graph .initialization : color image smooth( ) compute graph of compute we simulate noisy images by adding a gaussian white noise to a color image with a known standard deviation and we then compute the image recovered from the noisy one by each method .one of the drawbacks of the sgwt is the poor denoising performance when applied directly on noisy images .this poor separation of noise is due to the fact that the scaling function is arbitrary and not scale - dependent . in figure[ fig : decbruit ] we show an example of direct application of the sgwt on noisy images .we have a degraded quality of the wavelet coefficient at each scale and the coefficients provide limited details .therefore , such a decomposition depends heavily on the constructed graph , we need a regularized graph which will allow us to estimate the image structure and provides with the wavelet coefficients a good separation between the noise and the information .( a ) using the _ ed _ , : ( a ) approximation , ( b ) scale 1 , ( c ) scale 2 and ( d ) scale 3.,width=264 ] we recall that our aim through this study is to illustrate the advantages of the representation in the cielab color space .to regularize coarsely the noisy image , we apply a gaussian smoothing and we construct the graph on this result .we then compute an analysis with the sgwt on each color component ( rgb ) of the noisy image using a cubic spline wavelet kernel with three scales ( ) .it is to be noted that we did not choose the same value of for the graph construction to compare the results of reconstructions .as we have seen previously for the same value of , the difference allows us to localize more energy in the higher frequencies : this can affect the quality of the reconstructed image .one can address this issue by choosing a lower value of .several techniques of estimation of the clean image and wavelet coefficients thresholding are also discussed in .we discuss the denoising schemes with two color images _ girl _ and _ butterfly _ which propose different textures and colors , specular effects , round objects , edges in different directions , etc .( snr=(1.8,3.1,1.6)db , ssim=0.916 , qssim=0.905 ) ; ( e ) , ( snr=(4.2,6.3,5.4)db , ssim=0.947 , qssim=0.939 ) ; ( f ) uwt ( snr=(3.3,5,3.8)db , ssim=0.931 , qssim=0.927 ) . ,width=264 ] ( snr=(14.0,12.7,10.3)db , ssim=0.918 , qssim=0.898 ) ; ( e ) , ( snr=(14.0,12.9,10.5)db , ssim=0.928 , qssim=0.907 ) ; ( f ) uwt ( snr=(12.6,11.3,9.1)db , ssim=0.921 , qssim=0.916 ) . , width=264 ] ( snr=(-3.8,-3.1,-4.9)db , ssim=0.792 , qssim=0.740 ) ; ( e ) , ( snr=(0.6,0.6,-1.5)db , ssim=0.810 , qssim=0.763 ) ; ( f ) uwt ( snr=(-0.2,-0.5,-2.7)db , ssim=0.817 , qssim=0.819 ) ., width=264 ] one can note that this quality of restoration is confirmed by the measurement of snr ( we have noted the snr measurements on the red , green and blue bands as . 
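Returning briefly to the threshold of algorithm [ algo : denoising ]: one standard way to set such an empirical threshold from the first-scale coefficients is the MAD-based universal threshold sketched below; this is an assumed stand-in rather than the exact expression used here.

```python
import numpy as np

def mad_sigma(first_scale_coeffs):
    """Noise level estimated from the median absolute value of the finest-scale
    wavelet coefficients (the usual MAD estimator for Gaussian noise)."""
    return np.median(np.abs(first_scale_coeffs)) / 0.6745

def universal_threshold(first_scale_coeffs, n_coeffs):
    """sigma * sqrt(2 log n): a common, conservative choice of threshold."""
    return mad_sigma(first_scale_coeffs) * np.sqrt(2.0 * np.log(n_coeffs))
```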
the structural similarity ( ssim ) index is computed to measure the quality of the reconstruction of the original geometrical and structural information .this measure is associated with the value component of the color image .we compute also the quaternion structural similarity index ( qssim ) that takes into account the quality of the reconstruction of the color information . in figures [ fig:1g](e ) and[ fig:1b](e ) , we observe that the restoration in cielab color space provides satisfactory results with a low level of noise . indeed , the psychovisual consideration improves the visual quality of noisy image and ensures the conservation of the discontinuity between the homogeneous regions .one can observe that the reconstruction in the rgb color space in figures [ fig:1g](f ) and [ fig:1b](f ) indicates a good result but a lot of details of the image have disappeared . comparing the uwt method in figures [ fig:1g](f ) and [ fig:1b](f ) , we observe that important details are preserved ( such as the reflection on the shield ) with our method .the graph structuration allows us to connect vertices similar in terms of color distance or perceptual difference .this indicates that the classical measures are not always appropriate for estimating the degradation : the ssim and qssim are greater in the noisy image in figure [ fig:1g](b ) , because these measures are particularly sensitive to very small variations and the low noise does not enough damage the structure of our original image in figure [ fig:1g](a ) . however the image denoising with our method using the psychovisual information in figure [ fig:1g](e ) is better in terms of ssim , qssim and snr than the reference denoising using uwt in figure [ fig:1g](f ) .when we increase the noise level ( figure [ fig:2g](b ) ) , it is difficult to construct a graph without a good estimation of the coarse approximation of the original image .this is due to the problem of the estimation of the initial graph. a simple gaussian smoothing ( figure [ fig:2g](c ) ) does not produce a good noise - free coarse representation of data and one can observe a cartoon - like effect on the homogeneous zones of the denoised image in figures [ fig:2g](d ) and [ fig:2g](e ) .it is to be noted that there is no global perceptual difference between the uwt and the sgwt defined with the euclidian distance .both these restorations are however different from one antoher with a better enhancement of edges for the homogeneous regions for the sgwt method ( figure [ fig:2g](d ) ) in comparison to the uwt method ( figure [ fig:2g](f ) ) . in red , smoothed image in green and uwt in magenta).,width=264 ]we propose to assess the quality of methods by measuring structural similarity indices ( ssim and qssim ) between denoised images and the original one for different levels of noise . in figure[ fig : ssimcomparaison](a ) the ssim highlights that the method based on the sgwt with the difference obtains a better reconstruction than the same method with the _ ed_. 
in figure [ fig : ssimcomparaison](b ) the qssim highlights the same results .we observe that the method based on the sgwt with the difference obtains a best perceptual reconstruction for a slight noise .when the standard deviation of noise is greater than 5.5 , the uwt d8 obtains a better perceptual reconstruction compared to our based - graph decomposition .indeed , one can not obtain a good first estimation of the based - graph image after a simple regularization when the noise is too high .it will be necessary in the future to define a better strategy to estimate the based - graph coarse approximation of the image .we propose to extend the scope of applications to the inpainting in color images .we extend the principle of inpainting with classical wavelet on the graph domain .our approach consists in the use of a classical thresholding iterative process .let be the observed signal and the signal with missing pixels at locations , an additive noise .the estimation can be obtained by resolving the following optimization : where is a regularizer which imposes an _ a priori _ knowledge and should be adapted to the noise level . by definition , is sparse in the dictionary ( synthesis operator ) and is obtained by linear combination .we compute a sparse set of coefficients denoted in a frame : with the sparsity regularizer defined by : the problem corresponds to a linear ill posed inverse problem .it can be solved by an iterative scheme with the hard thresholding operator . in the case of the sgwt ,the wavelet transform is computed in respecting the structure of data , the solution is also obtained by the numerical scheme presented in algorithm [ algo:2 ] .initialize + ; compute the graph can not be constructed for damaged zones : we should therefore provide graph construction techniques more suitable for missing data .to connect vertices in the spatial domain with edges , a regular mesh represents a simpler tool to estimate the data structure of the missing areas ( cf figure [ fig : topo ] ) . from this principlewe propose to introduce a regular mesh within the processing areas .db , ssim=0.88 , qssim=0.89 ) ; ( b ) _ ed _( snr= , ssim=0.93 , qssim=0.92 ; ( c ) distance ( snr= , ssim=0.93 , qssim=0.93).,width=264 ] we have applied the graph - based wavelet inpainting method on the image in figure [ fig : inpainting](a ) .one can notice that the color information is diffused in the damaged regions and results show clearly the limitation of the method when applied on textured regions , that is a classical problem in the inpainting using a pde - based method .we propose to restore this image with an inpainting processing using the rgb color space [ fig : inpainting](b ) and using the cielab color space to compute the graph ( figure [ fig : inpainting](c ) ) .the graph structuration allows us to regularize similar zones whose vertices are connected . in term of snr ,the inpainting results with a graph structure computed with an euclidian distance and those computed with a are close except on the green component . the quality metrics ( ssim and qssim ) highlight a good reconstruction of structures .we have introduced a framework which takes into account the intrinsic geometry and the color information of an image .indeed we have developed a perceptual wavelet transform based on the graph of the image and , in particular , on the color difference in the graph computation . 
furthermore using the laplacian quadratic form to measure the amount of energy of information helps understanding the behavior of this new representation .moreover , the quadratic form depicts the influence of parameters on the localization of information across the wavelet scales .the application on the image denoising shows that the restoration in the cielab space allows to enhance homogeneous zones with respect to their ruptures and keeping some important visual details .it is to be noted that the use of the geodesic distance allows us to preserve the geometry of the image .this provides an enhancement of regions and does not create any artifact or blur .we illustrate the respect of color in a restoration processing by introducing perceptual information in the graph computation . we have compared this new representation with an approach based on the euclidian distance .we have also introduced the perceptual sgwt in a classical inpainting process and the results are correct but do not depend on the two distances for graph computation .our work yields promising results and shows the relevance of taking into consideration the geometry of the image and perceptual information ( with the cielab space ) for the wavelet transform .this methodology can be extended to a multi- or hyperspectral image where the choice of the distance to compute the graph may have a great influence on the denoising process , inpainting application and texture characterization .r. soulard , p. carre , and c. fernandez - maloigne , `` vector extension of monogenic wavelets for geometric representation of color images , '' _ image processing , ieee transactions on _ * 22 * , 10701083 ( 2013 ) .s. k. narang and a. ortega , `` lifting based wavelet transforms on graphs , '' in _ proceedings : apsipa asc 2009 : asia - pacific signal and information processing association , 2009 annual summit and conference _ , 441444 , asia - pacific signal and information processing association , 2009 annual summit and conference , international organizing committee ( 2009 ) .m. gavish , b. nadler , and r. r. coifman , `` multiscale wavelets on trees , graphs and high dimensional data : theory and applications to semi supervised learning , '' in _ proceedings of the 27th international conference on machine learning ( icml-10 ) _ , 367374 ( 2010 ) .m. crovella and e. kolaczyk , `` graph wavelets for spatial traffic analysis , '' in _infocom 2003 .twenty - second annual joint conference of the ieee computer and communications .ieee societies _ , * 3 * , 18481857 , ieee ( 2003 ) . l. grady and m .-jolly , `` weights and topology : a study of the effects of graph construction on 3d image segmentation , '' in _ medical image computing and computer - assisted intervention miccai 2008 _ , d. metaxas , l. axel , g. fichtinger , and g. szkely , eds . , _ lecture notes in computer science _ * 5241 * , 153161 , springer berlin heidelberg ( 2008 ) .j. cousty , g. bertrand , l. najman , and m. couprie , `` watershed cuts : minimum spanning forests and the drop of water principle , '' _ pattern analysis and machine intelligence , ieee transactions on _ * 31 * , 13621374 ( 2009 ) .l. vincent and p. soille , `` watersheds in digital spaces : an efficient algorithm based on immersion simulations , '' _ pattern analysis and machine intelligence , ieee transactions on _ * 13 * , 583598 ( 1991 ) .d. i. shuman , s. k. narang , p. frossard , a. ortega , and p. 
vandergheynst , `` the emerging field of signal processing on graphs : extending high - dimensional data analysis to networks and other irregular domains , '' _ signal processing magazine , ieee _ * 30*(3 ) , 8398 ( 2013 ) .g. sharma , w. wu , and e. n. dalal , `` the ciede2000 color - difference formula : implementation notes , supplementary test data , and mathematical observations , '' _ color research & application _ * 30*(1 ) , 2130 ( 2005 ) .d. k. hammond , l. jacques , and p. vandergheynst , `` image denoising with nonlocal spectral graph wavelets , '' in _ image processing and analysis with graphs : theory and practice _ , o. lzoray , ed ., crc press ( 2012 ) .z. wang , a. c. bovik , h. r. sheikh , and e. p. simoncelli , `` image quality assessment : from error visibility to structural similarity , '' _ image processing , ieee transactions on _ * 13*(4 ) , 600612 ( 2004 ) .
|
In this paper, we propose a numerical strategy to define a multiscale analysis for color and multicomponent images based on the representation of data on a graph. Our approach consists in computing the graph of an image using psychovisual information and analysing it by means of the spectral graph wavelet transform. We suggest introducing the color dimension into the computation of the weights of the graph and using the geodesic distance as the means of distance measurement. We have thus defined a wavelet transform based on a graph with perceptual information by using the CIELAB color distance. This new representation is illustrated with denoising and inpainting applications. Overall, by introducing psychovisual information into the graph computation for the graph wavelet transform we obtain very promising results. The results in image restoration therefore highlight the interest of an appropriate use of color information. Correspondence: David Helbert, e-mail: david.helbert-poitiers.fr
|
a social dilemma is a situation , where actions that ensure or enhance individual prosperity harm the well - being on the collective level .public goods such as social benefit systems or the environment are particularly prone to the exploitation by individuals who want to profit at the expense of others .while collective cooperation would be favorable , individual free - riding ( `` defection '' ) is tempting , which may end in a collapse of solidarity known as `` tragedy of the commons '' .while several mechanisms that prevent defection from taking over have been discovered so far , the identification of conditions for the survival and spreading of cooperation among selfish individuals still remains a grand challenge , which is addressed by scientists from various fields of research , including physics .the puzzle is most frequently tackled within the framework of evolutionary game theory .in contrast to the famous prisoner s dilemma , which studies cooperation ( c ) and defection ( d ) in pairwise interactions , the public goods game addresses cooperation and defection within _groups_. in the latter , cooperators contribute to the public good , while defectors do not .irrespective of the strategy , all contributions are summed up , multiplied with a factor , and then equally divided among all members of the group .thus , defectors bear no cooperation costs , while enjoying the same benefits as contributors , which makes it profitable to defect and tends to cause a spreading of free - riders .remarkably enough , however , individuals cooperate much more in public goods situations than expected . this requires the identification of mechanisms that can sustain cooperation in public goods games .punishment has been identified as one possible route to cooperation , but its effectiveness depends on whether the participation in the public goods game is optional or not .social diversity and volunteering may also promote cooperation in public goods games , as does a random exploration of strategies . in this paper, we investigate the impact of punishment on the evolution of cooperation in structured populations , focusing on the case of a minimal number of pure strategies .punishment is considered by adding the strategy of punishing cooperators ( pc ) or , alternatively , of punishing defectors ( pd ) .both punishing strategies sanction other defectors with a fine at a personal cost .our main interest is to clarify how the so - called `` institution of punishment '' influences the general cooperation level , if it is executed by players who either cooperate or defect .we investigate the possible similarities and differences in the mechanisms leading to the final system states and the underlying dynamics .it turns out that , in the two variants of the model ( the one with the additional pc strategy and the one with the pd strategy ) , punishment promotes cooperation through completely different mechanisms . as a consequence, the impact of punishment in structured populations can be significantly different . while we describe the details of our model in section [ model ] , we discuss the results of computer simulations in section [ results ] and summarize our findings in section [ discussion ] .the public goods game is played on a periodic square lattice .each site of the lattice is occupied by one player , represented by the index .initially , all three strategies ( c , d , and pc or pd ) are assumed to have the same frequency , and they are randomly and uniformly distributed over the grid . 
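To make the dilemma concrete, the payoff from a single public goods game can be computed directly (the numbers are illustrative and each cooperator's contribution is normalized to 1):

```python
def share(n_cooperators, group_size, r):
    """Each member's share of the public good: contributions are summed,
    multiplied by the synergy factor r and divided equally."""
    return r * n_cooperators / group_size

G, r, nc = 5, 4.0, 3
print("cooperator payoff:", share(nc, G, r) - 1)   # pays the unit contribution
print("defector payoff:  ", share(nc, G, r))       # free-rides on the same pot
# With r < G, defecting always raises an individual's payoff, yet universal
# cooperation (payoff r - 1) beats universal defection (payoff 0): the dilemma.
```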
for the sake of simplicity , every player participates in groups ( consisting of the focal individual and the 4 nearest neighbors each ) .we should also note that our results basically remain valid , when varying the group size or the interaction network within reasonable limits .the only crucial feature is the limited number of interacting neighbors in the structured population . in accordance with the standard definition of the public goods game , cooperators ( c and pc )contribute an asset to the public good and defectors ( d and pd ) contribute nothing .subsequently , the sum of contributions in a group is multiplied by the `` synergy factor '' .the resulting amount is then equally shared among all members of the group , irrespective of their strategy . in this waythe defector strategies ( d and pd ) try to exploit the cooperator strategies ( c and pc ) .summing up the shares of all groups that a player belongs to yields the value .this value corresponds to his or her overall payoff , if no punishment is applied .otherwise , the overall payoff quantifying the `` fitness '' of player is obtained by subtracting punishment costs and/or punishment fines .if the strategy of player is d or pd , player will be punished with a fine resulting in , where the sum runs over all the groups containing player . is given by the number of punishing players ( pc and pd ) in each group ( not considering player ) , divided by .furthermore , if pc or pd , player will have to bear the punishment cost resulting in , where the sum runs again over all the groups containing player . is given by the number of defectors around player in each group , divided by .in other words , the punishing strategies ( pc and pd ) make an extra contribution to keep the punishment and , as we will see , also cooperation alive . to update the strategy of players ,we employ a monte carlo simulation procedure .each elementary step involves the random selection of a focal player and of one nearest neighbor .following the determination of payoffs and as described above , player takes over the strategy of player with probability } \, , \label{fermi}\ ] ] where denotes the uncertainty of strategy adoption .in the limiting case , player copies the strategy of player if and only if . for , however , under - performing strategies may also be adopted sometimes , for example , due to errors in the evaluation of payoffs . 
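A minimal sketch of the payoff and imitation rules just described is given below. All function and variable names are ours, and two details are assumptions rather than quotations from the text: the normalisation of the fine and cost terms by the number of other group members (G - 1), and the explicit Fermi form W = 1/(1 + exp((P_x - P_y)/K)) for the adoption probability, whose formula is not fully legible above. Subtracting a cooperator's own contribution of 1 per group is likewise the usual convention, assumed here.

```python
# Sketch of the spatial public goods game with one punishing strategy.
# Normalisations by (G - 1) and the Fermi form of the adoption probability
# are assumptions; parameter values are placeholders.
import numpy as np

C, D, PC, PD = 0, 1, 2, 3          # strategy codes (only PC *or* PD is present in a given model)
COOPERATORS = {C, PC}              # contribute 1 to the public good
DEFECTORS   = {D, PD}              # contribute nothing, get punished
PUNISHERS   = {PC, PD}             # pay a cost to fine defectors

def groups_of(i, j, L):
    """The G = 5 overlapping groups player (i, j) belongs to: one centred on itself
    and one centred on each of its 4 nearest neighbours (periodic square lattice)."""
    centres = [(i, j), ((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]
    return [[(ci, cj), ((ci - 1) % L, cj), ((ci + 1) % L, cj),
             (ci, (cj - 1) % L), (ci, (cj + 1) % L)] for ci, cj in centres]

def payoff(lattice, i, j, r, fine, cost):
    """Overall payoff ('fitness') of player (i, j), summed over its 5 groups."""
    L, G = lattice.shape[0], 5
    s = lattice[i, j]
    P = 0.0
    for group in groups_of(i, j, L):
        n_coop = sum(lattice[m] in COOPERATORS for m in group)
        # equal share of the multiplied pot, minus the own contribution (assumed convention)
        P += r * n_coop / G - (1.0 if s in COOPERATORS else 0.0)
        others = [m for m in group if m != (i, j)]
        if s in DEFECTORS:    # fined by every punisher present in the group
            P -= fine * sum(lattice[m] in PUNISHERS for m in others) / (G - 1)
        if s in PUNISHERS:    # pays a cost for every defector it punishes in the group
            P -= cost * sum(lattice[m] in DEFECTORS for m in others) / (G - 1)
    return P

def adoption_probability(P_x, P_y, K=0.5):
    """Fermi rule (assumed form): probability that player x copies neighbour y."""
    return 1.0 / (1.0 + np.exp((P_x - P_y) / K))
```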
during one full iteration, the strategy of all players may be copied once on average .the computational results presented below have been obtained for lattices with sites , where is chosen between 400 and 3000 ( large enough to avoid the accidental disappearance of a strategy ) .the final fractions of all three strategies were obtained after up to iterations ( depending on how quickly the fractions stabilized ) .the presented data were averaged over sufficiently many runs to ensure a low variability of the results ( 5 to 30 runs , depending on the system size ) .for well - mixed interactions , when a random sample of players engages in public goods games with the two strategies c and d only , defectors spread and the tragedy of the commons results for .this undesirable outcome does not significantly change by adding punishing strategies ( pc and pd ) , because the latter have to bear additional punishment costs , which reduce their competitiveness .accordingly , the social dilemma persists in the presence of punishing strategies , and for well - mixed interactions defectors still spread in the system .it is furthermore worth noting that conventional cooperators ( c ) , who avoid extra costs by punishment efforts , can be considered as `` second - order free - riders '' , as they exploit the defection - suppressing benefits created by punishers .this is actually the reason why punishing cooperators tend to disappear , which finally weakens the cooperators in their battle against defectors . in other words ,the tragedy of the commons results because `` lazy ( non - punishing ) cooperators '' crowd out their `` friends '' , the punishing cooperators , who are needed for their own survival .as nowak and may pointed out for the prisoner s dilemma , a fixed interaction network in structured populations facilitates network reciprocity , which is beneficial for cooperators .the same mechanism can be found for the two - strategy spatial public goods game as well . using the parametrization of our model ,cooperators manage to survive , if , and crowd out the other strategies , if .the impact of additional punishing strategies ( pc and pd ) on structured population was also studied by several research groups .it turns out that the condition of a fixed and finite interaction neighborhood can resolve the problem of second - order free - riding by allowing punishing cooperators to separate themselves from pure cooperators , thereby escaping a direct competition and exploitation . in this paper , we study two minimalist models , where only one type of punishing strategy is considered besides conventional cooperators and defectors . in other words ,we explore the possible impact of punishing cooperators and punishing defectors separately .the corresponding models will be called the `` pc model '' and the `` pd model '' , respectively .representative phase diagrams for the two minimalist models are presented in fig .[ phd ] , using the same value of the synergy factor . in both diagrams , each region( `` phase '' ) is named after the strategies , which survive over time and contribute to the final strategy distribution .a small value of the punishment fine does not significantly change the behavior of the system , given a finite punishment cost .generally , however , the system behavior depends in a sensitive way on the actual values of punishment cost and fine . 
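The Monte Carlo procedure sketched above (random initial strategies on an L x L periodic lattice, elementary steps that pick a random focal player and a random neighbour, one full iteration corresponding to L*L update attempts on average, long relaxation and averaging of the final strategy fractions) could be organised roughly as follows. This hypothetical driver reuses the payoff() and adoption_probability() helpers from the previous sketch; the default lattice size and iteration count are deliberately tiny placeholders, far below the values quoted in the text, and a production run would need optimised code.

```python
# Hypothetical simulation driver, building on payoff() and adoption_probability()
# (and the strategy constants C, D, PC, PD) sketched above.
import numpy as np

def run_simulation(L=50, r=3.5, fine=0.4, cost=0.1, K=0.5,
                   strategies=(C, D, PC), n_iterations=1000, rng=None):
    rng = rng or np.random.default_rng()
    # random, uniform initial distribution of the strategies over the grid
    lattice = rng.choice(strategies, size=(L, L))
    N = L * L
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_iterations):
        for _ in range(N):                       # one full Monte Carlo iteration
            i, j = rng.integers(L), rng.integers(L)
            di, dj = neighbours[rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L  # random nearest neighbour
            P_x = payoff(lattice, i, j, r, fine, cost)
            P_y = payoff(lattice, ni, nj, r, fine, cost)
            if rng.random() < adoption_probability(P_x, P_y, K):
                lattice[i, j] = lattice[ni, nj]
    # stationary strategy fractions (in practice averaged over several runs)
    return {s: float(np.mean(lattice == s)) for s in strategies}
```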
in case of the pc model , punishingcooperators always prevail for a sufficiently large fine , independently on the cost value .if the cost is lower than a critical value ( for ) , the application of a sufficiently large fine will drive the system into a state , where the punishing strategy replaces its non - punishing counterpart .( as we will see , a similar behavior can be observed for the pd model , but the explanation is completely different . ) the critical cost value that limits the existence of a mixed d+pc phase decreases by reducing the synergy factor , and the phase disappears completely for sufficiently low values of .accordingly , the system turns from d - only to a pc - only phase , similar to what is found in the public goods game with all four strategies ( c , d , pc , and pd ) .the system always leaves the punishment - free state via a discontinuous first - order phase transition , while the transition between the mixed d+pc phase and the pc - only phase is continuous .( the critical behavior of this transition will be discussed in the next subsection . ) the global cooperation level , i.e. the sum of fractions of cooperators and of punishing cooperators , increases monotonously with the fine , as the inset shows . in case of the pd model ( right panel of fig .[ phd ] ) , the impact of punishment is limited to a finite region of the punishment cost ( for ) .below this cost value , the impact of punishment starts similarly to the pc model : when the fine value is increased , a first - order phase transition occurs , which goes along with a considerable increase in the fraction of cooperators . beyond a certain value , however , a further increase of the fine decreases the level of cooperation , and the system eventually returns to a phase that is characteristic for a system without punishment . as a consequence of the observed reentrant phase transition, there exists an optimal level of the punishment fine , for which the fraction of cooperators becomes maximal .this can be understood based on a pattern formation mechanism described subsection 3.3 .the mentioned critical -value that limits the emergence of the punishing strategy decreases as we increase the value of the synergy factor , and it disappears around .as we will see , this is closely related to the fact that too large fines do _ not _ influence the system behavior .to study the phase transitions in more detail , we have plotted the stationary fractions of all strategies for both models in fig .[ cross ] . in case of the pc model ( fig .[ cross]a ) , the fraction of punishing cooperators can increase at the cost of defectors , as soon as cooperators are eliminated from the system .interestingly , second - order free - riders disappear out of a sudden , as soon as the punishment fine passes a critical threshold . at this threshold , `` lazy '' , non - punishing cooperators are essentially replaced by punishing ones . as the punishment fine is further increased , the fraction of defectors ( ) decreases gradually and becomes zero above a certain value of the fine . 
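Phase diagrams of this kind can, in principle, be mapped out by scanning the fine at fixed cost and recording which strategies survive. The toy sweep below, built on the run_simulation() sketch above with placeholder parameter values and a crude survival threshold, only illustrates the bookkeeping, not the extensive simulations behind the reported diagrams.

```python
# Illustrative parameter sweep over the punishment fine at fixed cost; uses the
# run_simulation() sketch and the strategy constants defined earlier.
import numpy as np

def surviving_strategies(fractions, threshold=1e-3):
    return {s for s, f in fractions.items() if f > threshold}

def fine_sweep(cost=0.1, r=3.5, fines=np.linspace(0.0, 1.0, 11), model=(C, D, PD)):
    phases = {}
    for fine in fines:
        fractions = run_simulation(r=r, fine=fine, cost=cost, strategies=model)
        phases[round(float(fine), 3)] = surviving_strategies(fractions)
    # e.g. something like {0.0: {C, D}, 0.4: {C, PD}, 1.0: {C, D}, ...} would signal
    # the reentrant behaviour of the PD model discussed in the text
    return phases
```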
the present nonequilibrium continuous phase transition from the fluctuating d+pc phase to the absorbing pc phase agrees with the directed percolation universality class conjecture .namely , the interactions amongst players are short - ranged , and the order parameter , which is the fraction of defectors , becomes zero at the critical value of the fine , where the system arrives at the single absorbing all - pc state .accordingly , the ( static ) exponents of the phase transition are expected to belong to the universality class of directed percolation , for which with in two spatial dimensions .figure [ cross]c shows the decay of the defector concentration at a fixed cost of when approaches the critical value of the punishment fine .the numerically determined critical exponent is well compatible with the mentioned exponent 0.584 for directed percolation , which is represented in fig .[ cross]c by the separate solid line . in the pd model ,the fraction of punishing defectors rises suddenly from zero to a finite value at a critical threshold of the fine value , as in the other minimal model ( see fig .[ cross]b ) .however , as defectors disappear , punishing defectors only reach half of the fraction that defectors had in the previous c+d phase .this difference signals already that another type of mechanism must be responsible for the spreading of the punishing strategy in the pd model .it turns out to be crucial that the fraction of punishing defectors goes down , as the punishment is increased .this is , because punishing defectors ( pd ) punish not only pure defectors ( d ) , but also each other a behavior that is called `` hypocritical punishment '' .consequently , defectors can spread again above a certain value of the punishment fine .when this happens , the fraction of cooperators starts to fall , while the fraction of punishing defectors decreases further ( until it reaches zero ) .therefore , for high values of the fine , the system arrives in a state that is identical to the one for negligible fines ( ) . in other words ,the system behavior becomes exactly the same as for the spatial public goods game without punishment .the critical behavior of the pd model is more interesting than the one of the pc model , because two continuous phase transitions can be observed as the fine is increased ( for a fixed cost value ) . in both casesthe system leaves a three - strategy ( c+d+pd ) phase for a two - strategy phase c+pd or c+d , when the fine is decreased or increased . as we will see in the next subsection , the mechanism determining the stationary patterns in the last two phases are significantly different . despite this , as fig .[ cross]d illustrates , the exponents of the phase transitions agree within the accuracy of numerical estimates .the value is , which is very close to the previously mentioned directed percolation exponent . 
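An exponent like the one quoted above can be extracted from such data by a least-squares fit of log(rho_D) against log(fine_c - fine) close to the transition, rho_D being the defector fraction that serves as order parameter. The sketch below demonstrates only the fitting step, on synthetic data generated with a known exponent; it is not an analysis of the actual simulation data.

```python
# Power-law fit of an order parameter near a continuous transition,
# rho_D ~ (fine_c - fine)**beta, demonstrated on synthetic data.
import numpy as np

def estimate_beta(fines, defector_fractions, fine_c):
    """Least-squares slope of log(rho_D) versus log(fine_c - fine)."""
    x = np.log(fine_c - np.asarray(fines, dtype=float))
    y = np.log(np.asarray(defector_fractions, dtype=float))
    beta, _ = np.polyfit(x, y, 1)
    return beta

# self-test on synthetic data with the directed-percolation value beta = 0.584
rng = np.random.default_rng(0)
fine_c, beta_true = 0.38, 0.584
fines = np.linspace(0.30, 0.375, 12)
rho_d = (fine_c - fines) ** beta_true * np.exp(rng.normal(0.0, 0.02, fines.size))
print(estimate_beta(fines, rho_d, fine_c))   # should come out close to 0.584
```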
to explore the differences between the punishment - promoting mechanisms in the pc model and the pd model , we have plotted the fraction of each strategy as a function of time ( see fig .[ evol ] ) .the punishment cost and fine were chosen such that the final strategy distribution contained punishing players ( d+pc or c+pd , respectively ) .for the pc model ( left ) , the randomly mixed initial state is particularly beneficial for the exploitation of cooperative strategies by defectors .accordingly , rises rapidly , while both and fall .defectors spread almost everywhere , but a number of islands made up of cooperative strategies can survive , where cooperative behavior is effective thanks to network reciprocity .it is important to note that , in the beginning , c and pc players may form mixed cooperative islands together . however , when defectors are absent in the neighborhood , the c and pc strategies result in identical payoffs , and the strategy update dynamics defined by eq .( [ fermi ] ) results in a voter model kind of logarithmic coarsening within the cooperative islands ( since the c and pc strategies are equivalent in the bulk of c+pc domains , where there are no defectors and accordingly also no punishment ) . although the coarsening dynamics is logarithmically slow , the c and pc strategies in the cooperative islands segregate quickly , as the sizes of these islands are small .after this time period , the end of which is indicated in the left panel of fig .[ evol ] by an arrow , homogeneous clusters of cooperators ( c ) and punishing cooperators ( pc ) fight separately against defectors ( d ) . when the punishment fine is high enough , punishing cooperators can outcompete defectors , but defectors are superior to cooperators ( thanks to the low synergy factor ) . consequently , the fraction of punishing cooperators increases quickly , and cooperators are eventually crowded out . finally , cooperators disappear completely and , with them , second - order free - riders . as a conclusion ,to get rid of second - order free - riding , the spatial segregation of the c and pc strategies is crucial .the evolutionary dynamics is significantly different for the pd model ( see the right panel of fig .[ evol ] ) .initially , similarly to the pc model , both defecting strategies ( d and pd ) can benefit from the well - mixed distribution at the beginning . as pure defectors are not burdened by punishment costs ,their fraction is further increasing with time .after some iterations , however , small cooperative clusters that have survived start growing thanks to network reciprocity , while the number of defectors is reduced , since they perform poorly in the defecting environments they have created .when the fraction of pd players reaches a certain value , the mixture of c and pd strategies can form an alliance that is beneficial for both strategies .on the one hand , pd players can collect the payoff in the vicinity of cooperators , which allows them to survive despite their costs for punishing defectors . on the other hand ,the payoff of cooperators is competitive , because the punishment efforts of pd players keep the fraction of defectors in the neighborhood of cooperators at a low level .accordingly , both strategies benefit from the alliance , and they can crowd out the d players together . it is essential that the alliance can only work , when the mixture of cooperators and punishing defectors is just right . 
when crowding out defectors , neither c nor pd players can occupy the gained territory alone .instead , as soon as the c+pd alliance starts to work , the fractions of both strategies rise simultaneously with an almost constant ratio ( as we have checked by complementary evaluations ) .the start of this phase is marked by an arrow in right panel of fig .it appears that the delicate balance between both members of the alliance is self - organized and self - stabilizing . for both models ,the above described pattern formation mechanisms can be nicely seen in snapshots of the time evolution .figure [ snapshots ] illustrates , how the strategy distribution evolves in case of the pc model ( top ) and the pd model ( bottom ) , when the same parameter values are used as in fig .the first snapshot for the pc model shows the moment , when c and pc players form common islands together , but the segregation of both cooperative strategies is just beginning . in the second plot ,both cooperative strategies have already largely segregated from each other , and now mainly struggle with defectors .the third plot shows the nearly final state , where c and pc players still form independent clusters , but punishing cooperators have largely replaced cooperators , as they are more successful in the battle with defectors .the finally resulting strategy distribution containing only d and pc players is illustrated in the last plot .for the pd model , the first plot in the bottom of fig .[ snapshots ] shows a state , where the alliance of c and pd players is not yet established , so that defectors can spread .however , when the optimal mixture of cooperators and punishing defectors emerges ( second plot in the bottom row ) , the two allied strategies c and pd can continuously crowd out defectors ( third plot in the bottom row ) . it can be seen that the ratio of c and pd players stays essentially constant while both strategies spread , which indicates a self - stabilizing mechanism . if only cooperators would conquer the territory previously occupied by defectors , the fraction of punishing defectors would locally decrease below a critical level , and cooperators would become vulnerable to the exploitation by defectors . on the other hand , if only punishing cooperators would spread , they would not find enough cooperators to exploit , while they require this for their survival . as a consequence, the ratio of c and pd strategies is maintained at a typical value , which supports the spreading of the alliance best .the concept of an optimal ratio of alliance members can explain , why the phase of c+pd disappears for large fine values or high values of the synergy factor .too large synergy factors keep defecting strategies at a low level , while too large fines prevent that the required fraction of pd players occurs .this is , why the alliance does not work , and d players can spread again . at first sight , the phase diagram of the pd model and the functional dependence of the cooperation level in fig .[ cross ] appear to be paradoxical : when the punishment fine is increased ( while the punishment cost is fixed , something that can happen in case of escalation ) , the cooperation level is _ reduced _ , although punishment intends quite the opposite .based on the above described argument , however , this paradox can be resolved : too big fines prevent the occurrence of the right mixture of the two strategies and , thereby , the emergence of a functioning alliance . 
to support our argument , we have plotted stationary strategy distributions in the pd model for different fine values . as the top panel of fig . [ stationary ] shows , we have used identical punishment costs to study the effect of the fine .figure [ stationary]a illustrates the case , where the punishment is too low to eliminate defectors , so that the resulting strategy distribution consists of cooperators and defectors , as in the spatial public goods game without punishment for . when the fine is increased , the alliance of cooperators and punishing defectors can crowd out non - punishing defectors , which enhances the level of cooperation ( see fig . [stationary]b ) .a new phase , which additionally includes the d strategy , starts when the alliance between the c and pd strategies does not work anymore , because the fine is too large , hence the fraction of pd players is too small ( see fig . [stationary]c ) . for higher fines , pd players can not efficiently punish d players anymore , and as the fraction of punishing defectors goes towards zero , the system returns to the state that is typical for the spatial public goods game without punishment ( cf . fig .[ stationary]d with fig .[ stationary]a ) . for the pd model , one could, therefore , conclude that the `` institution of punishment '' fails when values of the punishment fine are set too high .in order to explore the impact of punishment in spatial public goods games , we have studied two minimalist models by adding either punishing cooperators ( pc ) or punishing defectors ( pd ) as an additional behavioral strategy . we have found that both punishing strategies can promote cooperation for synergy factors , for which defectors would spread in case of well - mixed interactions .as we pointed out , punishing strategies can spread in different ways .punishing cooperators ( pc ) can crowd out `` lazy '' , non - punishing cooperators ( c ) above a certain value of the punishment fine .this solves the `` second - order free - rider problem '' , i.e. the puzzle why people perform punishment efforts despite their costs : the cooperation- and punishment - promoting mechanism is based on spatially restricted interactions between players , which supports the survival of non - defecting strategies via clustering and segregation . through segregation , punishing cooperators can avoid being exploited by pure cooperators and fight against defectors more efficiently .accordingly , defectors ( conventional free - riders ) and non - punishing cooperators ( second - order free - riders ) disappear eventually , if the punishment fine exceeds a certain threshold .larger punishment fines do not have any positive effects .in contrast to punishing cooperators ( pc ) , punishing defectors ( pd ) can not survive alone .they need the presence of cooperators that they can exploit , while the cooperators ( c ) need punishing defectors to punish and contain defection .the functionality of this alliance needs an optimal mixture of strategies to thrive .once the optimal ratio between the c and pd strategies comes into existence , it is maintained by self - stabilization , when conquering the territory of the rival d strategy .if external conditions prevent the establishment of this optimal ratio , the alliance can not work .this explains the paradoxical reentrant behavior found in the phase diagram of the pd model , according to which too high punishment fines imply the same results as no punishment at all . 
while the occurrence of alliances is possible in spatial games with more than two strategies ,as is known from spatial population dynamics , here the resulting outcomes and dynamics provide interesting new examples of this fascinating phenomenon .we acknowledge partial financial support by the future and emerging technologies programme fp7-cosi - ict of the european commission through the project qlectives ( grant no . :231200 ) and by the eth competence center `` coping with crises in complex socio - economic systems '' ( ccss ) through eth research grant ch1 - 01 08 - 2 ( d.h . ), by the hungarian national research fund ( grant k-73449 to a.s . and g.s .) , the bolyai research grant ( to a.s . ) , the slovenian research agency ( grant z1 - 2032 - 2547 to m.p . ) , and the slovene - hungarian bilateral incentive ( grant bi - hu/09 - 10 - 001 to a.s . ,m.p . and g.s . ) .
|
we study the evolution of cooperation in spatial public goods games where , besides the classical strategies of cooperation ( c ) and defection ( d ) , we consider punishing cooperators ( pc ) or punishing defectors ( pd ) as an additional strategy . using a minimalist modeling approach , our goal is to separately clarify and identify the consequences of the two punishing strategies . since punishment is costly , punishing strategies loose the evolutionary competition in case of well - mixed interactions . when spatial interactions are taken into account , however , the outcome can be strikingly different , and cooperation may spread . the underlying mechanism depends on the character of the punishment strategy . in case of cooperating punishers , increasing the fine results in a rising cooperation level . in contrast , in the presence of the pd strategy , the phase diagram exhibits a reentrant transition as the fine is increased . accordingly , the level of cooperation shows a non - monotonous dependence on the fine . remarkably , punishing strategies can spread in both cases , but based on largely different mechanisms , which depend on the cooperativeness ( or not ) of punishers .
|
negative index of refraction and perfect lenses have become one of the most important concepts in metamaterials .the theoretical design of such devices was considerably stimulated by the observation that a negative index of refraction can be understood from transformation optics as a transformation of space that inverts its orientation .based on this idea , not only the flat perfect lens was re - interpreted as a folding of space , but also lenses with different shapes were proposed . in all these conceptsa perfect lens is established by folding of space , such that three points in laboratory space ( one on each side of the lens and one inside the lens ) correspond to a single point in the virtual electromagnetic space that is used to derive the media properties .based on these successes it was natural to conclude that transformation optics is an ideal tool to design perfect imaging devices .recently , it was suggested that perfect imaging should rather be seen as the result of multi - valued maps than an effect of the amplification of evanescent waves .these results suggest to critically review the role of negative index of refraction and perfect lenses within transformation optics . in sec .[ se : negativerefr ] it is reviewed how negative values of permittivity and permeability can emerge in transformation optics .it is pointed out that a negative index of refraction can not be seen as an inherent characteristic of transformation optics similar to the bending of light as used in cloaking .rather , these values are obtained from a clever choice of signs in an ambiguity related to orientation changing transformations .[ se : interface ] presents an argument that can justify the choice of conventions that yields to negative refraction .still , as will be shown in sec .[ se : lenses ] , results from transformation optics based on multi - valued maps should be used with utmost care . in particular , the transformation optics analogue of the pendry - veselago lens neither amplifies evanescent modes nor includes an imaging of the near field .it is the purpose of this section to review how a negative index of refractive appears in transformation optics . in the logic of transformation opticsone starts by writing down a vacuum solution , of the maxwell equations .greek indices are spacetime indices , the sum runs over , whereby is interpreted as time .further explanations on our notations and conventions can be found in the appendix . ] to account for possibly curvilinear coordinates we used the the covariant derivative in three dimensions , , with where is the determinant of the space metric .now a diffeomorphism to a virtual space called electromagnetic space is defined , which locally is implemented as a coordinate transformation .its effect is captured by re - writing the maxwell equations in terms of the new , barred variables .more involved is the new relation among the fields , , and , which in a generic coordinate system takes the form here , are the components of the transformed spacetime metric , from which the transformed space metric follows as .these relations resemble the constitutive relations of a special medium , but of course just describe the same physics as eqs . and , re - written in complicated coordinates . 
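The medium interpretation of such transformed vacuum solutions can be made concrete numerically. The sketch below uses the compact rule eps = mu = J J^T / det(J) for purely spatial maps applied to vacuum, with J the Jacobian of the map evaluated by finite differences; this compact form, the numerical differentiation and the example map are our own illustration and do not reproduce the general metric-dependent relations of the text. Keeping the signed determinant is precisely the convention for which orientation-reversing maps, det(J) < 0, produce negative material parameters, which is where the sign discussion below enters.

```python
# Minimal numerical sketch (our own summary, not the paper's exact equations):
# for a purely spatial map x -> xbar(x) applied to vacuum, take
#     eps = mu = J J^T / det(J),
# with J = d(xbar)/dx.  Orientation-reversing maps give det(J) < 0.
import numpy as np

def jacobian(transform, x, h=1e-6):
    """Central-difference Jacobian of a map R^3 -> R^3 at the point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((3, 3))
    for k in range(3):
        dx = np.zeros(3)
        dx[k] = h
        J[:, k] = (transform(x + dx) - transform(x - dx)) / (2 * h)
    return J

def medium_parameters(transform, x):
    J = jacobian(transform, x)
    det = np.linalg.det(J)
    eps = J @ J.T / det        # signed determinant: the convention allowing negative values
    return eps, eps.copy()     # mu equals eps when the starting solution is vacuum

# example: the orientation-reversing fold xbar = -x (inside a slab)
fold = lambda x: np.array([-x[0], x[1], x[2]])
eps, mu = medium_parameters(fold, [0.3, 0.0, 0.0])
print(np.round(eps, 6))        # diag(-1, -1, -1): the familiar perfect-lens values
```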
to make use of the relations as media parameters ,the solutions , , and are turned back into solutions in terms of the metric in the coordinate system , while keeping the form of the `` constitutive relations '' in terms of .since the maxwell equations only depend on the determinant of the metric , but not on its specific components , this can be achieved by the simple rescaling if , , and are a solution of the maxwell equations with metric and with `` constitutive relations '' , then , , and are a solution in terms of the coordinates with metric and with a constitutive relation in contrast to eq . , which still describe electrodynamics in empty space , the constitutive relations describe electrodynamics in a medium .the basic idea of transformation optics is illustrated in fig .[ fig : tonotation ] , which also summarizes our notation .an additional problem within the program of transformation optics has been pointed out by leonhardt and philbin : consider a transformation of the coordinates that changes the orientation of the manifold , i.e. that maps a right - handed coordinate system onto a left - handed one and vice versa .examples of such transformations are or the combined transformation , .let s consider the second example and assume that we start with a right - handed coordinate system , .since the transformation does not change the determinant of the metric , , we immediately obtain where in the first step the coordinate transformation has been applied and in the second step the result has been re - interpreted in terms of the original space . obviously , under transformations that change the orientation of the manifold the sign of the anti - symmetric tensor changes as well .thus , they lead to a sign error in the maxwell equations since the cross products change sign and we have to conclude that the recipe does not yield solutions of the correct maxwell equations if such orientation changing transformations are allowed . to circumvent this problem it has been suggested in ref . to include an additional sign in the rescalings .indeed , with the new definition with these signs are absorbed in the definition of and .but these new re - interpretations of the solutions also affect the constitutive relations , which now become this leads to the important conclusion that _ transformations that change the orientation of the coordinate system yield media with negative index of refraction . _ since this conclusionis intimately connected with the spacetime symmetries of electrodynamics , it makes sense to review the argument in terms of a spacetime covariant formulation of electrodynamics .thus , the fields and are combined to the field strength tensor , while and become parts of the excitation tensor .the space vectors are found by the identifications and the maxwell equations are rewritten as here , is the covariant spacetime derivative and is the four - current encompassing and .this formulation has the advantage that all spacetime symmetries are manifest and consequently spatial , time and mixed spacetime transformations can be treated on the same footing . while the maxwell - ampre equations are invariant under any change of orientation , the maxwell - faraday equations change sign if __ changes its orientation due to the four - dimensional anti - symmetric tensor .thus , it has been suggested in ref . that the field strength tensor in the medium should be defined as where now indicates a change in the orientation of _ spacetime _ rather than just space. 
thus , also maps that change the orientation of spacetime ( and not necessarily that of space alone ) yield a negative index of refraction . consequently , the signs \bar\sigma in the rescalings have to be replaced by \bar s in this prescription . on second thought , however , it is seen that an eventual change of sign in the maxwell - faraday equations remains without consequences , simply since the sign just appears as an overall factor . thus , even for orientation changing transformations _ the correct maxwell equations in the medium are found without sign ambiguity and there is no need for a negative index of refraction . _ the apparent contradiction is resolved immediately by looking at the identifications of the space fields . in these equations the magnetic fields are unambiguously defined as pseudo - vectors and thus they are reversed if the orientation of the manifold changes . thus , although no signs appear in the maps of spacetime tensors , they reappear in the maps of space vectors : since this prescription changes the signs of the magnetic ( pseudo - vector ) fields in case of orientation changing transformations , the constitutive relations are not affected and consequently there is no room for a negative index of refraction . this exercise shows that some sign changes are unavoidable in the re - interpretation of the transformed vacuum solutions as medium solutions , due to the cross product in the maxwell equations and the fact that the electric fields are vectors , while the magnetic fields are pseudo - vectors . however , there are different ways to treat these signs and , depending on the choice , different maps yield a negative index of refraction . the three prescriptions presented here are summarized in table [ tab : summary ] ; it should be noted that this list is not exhaustive , but further possibilities to distribute the signs are conceivable .
\[
\begin{array}{l|lll}
\mbox{type} & \mbox{(a): \cite{leonhardt:2006nj,leonhardt:2008oe}} & \mbox{(b): \cite{bergamin:2008pa}} & \mbox{(c): minimal} \\ \hline
\mbox{spacetime tensors} & \tilde f_{\mu\nu} = \bar\sigma f_{\mu\nu} & \tilde f_{\mu\nu} = \bar s f_{\mu\nu} & \tilde f_{\mu\nu} = f_{\mu\nu} \\
 & \tilde{\mathcal h}^{\mu\nu} = \frac{\sqrt{-\bar g}}{\sqrt{-g}}\,\bar{\mathcal h}^{\mu\nu} & \tilde{\mathcal h}^{\mu\nu} = \frac{\sqrt{-\bar g}}{\sqrt{-g}}\,\bar{\mathcal h}^{\mu\nu} & \tilde{\mathcal h}^{\mu\nu} = \frac{\sqrt{-\bar g}}{\sqrt{-g}}\,\bar{\mathcal h}^{\mu\nu} \\ \hline
\mbox{space vectors} & \tilde e_i = \bar\sigma\,\bar e_i & \tilde e_i = \bar s\,\bar e_i & \tilde e_i = \bar e_i \\
 & \tilde d^i = \frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar d^i & \tilde d^i = \frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar d^i & \tilde d^i = \frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar d^i \\ \hline
\mbox{space pseudo-vectors} & \tilde b^i = \frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar b^i & \tilde b^i = \bar s\,\bar\sigma\,\frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar b^i & \tilde b^i = \bar\sigma\,\frac{\sqrt{\bar\gamma}}{\sqrt{\gamma}}\,\bar b^i \\
 & \tilde h_i = \bar\sigma\,\bar h_i & \tilde h_i = \bar\sigma\,\bar h_i & \tilde h_i = \bar\sigma\,\bar h_i \\ \hline
\mbox{negative index of refraction} & x^i \rightarrow \bar x^i \ \mbox{changes orientation} & x^\mu \rightarrow \bar x^\mu \ \mbox{changes orientation} & \mbox{never}
\end{array}
\]
which of the three options ( a)-(c ) is the correct one ? there exists no definite answer to this question ; there even exist more possibilities than presented here .
since the re - interpretation of the solutions and are an _ ad - hoc _ manipulation , there exist no strict rules or even mathematical definitions how this should be done . as only requirement , the fields with a tilde must constitute a solution of the maxwell equations in the space with coordinates and a trivial transformation must map the original solution onto itself . from a purely mathematical point of view, option ( c ) clearly appears as the preferable one , since it contains no sign ambiguity at all in the spacetime formulation , or in terms of space vectors changes the signs of and , which are pseudo - vectors and thus have to be odd under a change of orientation of space .still , the possibility to describe materials with negative index of refraction is an important asset of options ( a ) and ( b ) . at this point, one should remember that negative refractive index media are interesting only if they include some interfaces to normal media or empty space .thus , it has to be studied how the different prescriptions match the boundary conditions that have to hold at such an interface . here, we answer this question from the point of view taken in ref . : on both sides of an interface between two transformation media , the solution of the maxwell equations in the medium ( i.e. the solutions with a tilde ) can be described by means of vacuum solutions ( the solutions without tilde or bar . )we thus can ask the question , under which restrictions of the transformations the boundary conditions at the interface are met if the _ same _ vacuum solution , is used on both sides . this provides a sufficient condition for a reflectionless interface and in addition guarantees that the interpretation of transformation optics as `` mimicking a different space '' indeed extends across the interface .the transformations as presented in table [ tab : summary ] are not yet sufficient to perform this task , but we need the expressions of the medium solutions in terms of the original vacuum solution . in the followingwe exclude bi - anisotropic media ; then the transformations for and are while the transformations of and depend on the chosen prescription ( a)(c ) : in these equations we have introduced a new parameter , which just represents the fact that shifting _ all _ fields by a constant does not change the constitutive relation . without loss of generality one can assume and moreover in case of a trivial map .furthermore , is the determinant of the transformation matrix .let us now consider a passive interface between a `` left medium '' ( index ) and a `` right medium '' ( index ) with boundary conditions where is a vector normal to the interface .without loss of generality we can assume an adapted coordinate system in laboratory space , such that the direction normal to the interface is labeled by the coordinate , while the directions parallel to the interface have coordinates , whereby the index takes values . 
as is immediately seen the four conditions and reduce to two restrictions on the transformation if option ( b ) is chosen .then , the vacuum solution extends across the interface if these restrictions are satisfied if the two transformations obey remains unrestricted , in particular is permitted and yields negative index of refraction .these conditions say that the transformed coordinates parallel to the interface must agree on both sides , while the transformation in the orthogonal direction is continuous , but not necessarily differentiable at the interface .furthermore , the time coordinates must agree on both sides . a negative index of refraction results as an inversion of the direction normal to the interface .contrariwise , options ( a ) and ( c ) yield four different conditions . still , for all cases that meet the restrictions option ( a ) reduces to the case ( b ) , since time inversions are excluded .thus in all cases that allow an extension of the vacuum solution across the interface . still , within the prescription ( c ) the boundary conditions and can not be met with the same vacuum solution on both sides of the interface unless , i.e. there is no change of orientation .we thus conclude that there exist good reasons to chose options ( a ) or ( b ) since these prescriptions allow to describe a larger class of interfaces by means of a single vacuum solution than option ( c ) and thereby also allow to describe negative refractive index materials .if negative index of refraction can be made part of transformation optics , how good is this interpretation ? to our knowledge perfect lenses are the only device where transformation optics with negative index of refraction was proposed .thus , we restrict to this example here .a flat lens is associated with the map as is easily seen , any point is mapped on three different points in the virtual electromagnetic space and upon re - interpretation on three different points in laboratory space , whereby in the region a medium with negative index of refraction emerges .this triple valued map was associated with perfect imaging , since any solution of the maxwell equations in the region is reproduced exactly inside the lens at and on the other side of the lens at . sincenegative index of refraction within transformation optics is rather an effect of the choice of signs than an inherent characteristic , one should have a careful look at the lens proposed by the map .the following three conclusions are immediate : 1 . due to causality , the transformation optics lensis strictly limited to stationary situations .it is well known that transformation designed concepts can get in conflict with causality , but mostly this can be resolved by a limitation to a rather narrow bandwidth .however , the folding of space by means of the map limits the application of this concept to strictly stationary situations , simply because any change in the electromagnetic fields at the source point causes an _ instantaneous _ change of the mirror image inside the lens and the image behind the lens .2 . the transformation designed lens can not image a source , but rather triples it .indeed , a situation with a source at the source point , but empty mirror image and image point , is not covered by transformation optics .instead , a source automatically creates a mirror source ( sink ) inside the lens and a second source behind the lens ( see fig .[ fig : lens comparison ] . 
) 3 .consequently , within transformation optics no enhancement of the evanescent waves takes place , which is the working principle of the pendry - veselago lens . as can be seen from fig .[ fig : lens comparison ] , all evanescent waves in the transformation designed lens are easily explained as the evanescent modes generated by one of the three sources .there is no need for an amplification of such modes .in this paper we have reviewed the role of negative index of refraction within transformation optics .it was shown that negative refraction emerges as a consequence of a sign ambiguity and thus should not be seen as an inherent characteristic of transformation optics .indeed , variants of transformation optics without negative refraction are consistent and follow immediately from the expected behavior of the electromagnetic fields under orientation changing transformations .nonetheless , it can be argued that negative refraction should be included , since this formulation allows a simpler description of interfaces between different transformation media . the most important application of a negative index of refraction , the perfect lens , has shortly been reviewed starting from the above observation .most importantly it was found that the transformation designed lens does not amplify the evanescent modes , but at the same time also is unable to image a source .this observation might be important with respect to recent ideas on perfect imaging without negative refraction .also this concept relies on multi - valued maps , notice however that in these works several points in electromagnetic space are mapped onto a single point in laboratory space , rather than vice versa as in the case of the lens discussed here .thus , even within transformation optics this concept is not restricted to stationary situations . as a general conclusionit should be stressed that transformation designed imaging devices should be used with utmost care , in particular if they include negative index of refraction . in many cases ,the analysis essentially is restricted to stationary situations where sources are not imaged , but rather duplicated .consequently , such concepts do not amplify evanescent modes , but they are produced at source and image points by means of the multiple sources .the author would like to thank s. tretyakov , c. simovski , i. nefedov , p. alitalo and a. favaro for stimulating discussions .this project was supported by the academy of finland , project no. 124204 .in this appendix we present our notations and conventions regarding the covariant formulation the maxwell equations on a generic ( not necessarily flat ) manifold and written in general coordinates . for a detailed introduction to the topic we refer to the relevant literature , e.g. .greek indices are spacetime indices and run from 0 to 3 , latin indices space indices with values from 1 to 3 .furthermore an adapted coordinate system is used at the interface , such that , where are the directions parallel to the interface , while is perpendicular . therefore capitallatin indices take values 1,2 . for the metric we use the `` mostly plus '' convention , so the standard flat metric is .time is always interpreted as the zero - component of , . 
with this identification an induced space metric can be obtained as where is the kronecker symbol .this implies as relation between the determinant of the spacetime metric , , and the one of the space metric , , in the relativistically covariant formulation and are combined to the field strength tensor , while and become part of the excitation tensor : &= \begin{pmatrix } 0 & e_1 & e_2 & e_3\\-e_1 & 0 & -c b^3 & c b^2 \\-e_2 & c b^3 & 0 & -c b^1 \\-e_3 & -c b^2 & c b^1 & 0 \end{pmatrix } & [ \mathcal h^{\mu\nu}]&= \frac{1}{\varepsilon_0 \sqrt{g_{00}}}\begin{pmatrix } 0 & -d^1 & -d^2 & -d^3\\ d^1 & 0 & - \frac{h_3}{c } & \frac{h_2}{c } \\ d^2 & \frac{h^3}{c } & 0 & -\frac{h^1}{c } \\ d^3 & -\frac{h_2}{c } & \frac{h_1}{c } & 0 \end{pmatrix}\end{aligned}\ ] ] finally , electric charge and current are combined into a four - current . in this waythe maxwell equations can be written in the compact form the maxwell equations depend on the metric through the covariant derivative . since it is seen that the maxwell equations just depend on the determinant of the metric , but not on its individual components .diffeomorphisms can change the orientation of a manifold , such that a right - handed coordinate system in laboratory space is mapped onto a left - handed one in electromagnetic space .this induces several changes of signs due to the levi - civita tensor that appears in the maxwell equations .the four dimensional levi - civita tensor is defined as \ , & \epsilon^{\mu\nu\rho\sigma } & = - \frac1{\sqrt{-g}}[\mu\nu\rho\sigma]\ , \end{aligned}\ ] ] with = 1 $ ] .the relation between the four - dimensional levi - civita tensors two different spacetimes can be written as where if the corresponding map does not change the orientation of the manifold , otherwise .the reduction of the four dimensional to the three dimensional tensor reads an additional complication arises in the definition of , since the orientation of the spacetime manifold may change without changing the orientation of space ( e.g. a map , changes the orientation of spacetime but not of space . )therefore the corresponding relation should be written as where if the spatial part of the transformation preserves the orientation and otherwise .l. bergamin , `` a coordinate transformation approach to indefinite materials and their perfect lenses , '' in _2nd international congress on advanced electromagnetic materials in microwaves and optics , `` metamaterials 2008 '' _ , p. 591 . 2008 .
|
negative index of refraction has become an accepted part of transformation optics , which is encountered in transformations that change the orientation of the manifold . based on this concept , various designs of perfect lenses have been proposed , which all rely on a folding of space or spacetime , where the maps from electromagnetic space to laboratory space are multi - valued . recently , a new concept for perfect imaging has been proposed by leonhardt and philbin , which also uses multi - valued maps , but does neither include negative index of refraction nor an amplification of evanescent modes . in this context it was speculated that multi - valued maps should be seen as the basis of perfect imaging rather than amplification of evanescent modes . it might be useful to review the standard lens based on negative index of refraction from this point of view . in this paper we show that a negative index of refraction is not an inherent characteristic of transformation optics , but rather appears as a specific choice of a sign ambiguity . furthermore , we point out that the transformation designed lens does not amplify evanescent modes , in contrast to the pendry - veselago lens . instead , evanescent modes at the image point are produced by a duplicated source and thus no imaging of the near field takes place .
|
the levenshtein ( or edit ) metric is a standard tool to estimate the distance between two sequences .it is widely used in linguistics and bioinformatics , and for the recognition of text blocks with isolated mistakes . as is well known , its computational complexity , when applied to two sequences of ( approximately ) the same length , is . since this is a hurdle in many practical applications , it is desirable to replace , or to approximate , the levenshtein ( l ) distance by some quantity of smaller ( preferably linear ) computational complexity .two fast approximation algorithms for edit distances were suggested by , one based on maximal exact matches , the other on suitably restricted subword comparisons between the two sequences ; compare also .this would indeed give , due to their computability from the suffix tree ; see .however , they only provide lower bounds , and hence no complete solution of the problem .it seems possible to estimate probabilistically , with sublinear complexity , whether the l - distance of two sequences is ` small ' or ` large ' ; see .whether an improvement of this rather coarse result or even a replacement of the l - distance is possible , with at most linear complexity and a non - probabilistic outcome , seems open .below , we compare the l - distance with a representative dictionary - based distance .our findings support the conclusion that such a simplification might be difficult or even impossible . on the way , we highlight some interesting properties that have been neglected so far , but seem relevant for a better understanding of such distance concepts .to keep discussion and results transparent , we concentrate on two specific distances , and on binary sequences .we have also tried a number of obvious alternatives , but they did not show any significantly different behaviour . in this sense ,the structure of our example is more likely typical than exceptional .the l - distance of two sequences and ( not necessarily of equal length ) is the minimum number of edit operations ( insertions , deletions , or substitutions ) needed to transform into or vice versa ( * ? ? ?though is closely related to the longest common subsequence ( lcs ) ( loc.cit .11.6.2 ) of and ( and hence to distances based upon it ) , one important difference lies in the possibility of substitutions .so , using the lcs in this context requires some care . for sequences of lengths and , the computational complexity of calculating ( or the lcs ) is , e.g. , when based on the needleman - wunsch algorithm ; see ( * ? ? ? * ch .6.4.2 ) .a generic choice for a dictionary - based metric is where is the full dictionary of , i.e. , the set of all non - empty subwords of , and is the symmetric difference of and .this choice actually disregards the goal of computational simplification , but focuses on the full dictionary information instead , and thus , in some sense , represents the optimal information on the sequences to be compared .it is well known that , using the suffix tree structure , the calculation of closely related dictionary - based distances is possible with linear complexity , e.g. , by means of ukkonen s algorithm ; compare ( * ? ? ?* ch . 6 ) . on the other hand ,further restrictions are likely to reduce the usefulness in relation to the l - distance .both and define a _i.e. , for arbitrary sequences and , the distance satisfies the axioms of a metric ( * ? ? ?* ch . 
2.11 ) :
* d(x , y ) >= 0 ( positivity ) ;
* d(x , y ) = 0 if and only if x = y ( non - degeneracy ) ;
* d(x , y ) = d(y , x ) ( symmetry ) ;
* d(x , z ) <= d(x , y ) + d(y , z ) ( triangle inequality ) .
less clear is the relation between the two distances . since one can easily construct pairs of sequences that are close in one , but not in the other distance , they are certainly not equivalent in the strong sense as also used for norms , compare ( * ? ? ? * ) . they are equivalent in the weaker sense of generating the same topology ( * ? ? ? * ch . 22.5 ) , which is the discrete topology here . however , this is of little use for the question addressed above . the situation does not improve if one replaces the dictionary - based distance by the corresponding normalized quotient , which is another metric , with range in [ 0 , 1 ] . as we shall see below , the situation is actually much worse .
[ figure [ gauss ] : distribution of the l - distance between two random sequences of fixed length ( dots ) and gaussian approximation ( line ) , with the corresponding mean and variance . ]
to get a first impression of the l - distance , we computed the discrete probability distribution of its values for sequences of the same length , under uniform distribution on sequence space . this has long been known to be a reasonable first approach for the comparison of sequences from data bases . up to length 20 , this was done using all possible pairs ; for longer sequences , the distribution was estimated from a sufficiently large random selection of pairs . for one representative length , the result is shown in figure [ gauss ] . for large lengths , the distributions seem to be well described by gaussian ( or normal ) distributions . this qualitative behaviour does not change much and seems to improve with sequence length . one could add weight to this finding by performing a statistical test on gaussianity , which would score well . however , we think that one should not over - interpret this observation , in particular in view of a recent numerical investigation which indicates that a gamma distribution might give an even better description . note that , if extrema over _ local _ alignments are taken , one obtains an extremal value distribution ( * ? ? ? * ) . however , this implies nothing for the _ global _ alignment considered here . the possible ( or approximate ) gaussian nature of this case has been observed before by dayhoff , see ( * ? ? ? * ch . 3 ) and references given there ; a more detailed investigation of tail probabilities can be found in the literature .
[ figure [ meanplot ] : mean of the probability distribution as a function of sequence length , calculated exactly for short sequences and by simulation otherwise ; the solid line shows the least squares fit . ]
[ figure [ varplot ] : variance of the probability distribution as a function of sequence length , calculated exactly for short sequences and by simulation otherwise ; the solid line shows the least squares fit . ]
for this reason , we could only investigate our findings numerically . beyond checking the gaussian behaviour qualitatively , means and variances were calculated for different sequence lengths , both by exact enumeration ( for short sequences ) and by simulation ( for larger lengths ) . it is an interesting question whether the mean and the variance , as functions of sequence length , show power - law behaviour , at least asymptotically .
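Both distances compared in this note are simple to implement, which is how distributions and moments of the kind just discussed can be sampled. The sketch below gives a quadratic-time Levenshtein distance, a dictionary distance taken here as the cardinality of the symmetric difference of the two full subword dictionaries (built by naive enumeration rather than via the suffix-tree construction mentioned above), and a small Monte Carlo estimate of means, variances and the correlation between the two distances for pairs of random binary sequences; the function names, the sequence length and the sample size are our own choices.

```python
# Sketch of the two distances and of a small sampling experiment.
# The naive subword enumeration stands in for a linear-time suffix-tree approach.
import random
import numpy as np

def levenshtein(a, b):
    """Classical O(len(a)*len(b)) dynamic program (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (free if equal)
        prev = cur
    return prev[-1]

def dictionary(a):
    """Set of all non-empty subwords of a."""
    return {a[i:j] for i in range(len(a)) for j in range(i + 1, len(a) + 1)}

def dictionary_distance(a, b):
    """Size of the symmetric difference of the two full dictionaries."""
    return len(dictionary(a) ^ dictionary(b))

def sample_joint(n=50, pairs=500, rng=None):
    """Means, variances and correlation of the two distances for random binary pairs."""
    rng = rng or random.Random(1)
    dl, dd = [], []
    for _ in range(pairs):
        x = ''.join(rng.choice('01') for _ in range(n))
        y = ''.join(rng.choice('01') for _ in range(n))
        dl.append(levenshtein(x, y))
        dd.append(dictionary_distance(x, y))
    dl, dd = np.array(dl), np.array(dd)
    return dl.mean(), dl.var(), dd.mean(), dd.var(), np.corrcoef(dl, dd)[0, 1]

print(sample_joint())
```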
our data ,see figures [ meanplot ] and [ varplot ] , are compatible with an asymptotically linear growth of the mean and an asymptotic power law for the variance , both with a square - root correction term ( for which we do not have any particular justification ) .such predictions and conjectures are presently discussed by various people . in particular , the power law for the variance would be in line with analogous observations for the lcs , compare .since there has recently been some doubt in the correctness of this finding , it requires further corroboration and investigation .a similar finding ( though with larger fluctuations ) applies to the distribution of the values for random pairs . however , there is no compelling reason to investigate this specific distance in detail , as it was mainly selected for illustrative purposes and does not seem to be closely related to one of the standard problems of probability theory .more interesting , and also more relevant , is the question for the _ joint _ distribution of and .a necessary requirement for a useful relation between the two distances would be a strong correlation .however , as figure [ corr ] shows for sequences of length , there is little correlation at all the joint distribution is rather well described by the product of the two gaussians needed for the marginal distributions .this observation could be quantified with some effort , but we refrain from doing so because it would not contribute to the interpretation at this stage .our finding means that , at least on the level of the full sequence space or for the alignment of two random sequences ( as analyzed in our simulations ) , the distances and are closer to being statistically independent of each other than to being useful approximations of one another . for and , obtained from a simulation with random pairs of sequences of length .both marginal distributions are approximately gaussian , with slightly larger fluctuations for the distribution of -values.,scaledwidth=80.0% ]our findings are to be interpreted with care . they do not rule out a simplified approach to l - type distances , at least when restricted to ( possibly relevant ) subsets of sequences .however , they seem to indicate that subword comparison leads to statistically independent information , at least when viewed on the full sequence space . clearly , different distance concepts can and should be tried .moreover , a rigorous stochastic analysis of the various limit distributions is necessary to clarify the picture obtained from the simulations . as long as analytic results ( e.g. , via limit theorems ) are unavailable , it would also help to perform a more detailed statistical analysis of the various distributions , including clear - cut statistical tests .in particular , it would be extremely relevant to also consider suitable subspaces of the full sequence space , such as those extractable from existing data bases . though this is clearly far beyond the scope of this short note , we believe that it would be a rewarding task for future investigations .it is our pleasure to thank e. baake and d. lenz for helpful discussions , and f. merkl and m. vingron for useful hints on the literature .financial support from british council ( arc 1213 ) and daad is gratefully acknowledged .batu , t. , ergn , f. , kilian , j. , magen , a. , raskhodnikova , s. , rubinfeld , r. , and sami , r. 2003 . a sublinear algorithm for weakly approximating edit distance , 316324 . in _ proc . of the 35th annual acm symp . 
on theory of computing_. acm press, new york, ny. pearson, w.r., and wood, t.c. 2001. statistical significance in biological sequence comparison, 39–65. in balding, d.j., bishop, m., and cannings, c., eds., _handbook of statistical genetics_. wiley, chichester.
|
the levenshtein distance is an important tool for the comparison of symbolic sequences, with many appearances in genome research, linguistics and other areas. for efficient applications, an approximation by a distance of smaller computational complexity is highly desirable. however, our comparison of the levenshtein distance with a generic dictionary-based distance indicates their statistical independence. this suggests that a simplification along this line might not be possible without restricting the class of sequences. several other probabilistic properties are briefly discussed, emphasizing various questions that deserve further investigation. * surprises in approximating levenshtein distances * michael baake, uwe grimm and robert giegerich für mathematik, universität bielefeld, postfach 100131, 33501 bielefeld, germany + of mathematics, the open university, milton keynes mk7 6aa, uk + fakultät, universität bielefeld, postfach 100131, 33501 bielefeld, germany + * keywords : * global alignment ; distance concepts ; statistical independence ; computational complexity
|
experts probability assessments are often evaluated on _ calibration _ , which measures how closely the frequency of event occurrence agrees with the assigned probabilities . for instance , consider all events that an expert believes to occur with a 60% probability .if the expert is well calibrated , 60% of these events will actually end up occurring . even though several experiments have shown that experts are often poorly calibrated [ see , e.g. , ] , these are noteworthy exceptions . in particular , argue that higher self - reported expertise can be associated with better calibration .calibration by itself , however , is not sufficient for useful probability estimation . consider a relatively stationary process , such as rain on different days in a given geographic region , where the observed frequency of occurrence in the last 10 years is 45% . in this setting an expertcould always assign a constant probability of 0.45 and be well - calibrated .this assessment , however , can be made without any subject - matter expertise .for this reason the long - term frequency is often considered the baseline probability a naive assessment that provides the decision - maker very little extra information .experts should make probability assessments that are as far from the baseline as possible .the extent to which their probabilities differ from the baseline is measured by _ sharpness _ [ ] .if the experts are both sharp and well calibrated , they can forecast the behavior of the process with high certainty and accuracy .therefore , useful probability estimation should maximize sharpness subject to calibration [ see , e.g. , ]. there is strong empirical evidence that bringing together the strengths of different experts by combining their probability forecasts into a single consensus , known as the _ crowd belief _ , improves predictive performance .prompted by the many applications of probability forecasts , including medical diagnosis [ ] , political and socio - economic foresight [ ] , and meteorology [ ] , researchers have proposed many approaches to combining probability forecasts [ see , e.g. , for some recent studies , and for a comprehensive overview ] .the general focus , however , has been on developing one - time aggregation procedures that consult the experts advice only once before the event resolves .consequently , many areas of probability aggregation still remain rather unexplored .for instance , consider investors aiming to assess whether a stock index will finish trading above a threshold on a given date . to maximize their overall predictive accuracy, they may consult a group of experts repeatedly over a period of time and adjust their estimate of the aggregate probability accordingly .given that the experts are allowed to update their probability assessments , the aggregation should be performed by taking into account the temporal correlation in their advice .this paper adds another layer of complexity by assuming a heterogeneous set of experts , most of whom only make one or two probability assessments over the hundred or so days before the event resolves .this means that the decision - maker faces a different group of experts every day , with only a few experts returning later on for a second round of advice .the problem at hand is therefore strikingly different from many time - series estimation problems , where one has an observation at every time point or almost every time point . as a result ,standard time - series procedures like arima [ see , e.g. 
, ] are not directly applicable .this paper introduces a time - series model that incorporates self - reported expertise and captures a sharp and well - calibrated estimate of the crowd belief .the model is highly interpretable and can be used for the following : * analyzing under and overconfidence in different groups of experts , * obtaining accurate probability forecasts , and * gaining question - specific quantities with easy interpretations , such as expert disagreement and problem difficulty .this paper begins by describing our geopolitical database .it then introduces a dynamic hierarchical model for capturing the crowd belief .the model is estimated in a two - step procedure : first , a sampling step produces constrained parameter estimates via gibbs sampling [ see , e.g. , ] ; second , a calibration step transforms these estimates to their unconstrained equivalents via a one - dimensional optimization procedure .the model introduction is followed by the first evaluation section that uses synthetic data to study how accurately the two - step procedure can estimate the crowd belief .the second evaluation section applies the model to our real - world geopolitical forecasting database .the paper concludes with a discussion of future research directions and model limitations .forecasters were recruited from professional societies , research centers , alumni associations , science bloggers and word of mouth ( ) .requirements included at least a bachelor s degree and completion of psychological and political tests that took roughly two hours .these measures assessed cognitive styles , cognitive abilities , personality traits , political attitudes and real - world knowledge .the experts were asked to give probability forecasts ( to the second decimal point ) and to self - assess their level of expertise ( on a 1-to-5 scale with 1 at all expert and 5 expert ) on a number of 166 geopolitical binary events taking place between september 29 , 2011 and may 8 , 2013 .each question was active for a period during which the participating experts could update their forecasts as frequently as they liked without penalty .the experts knew that their probability estimates would be assessed for accuracy using brier scores .this incentivized them to report their true beliefs instead of attempting to game the system [ ] .in addition to receiving ] .partition the experts into groups based on some individual feature , such as self - reported expertise , with each group sharing a common multiplicative bias term for .collect these bias terms into a bias vector ^t ] .ideally this probability maximizes sharpness subject to calibration [ for technical definitions of calibration and sharpness see ] . even though a single expert is unlikely to have access to all the available information, a large and diverse group of experts may share a considerable portion of the available information .the collective wisdom of the group therefore provides an attractive proxy for .given that the experts may believe in false information , hide their true beliefs or be biased for many other reasons , their probability assessments should be aggregated via a model that can detect potential bias , separate signal from noise and use the collective opinion to estimate . 
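To make this structure concrete, here is a minimal simulation sketch (an illustration of the verbal description, not the fitted model of this paper): a hidden crowd belief evolves on the logit scale, and each incoming forecast equals a group-specific multiplicative bias times the hidden value plus noise. The bias values b, the noise levels tau and sigma, the forecast arrival rate and the absence of a drift term are all assumptions made only for this toy example.

```python
# Illustrative sketch only: simulate a hidden logit-scale crowd belief X_t and
# sparse, group-biased expert forecasts Y_{t,j} = b_{g(j)} * X_t + noise.
import numpy as np

rng = np.random.default_rng(1)
T = 100                       # days the question is open
tau, sigma = 0.15, 0.6        # hidden-process / observation noise (assumed)
b = np.array([0.5, 0.6, 0.7, 0.8, 0.9])   # multiplicative bias per expertise
                                          # group; values < 1 = underconfident

X = np.cumsum(rng.normal(0.0, tau, size=T))   # hidden process (no drift here)

forecasts = []                # (day, expertise group, reported probability)
for t in range(T):
    for g in range(5):
        if rng.random() < 0.05:               # forecasts arrive sparsely
            y = b[g] * X[t] + rng.normal(0.0, sigma)
            p = 1.0 / (1.0 + np.exp(-y))      # back to the probability scale
            forecasts.append((t, g + 1, round(p, 2)))

print(f"simulated {len(forecasts)} forecasts; "
      f"true final probability {1/(1+np.exp(-X[-1])):.2f}")
```

Forecasts generated this way are sparse and biased in exactly the sense discussed below, which is what the aggregation model has to undo.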
in our modelthe experts are assumed to be , on average , a multiplicative constant away from .therefore , an individual element of can be interpreted as a group - specific _ systematic bias _ that labels the group either as overconfident [ or as underconfident [ .see section [ model ] for a brief discussion on different bias structures .any other deviation from is considered _random noise_. this noise is measured in terms of and can be assumed to be caused by momentary over - optimism ( or pessimism ) , false beliefs or other misconceptions .the _ random fluctuations _ in the hidden process are measured by and are assumed to represent changes or shocks to the underlying circumstances that ultimately decide the outcome of the event .the _ systematic component _ allows the model to incorporate a constant signal stream that drifts the hidden process .if the uncertainty in the question diminishes [ , the hidden process drifts to positive or negative infinity .alternatively , the hidden process can drift to zero , in which case any available information does not improve predictive accuracy [ .given that all the questions in our data set were resolved within a prespecified timeframe , we expect for all . asfor any future time , the model can be used for time - forward prediction as well .the prediction for the aggregate logit probability at time is given by an estimate of .naturally the uncertainty in this prediction grows in . to make such time - forward predictions , it is necessary to assume that the past population of experts is representative of the future population .this is a reasonable assumption because even though the future population may consist of entirely different individuals , on average the population is likely to look very similar to the past population . in practice , however , social scientists are generally more interested in an estimate of the current probability than the probability under unknown conditions in the future .for this reason , our analysis focuses on probability aggregation only up to the current time . for the sake of model identifiability , it is sufficient to share only one of the elements of among the questions .in this paper , however , all the elements of are assumed to be identical across the questions because some of the questions in our real - world data set involve very few experts with the highest level of self - reported expertise .the model can be extended rather easily to estimate bias at a more general level .for instance , by assuming a hierarchical structure , where denotes the self - reported expertise of the expert in question , the bias can be estimated at an individual - level .these estimates can then be compared across questions .individual - level analysis was not performed in our analysis for two reasons .first , most experts gave only a single prediction per problem , which makes accurate bias estimation at the individual - level very difficult .second , it is unclear how the individually estimated bias terms can be validated .if the future event can take upon possible outcomes , the hidden state is extended to a vector of size and one of the outcomes , for example , the one , is chosen as the base case to ensure that the probabilities will sum to one at any given time point .each of the remaining possible outcomes is represented by an observed process similar to ( [ observedpr ] ) . 
given that this multinomial extension is equivalent to having independent binary - outcome models , the estimation and properties of the model are easily extended to the multi - outcome case .this paper focuses on binary outcomes because it is the most commonly encountered setting in practice .this section introduces a two - step procedure , called _ sample - and - calibrate _( sac ) , that captures a well - calibrated estimate of the hidden process without sacrificing the interpretability of our model . given that for any yield the same likelihood for , the model as described by ( [ observedpr ] ) and ( [ hiddenpr ] ) is not identifiable .a well - known solution is to choose one of the elements of , say , , as the reference point and fix . in section [ syntheticdata ] we provide a guideline for choosing the reference point .denote the constrained version of the model by where the trailing input notation , ( a ) , signifies the value under the constraint .given that this version is identifiable , estimates of the model parameters can be obtained .denote the estimates by placing a hat on the parameter symbol .for instance , and represent the estimates of and , respectively .these estimates are obtained by first computing a posterior sample via gibbs sampling and then taking the average of the posterior sample .the first step of our gibbs sampler is to sample the hidden states via the _ forward - filtering - backward - sampling _ ( ffbs ) algorithm .ffbs first predicts the hidden states using a kalman filter and then performs a backward sampling procedure that treats these predicted states as additional observations [ see , e.g. , for details on ffbs ] .given that the kalman filter can handle varying numbers or even no forecasts at different time points , it plays a very crucial role in our probability aggregation under sparse data . our implementation of the sampling step is written in c and runs quite quickly . to obtain 1000 posterior samples for 50 questions each with 100 time points and50 experts takes about 215 seconds on a 1.7 ghz intel core i5 computer .see the supplemental article for the technical details of the sampling steps [ ] and , for example , for a discussion on the general principles of gibbs sampling . given that the model parameters can be estimated by fixing to any constant , the next step is to search for the constant that gives an optimally sharp and calibrated estimate of the hidden process .this section introduces an efficient procedure that finds the optimal constant without requiring any additional runs of the sampling step .first , assume that parameter estimates and have already been obtained via the sampling step described in section [ sampling_step ] .given that for any , we have that and .recall that the hidden process is assumed to be sharp and well calibrated .therefore , can be estimated with the value of that simultaneously maximizes the sharpness and calibration of . a natural criterion for this maximizationis given by the class of _ proper scoring rules _ that combine sharpness and calibration [ ] .due to the possibility of _ complete separation _ in any one question [ see , e.g. 
, ] , the maximization must be performed over multiple questions .therefore , where is the event indicator for question .the function is a strictly proper scoring rule such as the negative brier score [ ] or the logarithmic score [ ] the estimates of the unconstrained model parameters are then given by notice that estimates of and are not affected by the constraint .this section uses synthetic data to evaluate how accurately the sac - procedure captures the hidden states and bias vector .the hidden process is generated from standard brownian motion .more specifically , if denotes the value of a path at time , then \end{aligned}\ ] ] gives a sequence of calibrated logit probabilities for the event .a hidden process is generated for questions with a time horizon of .the questions involve 50 experts allocated evenly among five expertise groups .each expert gives one probability forecast per day with the exception of time when the event resolves .the forecasts are generated by applying bias and noise to the hidden process as described by ( [ observedpr ] ) .our simulation study considers a three - dimensional grid of parameter values : where varies the bias vector by ^t \beta ] , and discard the rest . given that this implies ignoring a portion of the data , we adopt a censoring approach similar to by changing and to and , respectively .our results remain insensitive to the exact choice of censoring as long as this is done in a reasonable manner to keep the extreme probabilities from becoming highly influential in the logit space .the second matter is related to the distribution of the class labels in the data . if the set of occurrences is much larger than the set of nonoccurrences ( or vice versa ), the data set is called _imbalanced_. on such data the modeling procedure can end up over - focusing on the larger class and , as a result , give very accurate forecast performance over the larger class at the cost of performing poorly over the smaller class [ see , e.g. , ] . fortunately , it is often possible to use a well - balanced version of the data .the first step is to find a partition and of the question indices such that the equality is as closely approximated as possible .this is equivalent to an np - hard problem known in computer science as the _ partition problem _ : determine whether a given set of positive integers can be partitioned into two sets such that the sums of the two sets are equal to each other [ see , e.g. , ] .a simple solution is to use a greedy algorithm that iterates through the values of in descending order , assigning each to the subset that currently has the smaller sum [ see , e.g. , for more details on the _ partition problem _ ] . after finding a well - balanced partition ,the next step is to assign the class labels such that the labels for the questions in are equal to for or .recall from section [ calibration_step ] that represents the event indicator for the question . 
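A minimal sketch of the greedy heuristic mentioned above is given below; it only approximates the solution of the NP-hard partition problem, and the function and variable names are ours, not the paper's.

```python
# Sketch of the greedy balancing heuristic: assign each question, in decreasing
# order of its number of time points T_k, to whichever half currently has the
# smaller total. This only approximates the NP-hard partition problem.
def greedy_partition(T):
    """T: dict mapping question index k -> number of time points T_k."""
    A, B = set(), set()
    sum_A = sum_B = 0
    for k in sorted(T, key=T.get, reverse=True):
        if sum_A <= sum_B:
            A.add(k); sum_A += T[k]
        else:
            B.add(k); sum_B += T[k]
    return A, B

# toy usage with made-up question lengths
lengths = {1: 120, 2: 30, 3: 75, 4: 75, 5: 10, 6: 90}
A, B = greedy_partition(lengths)
print(A, sum(lengths[k] for k in A), B, sum(lengths[k] for k in B))
```

With such a split in hand, the class labels of one half can be flipped as described next, which yields the balanced set of indicators.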
to define a balanced set of indicators for all , let where , and .the resulting set is a balanced version of the data .this procedure was used to balance our real - world data set both in terms of events and time points .the final output splits the events exactly in half ( ) such that the number of time points in the first and second halves are 8737 and 8738 , respectively .the goal of this section is to evaluate the accuracy of the aggregate probabilities made by sac and several other procedures .the models are allowed to utilize a training set before making aggregations on an independent testing set . to clarify some of the upcoming notation , let and be index sets that partition the data into training and testing sets of sizes and , respectively .this means that the question is in the training set if and only if . before introducing the competing models , note that all choices of thinning and burn - in made in this section are conservative and have been made based on pilot runs of the models .this was done to ensure a posterior sample that has low autocorrelation and arises from a converged chain .the competing models are as follows : 1 ._ simple dynamic linear model _ ( sdlm ) .this is equivalent to the dynamic model from section [ model ] but with and .thus , where is the aggregate logit probability . given that this model does not share any parameters across questions , estimates of the hidden process can be obtained directly for the questions in the testing set without fitting the model first on the training set .the gibbs sampler is run for 500 iterations of which the first 200 are used for burn - in .the remaining 300 iterations are thinned by discarding every other observation , leaving a final posterior sample of 150 observations .the average of this sample gives the final estimates .2 . _ the sample - and - calibrate procedure both under the brier _ ( ) _ and the logarithmic score _ ( ) .the model is first fit on the training set by running the sampling step for 3000 iterations of which the first 500 iterations are used for burn - in .the remaining 2500 observations are thinned by keeping every fifth observation .the calibration step is performed for the final 500 observations .the out - of - sample aggregation is done by running the sampling step for 500 iterations with each consecutive iteration reading in and conditioning on the next value of and found during the training period .the first 200 iterations are used for burn - in .the remaining 300 iterations are thinned by discarding every other observation , leaving a final posterior sample of 150 observations .the average of this sample gives the final estimates ._ a fully bayesian version of _ ( ) .denote the calibrated logit probabilities and event indicators across all questions with and , respectively .the posterior distribution of conditional on is given by .the likelihood is \\[-8pt ] \nonumber & & \qquad \propto\prod _ { k=1}^k \prod_{t=1}^{t_k } { \operatorname{logit}}^{-1 } \bigl(x_{t , k}(1)/\beta \bigr)^{z_k } \bigl ( 1- { \operatorname{logit}}^{-1 } \bigl ( x_{t , k}(1)/\beta \bigr ) \bigr)^{1-z_k}.\end{aligned}\ ] ] as in , the prior for is chosen to be locally uniform , . 
given that this model estimates and simultaneously , it is a little more flexible than .posterior estimates of can be sampled from ( [ ose2 ] ) using generic sampling algorithms such as the metropolis algorithm [ ] or slice sampling [ ] .given that the sampling procedure conditions on the event indicators , the full conditional distribution of the hidden states is not in a standard form .therefore , the metropolis algorithm is also used for sampling the hidden states .estimation is made with the same choices of thinning and burn - in as described under _ sample - and - calibrate_. 4 .due to the lack of previous literature on dynamic aggregation of expert probability forecasts , the main competitors are exponentially weighted versions of procedures that have been proposed for static probability aggregation : a. _ exponentially weighted moving average _ ( ewma ) as described in section [ syntheticdata ] .b. _ exponentially weighted moving logit aggregator _ ( ewmla ) .this is a moving version of the aggregator that was introduced in .the ewmla aggregate probabilities are found recursively from where the vector collects the bias terms of the expertise groups , and the parameters and are learned from the training set by } \sum _ { k \in s_{\mathrm{train } } } \sum_{t=1}^{t_k } \bigl(z_k - \hat{p}_{t , k}(\alpha , \mathbf { b } ) \bigr)^2.\ ] ] c. _ exponentially weighted moving beta - transformed aggregator(ewmba)_. the static version of the beta - transformed aggregator was introduced in .a dynamic version can be obtained by replacing in the ewmla description with , where is the cumulative distribution function of the beta distribution and is given by ( [ weighted mean ] ) .the parameters and are learned from the training set by } \sum_{k \in s_{\mathrm{train } } } \sum _ { t=1}^{t_k } \bigl(z_k - \hat{p}_{t , k}(\alpha , \nu , \tau , \bolds{\omega } ) \bigr)^2 \nonumber \\[-8pt ] \\[-8pt ] \eqntext{\displaystyle\mbox{s.t . }\sum_{j=1}^j \omega_j = 1.}\end{aligned}\ ] ] the competing models are evaluated via a 10-fold cross - validation that first partitions the 166 questions into 10 sets such that each set has approximately the same number of questions ( 16 or 17 questions in our case ) and the same number of time points ( between 1760 and 1764 time points in our case ) .the evaluation then iterates 10 times , each time using one of the 10 sets as the testing set and the remaining 9 sets as the training set .therefore , each question is used nine times for training and exactly once for testing. the testing proceeds sequentially one testing question at a time as follows : first , for a question with a time horizon of , give an aggregate probability at time based on the first two days .compute the brier score for this probability .next give an aggregate probability at time based on the first three days and compute the brier score for this probability .repeat this process for all of the days .this leads to brier scores per testing question and a total of 17,475 brier scores across the entire data set . 
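As a hedged illustration of this rolling protocol (not of the actual aggregators compared here), the sketch below scores a simple exponentially weighted average of the reported probabilities day by day with the Brier score; the decay parameter and the toy forecast data are assumptions.

```python
# Sketch of the rolling evaluation described above, using a simple EWMA of the
# reported probabilities as a stand-in aggregator (alpha is an assumed decay).
import numpy as np

def ewma_aggregate(probs_by_day, t, alpha=0.9):
    """Exponentially weighted average of all forecasts made on days <= t."""
    num = den = 0.0
    for s in range(t + 1):
        w = alpha ** (t - s)
        for p in probs_by_day.get(s, []):
            num += w * p
            den += w
    return num / den if den > 0 else 0.5      # fall back to the baseline

def rolling_brier(probs_by_day, z, T):
    """Brier scores for each day, mimicking the day-by-day test protocol."""
    scores = []
    for t in range(1, T):               # first prediction uses the first two days
        p_hat = ewma_aggregate(probs_by_day, t)
        scores.append((p_hat - z) ** 2)
    return scores

# toy usage: one question open for 5 days that eventually occurs (z = 1)
forecasts = {0: [0.4], 1: [0.55, 0.6], 3: [0.7]}
print(np.round(rolling_brier(forecasts, z=1, T=5), 3))
```

The scores reported next come from the actual competing models evaluated in exactly this sequential fashion.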
@* model * & & & & + + sdlm & 0.100 ( 0.156 ) & 0.066 ( 0.116 ) & 0.098 ( 0.154 ) & 0.102 ( 0.157 ) + & 0.097 ( 0.213 ) & * 0.053 * ( 0.147 ) & 0.100 ( 0.215 ) & 0.098 ( 0.215 ) + & 0.096 ( 0.190 ) & 0.056 ( 0.134 ) & 0.097 ( 0.190 ) & 0.098 ( 0.192 ) + & * 0.096 * ( 0.191 ) & 0.056 ( 0.134 ) & * 0.096 * ( 0.189 ) & * 0.098 * ( 0.193 ) + ewmba & 0.104 ( 0.204 ) & 0.057 ( 0.120 ) & 0.113 ( 0.205 ) & 0.105 ( 0.206 ) + ewmla & 0.102 ( 0.199 ) & 0.061 ( 0.130 ) & 0.111 ( 0.214 ) & 0.103 ( 0.200 ) + ewma & 0.111 ( 0.146 ) & 0.080 ( 0.101 ) & 0.116 ( 0.152 ) & 0.112 ( 0.146 ) + + sdlm & 0.089 ( 0.116 ) & 0.064 ( 0.085 ) &0.106 ( 0.141 ) & 0.092 ( 0.117 ) + & 0.083 ( 0.160 ) & * 0.052 * ( 0.103 ) & 0.110 ( 0.198 ) & 0.085 ( 0.162 ) + & 0.083 ( 0.142 ) & 0.055 ( 0.096 ) & 0.106 ( 0.174 ) & 0.085 ( 0.144 ) + & * 0.082 * ( 0.142 ) & 0.055 ( 0.096 ) & * 0.105 * ( 0.174 ) & * 0.085 * ( 0.144 ) + ewmba & 0.091 ( 0.157 ) & 0.057 ( 0.095 ) & 0.121 ( 0.187 ) & 0.093 ( 0.164 ) + ewmla & 0.090 ( 0.159 ) & 0.064 ( 0.109 ) & 0.120 ( 0.200 ) & 0.090 ( 0.159 ) + ewma & 0.102 ( 0.108 ) & 0.080 ( 0.075 ) & 0.123 ( 0.130 ) & 0.103 ( 0.110 ) + table [ prediction ] summarizes these scores in different ways .the first option , denoted by _scores by day _ , weighs each question by the number of days the question remained open .this is performed by computing the average of the 17,475 scores .the second option , denoted by _ scores by problem _ , gives each question an equal weight regardless of how long the question remained open .this is done by first averaging the scores within a question and then averaging the average scores across all the questions .both scores can be further broken down into subcategories by considering the length of the questions .the final three columns of table [ prediction ] divide the questions into _ short _ questions ( 30 days or fewer ) , _ medium _ questions ( between 31 and 59 days ) and _ long _ problems ( 60 days or more ) .the number of questions in these subcategories were 36 , 32 and 98 , respectively .the bolded scores indicate the best score in each column .the values in the parenthesis quantify the variability in the scores : under _ scores by day _ the values give the standard errors of all the scores . under _scores by problem _ , on the other hand , the values represent the standard errors of the average scores of the different questions .as can be seen in table [ prediction ] , achieves the lowest score across all columns except _ short _ where it is outperformed by .it turns out that is overconfident ( see section [ calibration ] ) .this means that underestimates the uncertainty in the events and outputs aggregate probabilities that are typically too near 0.0 or 1.0 .this results into highly variable performance .the short questions generally involved very little uncertainty . on such easy questions, overconfidence can pay off frequently enough to compensate for a few large losses arising from the overconfident and drastically incorrect forecasts .sdlm , on the other hand , lacks sharpness and is highly underconfident ( see section [ calibration ] ) .this behavior is expected , as the experts are underconfident at the group level ( see section [ expertbias ] ) and sdlm does not use the training set to explicitly calibrate its aggregate probabilities .instead , it merely smooths the forecasts given by the experts .the resulting aggregate probabilities are therefore necessarily conservative , resulting into high average scores with low variability . 
similar behavior is exhibited by ewma that performs the worst of all the competing models .the other two exponentially weighted aggregators , ewmla and ewmba , make efficient use of the training set and present moderate forecasting performance in most columns of table [ prediction ] .neither approach , however , appears to dominate the other .the high variability and average of their performance scores indicate that their performance suffers from overconfidence .a calibration plot is a simple tool for visually assessing the sharpness and calibration of a model .the idea is to plot the aggregate probabilities against the observed empirical frequencies .therefore , any deviation from the diagonal line suggests poor calibration. a model is considered underconfident ( or overconfident ) if the points follow an s - shaped ( or -shaped ) trend . to assess sharpness of the model ,it is common practice to place a histogram of the given forecasts in the corner of the plot . given that the data were balanced , any deviation from the the baseline probability of 0.5 suggests improved sharpness .the top and bottom rows of figure [ calibration - out ] present calibration plots for sdlm , , and under in- and out - of - sample probability aggregation , respectively .each setting is of interest in its own right : good in - sample calibration is crucial for model interpretability . in particular ,if the estimated crowd belief is well calibrated , then the elements of the bias vector can be used to study the amount of under or overconfidence in the different expertise groups .good out - of - sample calibration and sharpness , on the other hand , are necessary properties in decision making . to guide our assessment , the dashed bands around the diagonal connect the point - wise , bonferroni - corrected [ ] 95% lower and upper critical values under the null hypothesis of calibration .these have been computed by running the bootstrap technique described in for 10,000 iterations .the in - sample predictions were obtained by running the models for 10,200 iterations , leading to a final posterior sample of 1000 observations after thinning and using the first 200 iterations for burn - in .the out - of - sample predictions were given by the 10-fold cross - validation discussed in section [ forecasting ] .overall , sac is sharp and well calibrated both in- and out - of - sample with only a few points barely falling outside the _ point - wise _ critical values . given that the calibration does not change drastically from the top to the bottom row , sac can be considered robust against overfitting .this , however , is not the case with that is well calibrated in - sample but presents overconfidence out - of - sample . 
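For completeness, here is a hedged sketch of how the calibration summary underlying such a plot can be computed: bin the aggregate probabilities and compare the mean forecast in each bin with the empirical event frequency. The bin width, the synthetic data and the names are ours, and the bootstrap bands used in the paper are not reproduced.

```python
# Sketch of a calibration (reliability) summary: bin forecasts, compare the
# mean forecast in each bin with the observed event frequency.
import numpy as np

def calibration_table(p, z, n_bins=10):
    """p: forecast probabilities, z: 0/1 outcomes (same length)."""
    p, z = np.asarray(p, float), np.asarray(z, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p >= lo) & (p < hi) if hi < 1.0 else (p >= lo) & (p <= hi)
        if mask.any():
            rows.append((p[mask].mean(), z[mask].mean(), int(mask.sum())))
    return rows   # (mean forecast, empirical frequency, count) per bin

# toy usage with synthetic, deliberately underconfident forecasts
rng = np.random.default_rng(0)
truth = rng.random(5000)
outcome = (rng.random(5000) < truth).astype(int)
forecast = 0.5 + 0.7 * (truth - 0.5)          # pulled toward 0.5
for f_bar, freq, n in calibration_table(forecast, outcome):
    print(f"forecast ~{f_bar:.2f}  observed {freq:.2f}  (n={n})")
```

In the synthetic example the observed frequencies are more extreme than the forecasts, which is the S-shaped signature of underconfidence mentioned above.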
figure [ calibration - out](a ) and ( e ) serve as baselines by showing the calibration plots for sdlm .given that this model does not perform any explicit calibration , it is not surprising to see most points outside the critical values .the pattern in the deviations suggests strong underconfidence .furthermore , the inset histogram reveals drastic lack of sharpness .therefore , sac can be viewed as a well - performing compromise between sdlm and that avoids overconfidence without being too conservative .this section explores the bias among the five expertise groups in our data set .figure [ biases ] compares the posterior distributions of the individual elements of with side - by - side boxplots .given that the distributions fall completely below the _ no - bias _ reference line at 1.0 , all the expertise groups are deemed underconfident . even though the exact level of underconfidence is affected slightly by the extent to which the extreme probabilities are censored ( see section [ practicalmatters ] ) , the qualitative results in this section remain insensitive to different levels of censoring . for . ]figure [ biases ] shows that underconfidence decreases as expertise increases . the posterior probability that the most expert group is the least underconfident is approximately equal to , and the posterior probability of a strictly decreasing level of underconfidence is approximately 0.87 .the latter probability is driven down by the inseparability of the two groups with the lowest levels of self - reported expertise .this inseparability suggests that the experts are poor at assessing how little they know about a topic that is strange to them .if these groups are combined into a single group , the posterior probability of a strictly decreasing level of underconfidence is approximately 1.0.=1 the decreasing trend in underconfidence can be viewed as a process of bayesian updating .a completely ignorant expert aiming to minimize a reasonable loss function , such as the brier score , has no reason to give anything but 0.5 as his probability forecast .however , as soon as the expert gains some knowledge about the event , he produces an updated forecast that is a compromise between his initial forecast and the new information acquired .the updated forecast is therefore conservative and too close to 0.5 as long as the expert remains only partially informed about the event .if most experts fall somewhere on this spectrum between ignorance and full information , their average forecast tends to fall strictly between 0.5 and the most informed probability forecast [ see for more details ] .given that expertise is to a large extent determined by subject matter knowledge , the level of underconfidence can be expected to decrease as a function of the group s level of self - reported expertise .finding underconfidence in all the groups may seem like a surprising result given that many previous studies have shown that experts are often overconfident [ see , e.g. , for a summary of numerous calibration studies ] .it is , however , worth emphasizing three points : first , our result is a statement about groups of experts and hence does not invalidate the possibility of the individual experts being overconfident . to make conclusions at the individual level based on the group level bias termswould be considered an _ ecological inference fallacy _ [ see , e.g. 
, ] .second , the experts involved in our data set are overall very well calibrated [ ] .a group of well - calibrated experts , however , can produce an aggregate forecast that is underconfident .in fact , if the aggregate is linear , the group is necessarily underconfident [ see theorem 1 of ] .third , according to , the level of confidence depends on the way the data were analyzed .they explain that experts probability forecasts suggest underconfidence when the forecasts are averaged or presented as a function of independently defined objective probabilities , that is , the probabilities given by in our case .this is similar to our context and opposite to many empirical studies on confidence calibration .one advantage of our model arises from its ability to produce estimates of interpretable question - specific parameters , and .these quantities can be combined in many interesting ways to answer questions about different groups of experts or the questions themselves .for instance , being able to assess the difficulty of a question could lead to more principled ways of aggregating performance measures across questions or to novel insight on the kinds of questions that are found difficult by experts [ see , e.g. , a discussion on the _ hard - easy effect _ in ] . to illustrate , recall that higher values of suggest greater disagreement among the participating experts . given that experts are more likely to disagree over a difficult question than an easy one , it is reasonable to assume that has a positive relationship with question difficulty .an alternative measure is given by that quantifies the volatility of the underlying circumstances that ultimately decide the outcome of the event .therefore , a high value of can cause the outcome of the event to appear unstable and difficult to predict . as a final illustration of our model, we return to the two example questions introduced in figure [ exampleplotsfinal ] . given that and for the questions depicted in figure [ exampleplotsfinal](a ) and [ exampleplotsfinal](b ) , respectively , the first question provokes more disagreement among the experts than the second one .intuitively this makes sense because the target event in figure [ exampleplotsfinal](a ) is determined by several conditions that may change radically from one day to the next while the target event in figure [ exampleplotsfinal](b ) is determined by a relatively steady stock market index .therefore , it is not surprising to find that in figure [ exampleplotsfinal](a ) , which is much higher than in figure [ exampleplotsfinal](b ) .we may conclude that the first question is inherently more difficult than the second one .this paper began by introducing a rather unorthodox but nonetheless realistic time - series setting where probability forecasts are made very infrequently by a heterogeneous group of experts .the resulting data is too sparse to be modeled well with standard time - series methods . in response to this lack of appropriate modeling procedures, we propose an interpretable time - series model that incorporates self - reported expertise to capture a sharp and well - calibrated estimate of the crowd belief .this procedure extends the forecasting literature into an under - explored area of probability aggregation .our model preserves parsimony while addressing the main challenges in modeling sparse probability forecasting data .therefore , it can be viewed as a basis for many future extensions . 
to give some ideas , recall that most of the model parameters were assumed constant over time .it is intuitively reasonable , however , that these parameters behave differently during different time intervals of the question .for instance , the level of disagreement ( represented by in our model ) among the experts can be expected to decrease toward the final time point when the question resolves .this hypothesis could be explored by letting evolve dynamically as a function of the previous term and random noise .this paper modeled the bias separately within each expertise group .this is by no means restricted to the study of bias or its relation to self - reported expertise .different parameter dependencies could be constructed based on many other expert characteristics , such as gender , education or specialty , to produce a range of novel insights on the forecasting behavior of experts. it would also be useful to know how expert characteristics interact with question types , such as economic , domestic or international .the results would be of interest to the decision - maker who could use the information as a basis for hiring only a high - performing subset of the available experts .other future directions could remove some of the obvious limitations of our model .for instance , recall that the random components are assumed to follow a normal distribution .this is a strong assumption that may not always be justified .logit probabilities , however , have been modeled with the normal distribution before [ see , e.g. , ] .furthermore , the normal distribution is a rather standard assumption in psychological models [ see , e.g. , signal - detection theory in ] .a second limitation resides in the assumption that both the observed and hidden processes are expected to grow linearly .this assumption could be relaxed , for instance , by adding higher order terms to the model .a more complex model , however , is likely to sacrifice interpretability . given that our model can detect very intricate patterns in the crowd belief ( see figure [ exampleplotsfinal ] ) ,compromising interpretability for the sake of facilitating nonlinear growth is hardly necessary .a third limitation appears in an online setting where new forecasts are received at a fast rate . given that our model is fit in a retrospective fashion, it is necessary to refit the model every time a new forecast becomes available .therefore , our model can be applied only to offline aggregation and online problems that tolerate some delay .a more scalable and efficient alternative would be to develop an aggregator that operates recursively on streams of forecasts .such a _ filtering _ perspective would offer an aggregator that estimates the current crowd belief accurately without having to refit the entire model each time a new forecast arrives . unfortunately , this typically implies being less accurate in estimating the model parameters such as the bias term .however , as estimation of the model parameters was addressed in this paper , designing a filter for probability forecasts seems like the next natural development in time - series probability aggregation .the u.s . government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation thereon . 
disclaimer : the views and conclusions expressed herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements , either expressed or implied , of iarpa , doi / nbc or the u.s . government .
|
most subjective probability aggregation procedures use a single probability judgment from each expert, even though it is common for experts studying real problems to update their probability estimates over time. this paper advances into unexplored areas of probability aggregation by considering a dynamic context in which experts can update their beliefs at random intervals. the updates occur very infrequently, resulting in a sparse data set that cannot be modeled by standard time-series procedures. in response to the lack of appropriate methodology, this paper presents a hierarchical model that takes into account the expert's level of self-reported expertise and produces aggregate probabilities that are sharp and well calibrated both in- and out-of-sample. the model is demonstrated on a real-world data set that includes over 2300 experts making multiple probability forecasts over two years on different subsets of 166 international political events.
|
air quality and climate change are influenced by the fluxes of green house gases , reactive gas emissions and aerosols in the atmosphere .the ability to quantify variable , yet hardly observable emission rates is a key problem to be solved for the analysis of atmospheric systems , and typically addressed by elaborate and costly field campaigns or permanently operational observation networks . the temporal evolution of chemistry in the atmosphere is usually modelled by atmospheric chemistry transport models .optimal simulations are based on techniques of combining numerical models with observations . in meteorological forecast models , where initial values are insufficiently well known , while exerting a high influence on the model evolution , this procedure is termed data assimilation ( ) .there is no doubt that the optimization of the initial state is always of great importance for the improvement of predictive skill .however , especially for chemistry transport or greenhouse gas models with high dependence on the emissions in the troposphere , the optimization of initial state is no longer the only issue .the lack of ability to observe and estimate surface emission fluxes directly with necessary accuracy is a major roadblock , hampering the progress in predictive skills of climate and atmospheric chemistry models . in order to obtain the best linear unbiased estimation ( blue ) from the model with observations ,efforts of optimization included the emission rates by spatio - temporal data assimilation have been made .the first full chemical implementation of the 4d - variational method for atmospheric chemistry initial values is introduced in .further , elbern et al .( ) took the strong constraint of the diurnal profile shape of emission rates such that their amplitudes and initial values are the only uncertainty to be optimized and then implemented it by 4d - variational inversion .this strong constraint approach is reasonable because the diurnal evolution of emissions are typically much better known than the absolute amount of daily emissions .moreover , several data assimilation strategies were designed to adjust ozone initial conditions and emission rates separately or jointly in .bocquet et al .introduced a straightforward extension of the iterative ensemble kalman smoother in . in many cases , the better estimations of both the initial state and emission ratesare not always sustained based on appropriate observational network configurations when using popular data assimilation methods , such as 4d - variation and kalman filter and smoother .it may hamper the optimization by unbalanced weights between the initial state and emission rates , which can , in practice , even result in degraded simulations beyond the time intervall with available observations . the ability to evaluate the suitability of an observational network to control chemical states and emission rates for its optimised designis the a key qualification , which needs to be adressed . 
singular value decomposition ( svd ) can help identifying the priorities of observations by detecting the fastest growing uncertainties .the targeted observations problem is an important topic in the field of numerical weather prediction .singular vector analysis based on svd was firstly introduced to numerical weather prediction by lorenz ( ) , who applied it to analyse the largest error growth rates in an idealised atmospheric model .because of the high cost of computation , the singular vector analysis was not widely applied until 1980s .later the method of singular vector analysis of states of the meteorological model with high dimension was feasible ( ) .in atmospheric chemistry , studies about the importance of observations are still sparse .khattatov et al .( ) firstly analysed the uncertainty of a chemical compositions .liao et al .( ) focused on the optimal placement of observation locations of the chemical transport model . however , singular vector analysis for atmospheric chemistry with emissions is different since emissions play an similarly important role in forecast accuracy with initial values .goris and elbern ( ) recently used the singular vector decomposition to determine the sensitivity of the chemical composition to emissions and initial values for a variety of chemical scenarios and integration length .hence , in this paper , applying the kalman filter and smoother as the desirable data assimilation method we introduce an approach to identify the sensitivities of a network to optimize emission rates and initial values independently and balanced prior to any data assimilation procedure . through singular value decomposition and ensemble kalman filter and smoother, the computational cost of this approach can be reduced so that it is feasible in practice .then , by the equivalence between 4d variation and kalman filter for linear models , the approach is also feasible for the data assimilation of adjoint models via 4d variational techniques .this paper is organized as follows . in section [ model description ] , we describe the atmospheric transport - diffusion model with emission rates first and then reconstruct the state vector such that the emission rates are included dynamically . in section [ efficiency ] , the theoretical approach derives in order to determine the efficiency of observations or observational network configurations before running any data assimilation procedure . in section [ ensemble efficiency ] , based on the theoretical analysis in section [ efficiency ] ,we discuss the ensemble approach to evaluate the efficiency of observation configurations and present elementary examples . in section[ sensitivity ] , we present the approach to identify the sensitivity of observations by determining the directions of maximum perturbation growth to the initial perturbation . 
in the appendices ,the above approaches are generalised to continuous - time systems for comprehensive applications .the chemical tendency equation including emission rates , propagating forward in time , is usually described by the following atmospheric transport model where is a nonlinear model operator , and are the state vector of chemical constituents and emission rates at time , respectively .the a priori estimate of the state vector of concentrations is given and denoted by , termed background state .the a priori estimate of emission rates are usually taken from emission inventories , denoted by .let be the tangent linear operator of , the evolution of the perturbation of states and follows the tangent linear model with as where is the perturbation evolving from the perturbation of initial state of chemical state and emission rates .after discretizing the tangent linear model in space , let be the evolution operator or resolvent generated by .it is straightforward to obtain the linear solution of with continuous time as where , , the dimension of the partial phase space of concentrations and emission rates .obviously , .in addition , let be the observation configuration of and define where is a nonlinear forward observation operator mapping the model space to the observation space . then by linearising the nonlinear operator as , the linearised model equivalents of observation configurations can be presented as where , the dimension of the phase space of observation configurations at time . is the observation error at time of the gaussian distribution with zero mean and variance .it is feasible to apply the kalman filter and smoother into the model without any extension if the emission rates are accurate , which implies the initial state of concentration is the only parameter to be optimized .however , if the emission rates are poorly known , they should be combined into the state vector so that both of them can be updated by a smoother application . to establish the model with a new combination of the initial state and emissions ,let us rewrite the background of emission rates into the dynamic form where is a -dimensional vector of which the element is denoted by and is the diagonal matrix defined as since emission rates follow the diurnal variation , by taking the diurnal profile of emission rates as a constraint , the amplitude of emission rates can be estimated by constant emission factors ( ) .we reconstruct the dynamic model of emission rate perturbation as then , can be written as hence , we obtain the extended model with emission rates typically , there is no direct observation for emissions .therefore , we reconstruct the observation mapping as \left(\begin{array}{c } \delta c(t)\\ \delta e(t ) \end{array } \right)+\nu(t),\ ] ] where is a matrix with zero elements .it is clear now that both concentrations and emission rates are included into the state vector of the block extended model , such that the kalman smoother in a fixed time interval ] . in our case of emission rates , the estimation of by kalman smoother on ] . 
by the linear property of conditional expectation , \}]\\&=&e[m_e(t , s)e(s)\vert \{y(t),t\in [ t_0,t_n]\}]=m_e(t , s)e[e(s)\vert \{y(t ) , t\in [ t_0,t_n]\}],\end{aligned}\ ] ] which implies the dynamic model of emission ratessatisfies the constraint of the diurnal shape of emission rates if ] to the following abstract linear system : where is the state variable , is the observation vector at time , the model error and the observation error , follow gaussian distribution with zero mean and and are their covariance matrices respectively .denote the estimation of based on by , termed as the analysis estimation , the estimation of based on by , termed as forecast estimation .correspondingly , and are the analysis error and forecast error covariance matrices of and respectively . for convenience, the main results of the discrete - time kalman filter can be summarised as follows : + ( 1 ) analysis step : ( 2 ) forecasting step : where for any matrix , is the adjoint of and is the inverse of .denote the first guess of initial variance as and select and to be symmetric and positive definite. then we can rewrite and by the matrix inverse lemma ( ) , we have further , assume the model error , which is usually unknown , is negligible .then , we obtain hence , by the deduction based on and , we have define ] , where , and are the perturbations of the concentration , the emission rate and deposition rate of a species respectively . and are constants and is a differentiable function of height .assume , the numerical solution is based on the symmetric operator splitting technique ( ) with the following operator sequence where and are transport operators in horizontal directions , is the diffusion operator in vertical direction . the parameters of emission and deposition rates are included in .the lax - wendroff algorithm is chosen as the discretization method for horizontal advection with .the vertical diffusion is discretized by crank - nicolson discretisation with the thomas algorithm as solver .the horizontal domain is \times [ 0,14] ] with .so the number of the grid points , where , , .in addition , we choose , a continuous function in time , to formulate the temporal background evolution profile shape of the emission rate as where is the initial value of emission rate . with the same assumptions of and grid points in the 3d domain , the discrete dynamic model of emission rates is where are the coordinates of grid points and for expository reasons the background assumption of is denoted by , which is kept fixed . according to the discretization of the phase space, we always assume there is only one fixed observation configuration in this example .it indicates that the observation operator mapping the state space to the observation space is a time - invariant matrix. set 500 ( the ensemble number ) samplings for the initial concentration and emission rate respectively by pseudo independent random numbers and make the states correlated by moving average technique ._ advection test : _ for the advection test ( fig . 1 to fig . 6 ), we assume the model with a weak diffusion process and there is one single observation configuration of the concentration in the lowest layer at each time step , denoted by obs - cfg of conc in figures .besides , the emission source is assumed mainly from the location shown by the blue point in figures , named emss - source. 
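Before turning to the individual tests, the following is a minimal sketch of the ensemble set-up just described: pseudo-random perturbations of the initial concentrations and of the emission factors are generated on the model grid and made spatially correlated by a moving-average smoothing. The grid dimensions, ensemble size, smoothing window and perturbation amplitudes are assumptions for illustration; the exact correlation model of the experiments is not reproduced.

```python
# Sketch of the ensemble generation described above: independent pseudo-random
# perturbations for initial concentrations and emission factors on the grid,
# made spatially correlated by a moving-average smoothing. The window length,
# grid size and amplitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_ens = 500                        # ensemble size as in the example
nx, ny, nz = 15, 15, 11            # assumed numbers of grid points

def moving_average_field(shape, window=3):
    """White noise smoothed along each horizontal axis -> correlated field."""
    field = rng.normal(size=shape)
    kernel = np.ones(window) / window
    for axis in (0, 1):            # smooth the two horizontal directions
        field = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, field)
    return field

# ensembles of perturbations: concentrations on the 3-D grid,
# emission factors on the surface layer only (an assumption)
dc0 = np.stack([moving_average_field((nx, ny, nz)) for _ in range(n_ens)])
de0 = np.stack([moving_average_field((nx, ny)) for _ in range(n_ens)])

print(dc0.shape, de0.shape)        # (500, 15, 15, 11) and (500, 15, 15)
```

Propagating such an ensemble through the transport model and the Kalman filter recursion summarized above then yields the ensemble relative improvements discussed in the following tests.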
if we set the data assimilation window to and the wind is from southwest , the left - side subplot in fig .1 shows the estimation of the concentration is probably improved at the field around the observation under the small assimilation window .meanwhile , though the right - side subplot in fig .1 shows hardly improvement of the emission rate , we can see from the first line of table 1 is feasible only for the concentration , for the simple reason that the single observation configuration can not detect the emission within the corresponding assimilation window .if we consider the same case as fig . 1 , but now extending the data assimilation window to , fig .2 shows the field where the concentration is potentially improved is enlarged since the states are more correlated with the extension of assimilation window and the estimation of the emission surrounding the emission source is improved , compared to the fig . 1 .the quantitative balance between the concentration and the emission is shown by the relative improvement ratios in the second line of table 1 .if we further extend the data assimilation window to , it is clear to see from fig . 3 and the third line of table 1 that the states are more correlated such that more areas can be analysed and improved by the single observation configuration . meanwhile , the improvement of the emission is dominant with increasing time .fig . 4 to fig . 6show the relative improvements of the concentration and emission rate , when the model domain is under a northeasterly wind regime , and assimilation windows of with the data assimilation windows , and respectively .it is easy to imagine that with northeasterly winds , whatever the duration of the assimilation window is , the emission is not detectable and improveable by the single observation configuration .this hypothesis is successfully tested by our approach , the results of which are clearly visible in fig . 4 to fig . 6 and table 2_ emission signal test : _ the purpose of emission signal test ( fig . 7 and fig . 8) is to show the approach is also sensitive to the different background profile of the emission rate evolution .hence , the only distinction between the situations in fig . 7 and fig . 8 is the background profile of the emission rate during the assimilation window . actually , fig .8 is the same case as fig .thus , the result of the approach is clearly shown in table 3 that the strong emission signal or the distinct variation of the emission rate during the data assimilation window is significant to the model to recognize the source of the changes of the concentration and improve the estimation of the states ._ diffusion test : _ the diffusion test ( fig .9 and fig .10 ) aims to test the approach via comparing the ensemble relative improvements of the concentration and the emission rate of the model with a weak diffusion process and a strong diffusion process . for the case in fig .9 , all assumptions are same with the situation in fig . 2 except that the single observation configuration is at the top layer instead . the only difference of the assumptions between fig .9 and fig .10 is that in fig .9 and in fig . 10 .comparing fig . 2 with fig . 9 , it is obvious that the different observation location influence on the distribution of the relative improvements of the concentration greatly . from table 5 , the total improvement value of the concentration in the lowest layer for fig . 2is shown to be larger than the one for fig .9 . 
besides, it can be seen in table 4 that the observation configuration in the top layer can not detect the emission with such weak diffusion under the assimilation window . if we compare fig . 9 with fig .10 , it is shown in table 5 that both the total improvement value of the concentration in the lowest layer for fig .10 and the weight of the emission rate increase , which implies that the observation configuration is more efficient to detect the emission and improve the estimation of the state of the model with the strong diffusion in fig . 10 . [ cols="<,^,>",options="header " , ]from the above discussion , we can determine the efficiency of the observation network by evaluating the improvement of estimation of initial state and emission rates separately , before we run the data assimilation by kalman filer and smoother .however , it does not provide the information about the improved configurations of observations which can help improving the estimations . in this section ,independent of any concrete data assimilation method , we will introduce the singular vector approach to identify the sensitive directions of observations to the initial state and emission rates .consider the generalized discrete - time linear system : where , is any estimate of .assume the observation mapping is accurate , which implies the data is the only source of observation errors , we have define the magnitude of the perturbation of the initial state by the norm in the state space with respect to a positive definite matrix similarly , we define the magnitude of the related observations perturbation in the time interval ] follow gaussian distribution with zero mean , while and are their covariance matrices respectively .as in section [ efficiency ] , we ignore the model error . it is well known that for the continuous kalman filter , the covariance of the optimal estimation of the state at time satisfies the integral riccati equation where and .on one hand , on the other hand , (s)r(s)k^t(s)m_k^t(t , s)ds\\ & = & \int_{t_0}^tm(t , s)k(s)r(s)k^t(s)m_k^t(t , s)ds\\ & & -\int_{t_0}^t\int_{0}^\eta m(t,\eta)k(\eta)h(\eta)m_k(\eta , s)k(s)r(s)k^t(s)m_k^t(t , s)dsd\eta\\ & = & \int_{t_0}^tm(t , s)k(s)r(s)k^t(s)m_k^t(t , s)ds\\ & & -\int_{t_0}^t\int_{0}^s m(t , s)k(s)h(s)m_k(s,\eta)k(\eta)r(\eta)k^t(\eta)m_k^t(t,\eta)d\eta ds.\end{aligned}\ ] ] therefore , . since ^{-1}(t , t_0)\\ & = & m_k^{-1}(t , t_0)-\int_{t_0}^tm_k^{-1}(s , t_0)l(s)h(s)m(t , s)ds,\end{aligned}\ ] ] we obtain .define ] as its adjoint operator is ; \mathds r^m).\ ] ] further , we define ; \mathds r^m)\rightarrow\mathcal l^{2}([t_0 , t_n ] ; \mathds r^m)$ ] , ; \mathds r^m).\ ] ] thus , where is the observability gramian of continuous - time systems . obviously , has the same pattern as , so where is the singular value decomposition of . then , following the same steps as in section [ efficiency ] , we can obtain the efficiency of observation configurations for continuous - time systems .consider the generalized continuous - time linear system : with the corresponding forecast perturbation of observations evolving from 99 , _ singular vectors and estimates of the analysis - error covariance metric _, q. j. roy .soc . , 124 , pp . 16951713 , 1998 . ,_ joint state and parameter estimation with an iterative ensemble kalman smoother _ , nonlinear proc geoph . ,20 , pp . 803818 , 2013 . , localization of optimal perturbations using a projection operator , quartsoc . , 120 , pp . 16471681 , 1994 ., _ targeting observations using singular vectors _, j. 
atmos .sci , vol 56 , pp .29652985 , 1999 . , _ current status and future developments of the ecmwf ensemble prediction system _ , meteorol .appl . 7 , pp . 163175 , 2000 . ,_ the singular - vector structure of the atmospheric global circulation _ , j. atom .9 , pp . 14341456 , 1995 . , _ estimation , control , and the discrete kalman filter _ , springer - verlag , 1989 . ,_ atmospheric data analysis _ , cambridge university press , 1991 . , _ a four dimensional variational chemistry data assimilation scheme for eulerian chemistry transport modeling _ , j. geophys, 104 , 18 583598 , 1999 . , _ 4d - variational data assimilation with an adjoint air quality model for emission analysis _, environ modell softw , vol .15 , pp . 539548 , 2000 . , _ emission rate and chemical state estimation by 4-dimension variational inversion _ , atmos, vol . 7 , pp . 37493769 , 2007 ., _ data assimilation : the ensemble kalman filter , 2th edition _ , springer , 2009 ., _ applied optimal estimation _ ,cambridge , mass .: m.i.t . press , 1974 . ,_ singular vector decomposition for sensitivity analyses of tropospheric chemical scenarios _ , atmos . chem ., vol . 13 , pp . 50635087 , 2013 . , _observability , eigenvalues and kalman filtering _ , ieee tran .aero . and electronic sys .aes-19 , no . 2 , 1983 . ,_ optimization of a certain quality of complete controllability and observability for linear dynamic systems _ , trans .asme , vol .91 , series d , pp . 228238,1969 . ,_ assimilation of photochemically active species and a case analysis of uars data _ , j. geophys .22 , pp . 1871518738 , 1999 . ,_ singular vector analysis for atmospheric chemical transport models _ , month .weather rev .134 , pp . 24432465 , 2006 ., _ a new approach to linear filtering and prediction problems _ , j. basic engineering , pp . 3545 , mar ., _ new results in linear filtering and prediction theory _ , j. basic engineering , pp . 95108 ,1961 . , _ a study of the predictability of a 28 variable atmospheric model , _ tellus , vol .17 , pp . 321333 , 1965 . ,_ multivariate statistical methods _ , new york : mcgraw - hill , 1967 . , _ improvement of ozone forecast over beijing based on ensemble kalman filter with simultaneous adjustment of initial conditions and emissions _ , atmos11 , pp . 1290112916 , 2011 . ,_ inverting modified matrices _ , memorandum rept42 , statistical research group , princeton university , princeton , nj , 1950 ., _ the method of fractional steps : solution of problems of mathematical physics in several variables _ , springer , 1971 .
|
the controllability of advection-diffusion systems subject to uncertain initial values and emission rates is estimated, given sparse and error-affected observations of prognostic state variables. in predictive geophysical model systems, such as atmospheric chemistry simulations, several distinct parameter families influence the temporal evolution of the system, which renders initial-value-only optimisation by traditional data assimilation methods insufficient. in this paper, a quantitative method is introduced for assessing how well a given measurement configuration can optimise initial values and emission rates, and for balancing the two. in this theoretical approach, the kalman filter and smoother and their ensemble-based versions are combined with a singular value decomposition to evaluate the potential improvement associated with specific observational network configurations. further, the same singular vector analysis of the efficiency of observations identifies their sensitivity to the model control by determining the direction and strength of maximum perturbation over a finite time interval. * keywords : * atmospheric transport model , emission rate optimisation , observability , observational network configuration , singular value decomposition , data assimilation
|
knowledge of optical properties of clouds and aerosols is important in a wide range of scientific problems , from atmospheric and climate science to astronomical observations across wavelength bands .clouds are reflecting and absorbing radiation form the sun , thus regulating the intake of the solar energy by the earth .study of scattering and absorption of light by clouds is , therefore , a key element for understanding of the physics of the earth atmosphere .aerosols work as condensation centres for formation of cloud water droplets and ice crystals .understanding of relation between clouds and aerosols is one of the main challenges of atmospheric science .probes of the properties of clouds and aerosols are done using in situ measurements and remote sensing techniques including imaging from space or from the ground , observations of transmitted light from the sun or moon and sounding of the clouds with radiation beams .light detection and ranging ( lidar ) sounding techniques ( fig .[ fig : principle ] ) probe vertical structure of clouds and aerosols via timing of backscatter signal from a laser beam .presence of clouds perturbs astronomical observations in the very - high - energy ( vhe ) ( photons with energies 0.1 - 10 tev ) band and operation of cosmic ray ( cr ) experiments which use the earth atmosphere as a giant high - energy particle detector .imaging atmospheric cherenkov telescope ( iact ) arrays , as well as air fluorescence telescopes for detection of ultra - high - energy crs detect cosmic high - energy particles via imaging of cherenkov and fluorescence emission from the particle extensive air showers ( eas ) , initiated by the primary cosmic particles .information on the presence and properties of the clouds and aerosols is essential for the proper interpretation of the data collected in this way .gamma - ray / cr observations affected even by optically thin clouds are normally excluded from data sets , because the properties of the clouds are not known sufficiently well to allow correction for the effects of scattering of light by the atmospheric features . herewe show that cherenkov light produced by the eas could be used as a tool for remote sensing of the atmosphere .we show that this tool allows characterisation of three - dimensional cloud / aerosol coverage above the observation site and provides information on physical properties of cloud and aerosol particles .crs of energy are hitting the atmosphere from all directions at a rate ^{-\gamma_{cr}+1}(\mbox { m}^2\mbox { s sr})^{-1} ] khz , where we have assumed that the energy threshold scales approximately as the telescope aperture .each particular eas triggers the readout system of the telescope if it produces more than photon counts in the telescope camera .the statistics of cherenkov signal from all the eas accumulated each second is rather high , ^{2(\gamma_{cr}-1)} ] we could identify the clear sky altitude ranges in which for some , which could be found simultaneously with the clear sky intervals via fitting of the observed vertical profile .the extent of the cloud corresponds to the altitude range at which .identification of the altitude of the deviation of from constant provides a measurement of . 
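as a rough illustration of how the optical depth of a layer follows from the suppression of the signal received from above it, relative to the clear-sky profile, one can write a short sketch; the single-transmission assumption, the photon counts and the zenith-angle factor are assumptions introduced here for illustration, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def cloud_optical_depth(obs_above, obs_below, clear_above, clear_below, cos_zenith=1.0):
    # ratio of observed-to-clear-sky normalisations above and below the layer;
    # a thin layer traversed once attenuates the upper part by exp(-tau / cos_zenith)
    ratio = (obs_above / clear_above) / (obs_below / clear_below)
    return -cos_zenith * np.log(ratio)

tau_true = 0.3                                   # assumed layer optical depth
clear_below, clear_above = 1.0e6, 8.0e5          # assumed clear-sky photon counts
obs_below = rng.poisson(clear_below)
obs_above = rng.poisson(clear_above * np.exp(-tau_true))
print(cloud_optical_depth(obs_above, obs_below, clear_above, clear_below))  # ~0.3
```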
vertical resolution of the measurement is determined by the angular resolution of the telescope and the typical distance of the telescope from the eas footprints on the ground , : .optical depth of the feature could be measured from the below and above the feature : / \left[sr(h < h_{cl})\right] ] .the right panel of fig .[ fig : cloudy_sky ] shows the minimal detectable optical depth of a cloud as a function of the cloud altitude .it is determined by the condition that this distortion of the vertical profile of cherenkov light above and below the cloud should be stronger than random fluctuations of the clear sky profile .measurement of the optical depth is done via comparison of the overall normalisations of the vertical profiles above and below the cloud .the accuracy of the measurement of both normalisations is determined by the statistics of the signal from above and below the cloud , , .the error of the measurement of both normalisations is , .the error of the measurement of the ratio of normalisations is .the right panel of fig .2 shows the minimal detectable optical depth of the cloud , derived from the condition .the minimal detectable optical depth is shown as a function of the altitude of the cloud for different telescope configurations .we have taken the telescopes with parameters close to those of existing and planned iact facilities , such as hess ( telescopes with 12 m dishes and fov ) , magic ( 17 m telescopes with fov ) and the telescopes of the small - size telescopes ( sst ) sub - array of the planned cta facility ( 4 m dishes , fov ) .longer exposure time should allow detection of the clouds with lower optical depth and also characterisation of clouds with a more complicated vertical structure ( compared to a single layer geometrically thin cloud considered in our model example ) .the improvement of the minimal measurable with exposure time would , however , stop as soon as the level of variations of the vertical profile induced by the presence of the cloud will become comparable to the level of the systematic uncertainties of the knowledge of the instrument characteristics and their time variability .the scattered light signal is responsible for the enhancement of emission from the altitude of the cloud , visible in the left panel of fig .[ fig : cloudy_sky ] .measurement of the strength of the scattered light peak as a function of the distance provides a probe of the scattering amplitude as a function of or , in other words , of the scattering phase function of the cloud particles . 
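the scaling of the minimal detectable optical depth with the accumulated statistics can be sketched as follows; the detection threshold and the photon rates are placeholders, the only point being that the poisson error of the ratio of the two normalisations decreases as the inverse square root of the exposure.

```python
import numpy as np

def tau_min(n_above, n_below, n_sigma=1.0):
    # poisson-limited minimal optical depth detectable from the ratio of the
    # profile normalisations above and below the candidate layer
    return n_sigma * np.sqrt(1.0 / n_above + 1.0 / n_below)

rate_above, rate_below = 2.0e4, 5.0e4        # assumed photon rates per second in the camera
for t_exp in (1.0, 10.0, 100.0):             # exposure in seconds
    print(t_exp, tau_min(rate_above * t_exp, rate_below * t_exp))
```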
the spectrum of cherenkov emission ^ 2v^2)\right)$ ] ( is the refraction index of the air , is particle velocity ) is a continuum stretching through uv and visible bands .cherenkov light provides a `` white light '' source in the atmosphere .the use of such white light for remote sensing has certain advantages compared to typically mono - wavelength light used in lidars .namely , the broad range of the cherenkov light opens a possibility for a measurement of the optical depth as a function of wavelength .this provides a tool for the measurement of sizes of aerosol particles , since the scattering / extinction cross - section depends on the size parameter .modifications of existing iacts are needed for such a study , because currently existing systems measure only the intensity of cherenkov light integrated over a single spectral window of light sensors .in this paper we have proposed a novel approach for the remote sounding of the atmosphere using the uv cherenkov light generated by the cosmic ray induced eas throughout the atmospheric volume .this approach allows detection of atmospheric features , such as cloud and aerosol layers , and characterisation of their geometrical and optical properties .noticing an analogy between the uv light pulse produced by the eas and the pulse of the laser light , commonly used in the lidar devices , we demonstrated that the principles of the measurement of the properties of clouds and aerosols based on the imaging and timing of the eas signal are very similar to those used by the lidar .in fact , the equations ( [ eq : shower_eq1 ] ) , ( [ eq : shower_eq2 ] ) are the direct analogs of the well - known `` lidar equation '' commonly used in the analysis and interpretation of the lidar data .there are , however , important differences between the eas and laser light pulses , which make the new approach based on the eas light complementary to the lidar approach .most importantly , the cherenkov light is continuously `` regenerated '' all along the eas track from the top to the bottom of the atmosphere , while the laser light is generated once in a single location ( e.g. at the ground level for the ground based lidar ) .another important difference is that the cherenkov light has a continuum spectrum spanning through the visible and uv bands , while the laser light of the lidars is mono - wavelength .we have shown that the difference in the properties of the light used by the lidars and by the proposed eas + cherenkov telescope setup potentially provides new possibilities for the measurement of physical characteristics of the cloud / aerosol particles , such as e.g. 
size distribution and the scattering phase function .thus , the proposed technique is expected to provide data useful in the context of atmospheric physics .existing iact systems use a range of atmospheric monitoring tools to characterise weather conditions at their observation sites , including infrared / visible cameras and conventional lidars .the atmospheric monitoring data are collected with the aim to control the quality of the astronomical gamma - ray data , which are the data on the uv cherenkov emission from the eas induced by gamma - rays coming from high - energy astronomical sources .we have demonstrated that the iacts themselves could serve as powerful atmospheric monitoring tools , providing the atmospheric data complementary to those of the lidars and visible / infrared cameras .the atmospheric sounding data could be partially extracted from the background cosmic ray data of observations by existing iacts .their collection does not require interruptions of the planned astronomical observation schedule .moreover , the atmospheric data could be collected in cloudy sky conditions when astronomical observations are difficult or impossible .availability of detailed simultaneous atmospheric sounding data should allow a better control of the quality of the astronomical data taken by existing iacts , e.g. via a better definition of the `` clear sky '' conditions .besides , this should also open a possibility for observations in a borderline situation of the presence of moderately optically thin clouds and aerosols .99 international panel on climate change , _ climate change 2013 : the physical science basis _ ( 2013 ) .m. beniston , p. casals , m. sarazin , _ theor .climatol . _ * 73 * , 133 ( 2002 ) .l.l . font , et al .of 33rd international cosmic ray conference _ ,0090 ( 2013 ) .chaves , et al . , _ proc . of 33rd international cosmic ray conference _ ,1122 ( 2013 ) .stephens , _ j.climate_ , * 18 * , 237 ( 2005 ) . j. haywood , o. boucher , _ rev . geophys . _ * 38 * 513 ( 2000 ) . g.l .stephens , c.d .kummerow , _j.atmos.sci._ , * 64 * , 3742 ( 2007 ) .king , y.j .kaufman , w.p .menzel , d. tanre , _ ieee trans . on geroscience and remotesensing _ , * 30 * , 2 ( 1992 ) .h. bovensmann et al . , _ j.atmos.sci_ , * 56 * , 127 ( 1999 ) .winker , m.a .vaughan , a. omar , y. hu , k.a .powell , _ j. atmos .oceanic technol ._ , * 26 * , 2310 ( 2009 ) .j. beringer et al .( particle data group ) , _ phys ._ * d86 * , 010001 ( 2012 ) .aharonian , _ very - high - energy gamma radiation _ , world scientific ( 2004 ) .grieder , _ extensive air showers _ , springer , ( 2010 ) .a.m. hillas , _ j.phys ._ , * 8 * , 1475 ( 1982 ) .t.k.gaisser , a.m.hillas , _ proc .15th icrc _ ( plovdiv ) , * 8 * , 353 ( 1977 ) .f. nerling , j. bluemer , r. engel , m. risse , _ astropart.phys._ , * 24 * , 421 ( 2006 ) . y. takahashi , _ proc .29th international cosmic ray conference _ , pune ,i d 101 ( 2005 ) .longair , _ high - energy astrophysics _ , third edition , cambridge univ .press ( 2011 ) .j. kasparian , et al . , _science _ , * 301 * , 61 ( 2003 ) .f. bohren , d.r .huffman , _ absorption and scattering of light by small particles _ , wiley , ( 2004 ) .
|
remote sensing of the atmosphere is conventionally done by studying the extinction and scattering of light from natural ( sun, moon ) or artificial ( laser ) sources. cherenkov emission from extensive air showers generated by cosmic rays provides one more natural light source, distributed throughout the atmosphere. we show that cherenkov light carries information on the three-dimensional distribution of clouds and aerosols in the atmosphere and on the size distribution and scattering phase function of cloud and aerosol particles, and could therefore be used for atmospheric sounding. the new sounding method could be implemented through an adjustment of the technique of imaging cherenkov telescopes. the atmospheric data collected in this way could be used both for atmospheric science and to improve the quality of astronomical gamma-ray observations.
|
in recent years a lot of effort has been devoted to measure gravitomagnetic effects due to earth s rotation , , predicted by the theory of general relativity ( gr ). in particular , the lense - thirring effect on the orbital motion of a test body can be measured by using the satellite laser ranging ( slr ) technique , whose data are provided by the ilrs . by analyzing the laser ranging data of the orbits of the satellites lageos and lageos ii , a measurement of the lense - thirring effect was obtained by ciufolini and pavlis .slr missions can also be useful to test modifications of gr , such as torsion theories of gravity .a class of theories allowing the presence of torsion is based on riemann - cartan spacetime , which is endowed with a metric and a compatible connection .the resulting connection turns out to be nonsymmetric , and therefore it originates a non - vanishing torsion tensor .we refer to , for the details . in standard torsion theoriesthe source of torsion is considered to be the intrinsic spin of matter , , , , which is negligible when averaged over a macroscopic body .therefore spacetime torsion would be observationally negligible in the solar system . nevertheless , in mao , tegmark, guth and cabi ( mtgc ) argue that the presence of detectable torsion in the solar system should be tested experimentally , rather than derived by means of a specific torsion model . for this reason , in a theory - independent framework based on symmetry arguments is developed , and it is determined by a set of seven parameters describing torsion and three further parameters describing the metric . here , by theory - independent framework , we mean the following : the metric and the connection are parametrized , around a massive body , with the help of symmetry arguments , without reference to a torsion model based on a specific lagrangian ( or even on specific field equations ) . this parametrized framework can be used to constrain from solar system experiments . in particular, mtgc suggest that gpb is an appropriate experiment for this task , and in they compute precessions of gyroscopes and put constraints on torsion parameters from gpb measurements . in hehl and obukhovargue that measuring torsion requires intrinsic spin , and criticize the approach of mtgc , since gpb gyroscopes do not carry uncompensated elementary particle spin .nevertheless , we accept the general idea that the precise form of the coupling of torsion to matter should be tested experimentally , and that actual experimental knowledge leaves room for nonstandard torsion theories which could yield detectable torsion signals in the solar system . in the present paperwe apply the parametrized framework developed by mtgc for the computation of satellites orbits around earth and we put a different set of constraints on torsion parameters from slr measurements .mtgc also address the question of whether there exists a specific gravitational lagrangian fitting in the parametrized framework and yielding a torsion signal detectable by the gpb experiment . 
as an example they quote the theory of hayashi and shirafuji ( hs ) in where a massive body generates a torsion field , and they propose what they call the einstein - hayashi - shirafuji ( ehs ) lagrangian , interpolating gr and hs lagrangians in a linear way .however , mtgc consider only a gravitational lagrangian in vacuum , so that they can not derive the equations of motion of test bodies from the gravitational field equations , which would require a suitable matter coupling .the ehs model has been criticized by various authors . in the paper , flanagan and rosenthalshow that the linearized ehs theory becomes consistent only if the coefficients in the lagrangian are constrained in such a way that the resulting predictions coincide with those of gr . in the paper ,puetzfeld and obukhov derive the equations of motion in the framework of metric - affine gravity theories , which includes the hs theory , and show that only test bodies with microstructure ( such as spin ) can couple to torsion . in conclusion , the ehs theory does not yield a torsion signal detectable for gpb . for these reasons , in the ehs lagrangian is proposed not as a viable physical model , but as a pedagogical toy model fitting in the parametrized framework , and giving an illustration of the constraints that can be imposed on torsion by the gpb experiment . in the present paperwe will not consider such a toy model .as also remarked by flanagan and rosenthal in , the failure of constructing the specific ehs lagrangian does not rule out the possibility that there may exist other torsion theories which could be usefully constrained by solar system experiments .such torsion models should fit in the above mentioned theory - independent framework , similarly to a parametrized post - newtonian framework including torsion .we remark that the parametrized formalism of mtgc does not take into account the intrinsic spin of matter as a possible source of torsion , and in this sense it can not be a general torsion framework .however , it is adequate for the description of torsion around macroscopic massive bodies in the solar system , like planets , being the intrinsic spin negligible when averaged over such bodies. therefore we think it is worthwhile to continue the investigation of observable effects in the solar system of nonstandard torsion models within the mtgc parametrized formalism , under suitable working assumptions .in particular , our aim is to extend the gpb gyroscopes computations made in to the case of motion of satellites . in the present paperwe compute the corrections to the orbital lense - thirring effect due to the presence of spacetime torsion described by .we consider the motion of a test body in the gravitational field of a rotating axisymmetric massive body , under the assumption of slow motion of the test body .since we use a parametrized framework without specifying the coupling of torsion to matter , we can not derive the equations of motion of test bodies from the gravitational field equations .therefore , in order to compute effects of torsion on the orbits of satellites , we will work out the implications of the assumption that the trajectory of a test body is either an extremal or an autoparallel curve .such trajectories do not need to coincide when torsion is present .as in the original paper of lense and thirring , we characterize the motion using the six orbital elements of the osculating ellipse . 
in terms of these orbital elements ,the equations of motion then reduce to the lagrangian planetary equations .we calculate the secular variations of the longitude of the node and of the longitude of the pericenter .the computed secular variations show how the corrections to the orbital lense - thirring effect depend on the torsion parameters , and it turns out that the dependence is only through .the data from the lageos satellites are then used to constrain the relevant linear combinations of the torsion parameters .more precisely , we constrain two different linear combinations of by using first the measurements of the nodes of lageos and lageos ii , and then the measurements of the nodes of lageos and lageos ii and of the perigee of lageos ii . in particular , torsion parameters can not be constrained by satellite experiments in the case of extremal trajectories .while the torsion perturbations to the lense - thirring effect depend only on , it turns out that another relevant relativistic effect , namely the geodetic precession ( or de sitter effect ) , depends on the parameters and , and on a further parameter .this latter parameter is involved in a higher order parametrization of torsion , which is necessary for the description of the geodetic precession effect , while it is not necessary at the order of accuracy required in the present paper .all computations of orbital geodetic precession with torsion of a satellite are performed in the companion paper , to which we will sometimes refer for details .the paper is organized as follows . in section [ sec : spator ] we briefly recall the notion of spacetime with torsion . in section [ sec : geodet ] we discuss the case of extremal trajectories . in section[ sec : autopa ] we analyze the equations of autoparallel trajectories and derive the related system of ordinary differential equations to first order .the expression of the system clearly reveals the perturbation due to torsion with respect to the lense - thirring equations . in section [ sec : comput ] we derive the time evolution of the orbital elements , by applying the classical perturbation theory of celestial mechanics , in particular the gauss form of the lagrange planetary equations . in section[ sec : correz ] we calculate the secular variations of the orbital elements . in section [ sec : desitter ] we recall some results from where torsion solar perturbations are computed .these results will be used in section [ sec : simone ] , where we give the observational constraints that the lageos experiment can place on torsion parameters .conclusions are drawn in section [ sec : conclus ] .for convenience of the reader , in the appendix ( section [ sec : append ] ) we recall from how to parametrize the metric and torsion tensors , and hence how to parametrize the connection , under suitable symmetry assumptions .a manifold equipped with a lorentzian metric and a connection compatible with the metric is called a riemann - cartan spacetime , .compatibility means that , where denotes the covariant derivative .we recall in particular that for any vector field the connection is determined uniquely by and by the torsion tensor as follows : where is the levi - civita connection , defined by and is the contortion tensor . 
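since the inline formulas were lost in the text extraction, we recall the standard form of the decomposition quoted above; this is a textbook identity, written with explicit conventions rather than in the notation of the original paper.

```latex
% torsion and lowered-index connection
T^{\lambda}{}_{\mu\nu}\equiv\Gamma^{\lambda}{}_{\mu\nu}-\Gamma^{\lambda}{}_{\nu\mu},\qquad
\Gamma_{\rho\mu\nu}\equiv g_{\rho\lambda}\Gamma^{\lambda}{}_{\mu\nu},\qquad
T_{\rho\mu\nu}\equiv g_{\rho\lambda}T^{\lambda}{}_{\mu\nu},
% metric compatibility then gives
\Gamma_{\rho\mu\nu}
=\underbrace{\tfrac{1}{2}\bigl(\partial_{\mu}g_{\nu\rho}+\partial_{\nu}g_{\rho\mu}-\partial_{\rho}g_{\mu\nu}\bigr)}_{\text{levi-civita part}}
+\underbrace{\tfrac{1}{2}\bigl(T_{\rho\mu\nu}-T_{\nu\mu\rho}-T_{\mu\nu\rho}\bigr)}_{\text{contortion }K_{\rho\mu\nu}}.
```

in particular, the part of the contortion that is symmetric in the last two indices vanishes when the torsion is totally antisymmetric, so that the symmetric part of the connection then reduces to the christoffel symbols; this is the fact used below when stating that autoparallels and extremals coincide for totally antisymmetric torsion.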
in the particular casewhen is symmetric with respect to the torsion tensor vanishes .we will be concerned here with the case of nonsymmetric connections .the case of vanishing torsion tensor corresponds to riemann spacetime of gr , while the case of vanishing riemann tensor corresponds to the weitzenbck spacetime . in the present paperwe use the natural gravitational units and .we will assume that earth can be approximated as a uniformly rotating spherical object of mass and angular momentum .following , we use spherical coordinates for a satellite moving in the gravitational field of earth , and we introduce the dimensionless parameters and . since the radii of the lageos orbits ( about 6000 km altitude ) are much larger than earth s schwarzschild radius , it follows that .moreover , since earth is slowly rotating , we have .therefore , all computations will be carried out perturbatively to first order in and . under spherical axisymmetry assumptions ,the metric tensor and the torsion tensor have been parametrized to first order in .accordingly , is parametrized by three parameters , , , and is parametrized by seven parameters , therefore becomes an explicit function of all metric and torsion parameters .it turns out that contribute to geodetic precession , while contribute to the frame - dragging precession . in the appendixwe report the explicit expressions of the parametrized metric and torsion tensors , and of the connection , that will be needed in the sequel of the paper .in gr structureless test bodies move along geodesics . in a riemann - cartan spacetimethere are two different classes of curves , autoparallel and extremal curves , respectively , which reduce to the geodesics of riemann spacetime when torsion is zero .autoparallels are curves along which the velocity vector is transported parallel to itself by the connection .extremals are curves of extremal length with respect to the metric . the velocity vector is transported parallel to itself along extremal curves by the levi - civita connection . in grthe two types of trajectories coincide while , in general , they may differ in presence of torsion .they are identical when the torsion is totally antisymmetric , a condition which is not satisfied within our parametrization .the equations of motion of bodies in the gravitational field follow from the field equations due to the bianchi identities .the method of papapetrou can be used to derive the equations of motion of a test body with internal structure , such as for instance a small extended object that may have either rotational angular momentum or net spin . in standard torsion theoriesthe trajectories of test bodies with internal structure , in general , are neither autoparallels nor extremals , , , while structureless test bodies , such as spinless test particles , follow extremal trajectories .the precise form of the equations of motion of bodies in the gravitational field depends on the way the matter couples to the metric and the torsion in the lagrangian ( or in the gravitational field equations ) .as explained in the introduction , we do not specify a coupling of torsion to matter , hence we do not specify the field equations . moreover , in our computations of orbits of a satellite ( considered as a test body ) , we will neglect its internal structure . 
in a theory - independent framework we can not derive the equations of motion from the gravitational field equations , hence we need some working assumptions on the trajectories of structureless test bodies : we will investigate the consequences of the assumption that the trajectories are either extremal or autoparallel curves . assuming the trajectory to be an extremal is natural and consistent with standard torsion theories .however , extremals depend only on the parameters of the metric , so that new predictions related to torsion can not arise . we will quickly report the computations for the sake of completeness , since the metric parameters can be immediately related to the parametrized post newtonian ( ppn ) parameters ( see ) , and the orbital lense - thirring effect in the case of extremal trajectories and a ppn metric is known .the system of equations of extremal trajectories reads as where is the proper time . for slow motion of the satellite we can make the substitution , so that for .we assume that the velocity of the satellite is small enough so that we can neglect the quadratic terms in the velocity .then , being we have for .all perturbations considered here are so small that can be superposed linearly . since we are only interested in the perturbations due to earth s rotation , as in the original lense - thirring paper we are allowed to neglect the quadratic terms in the velocities which yield an advance of the perigee of the satellite .the value of the advance of the perigee for an extremal orbit and a ppn metric can be found in ( * ? ? ? * chapter 7 , formula ( 7.54 ) ) .we use for spherical coordinates .the levi - civita connection can be obtained from the expression of given in the appendix by setting to zero all torsion parameters .substituting the resulting expression in one gets the equations of motions depend neither on the metric parameter nor on the torsion parameters .system to lowest order becomes where is the unit vector in the radial direction . imposing the newtonian limit yields as in a ppn metric( see also ( * ? ? ? * formula ( 23 ) ) ) .we now transform in rectangular coordinates , , .we compute the second derivatives of with respect to time in the approximation of slow motion . neglecting all terms containing squares and products of first derivatives with respect to , we get using and we obtain the following system for the equations of motion : , \\ \\\displaystyle \ddot{y}= -{\frac { \em } { { r}^{2 } } } y + \displaystyle \gg{\frac{\em\ea}{r^3 } } \big [ \left({x}^{2}+{y}^{2}-2{z}^{2 } \right)\dot{x}+ 3 xz\dot{z } \big ] , \\ \\\displaystyle \ddot{z}= -{\frac { \em } { { r}^{2 } } } z+\displaystyle \gg{\frac{\em\ea}{r^3}}3z\left(y\dot{x}-x\dot{y}\right ) .\end{array}\right.\ ] ] note that when system reduces to the equations of motion found by the lense - thirring ( * ? ? ?* formula ( 15 ) ) .hence the relativistic perturbation of the newtonian force is just multiplied by the factor with respect to the original lense - thirring equations .it follows that the formulae of precession of the orbital elements of a satellite can be obtained by multiplying the original lense - thirring formulae ( * ? ? ?* formula ( 17 ) ) by the factor .the details of the computation , based on the lagrange planetary equations of celestial mechanics , can be also retrieved from the computations for autoparallel trajectories given in the next sections , by setting to zero all torsion parameters . 
using the standard astronomical notation , we denote by the longitude of the node and by the argument of the perigee of the satellite s orbit .the secular contributions to the variations of and are : where is the semimajor axis of the satellite s orbit , is the eccentricity , is the orbital inclination , and is time .when the quantities in reduce to the classical corresponding lense - thirring ones .since the expressions of and depend only on , the measurements of satellites experiments can not be used to constrain the torsion parameters .in standard torsion theories the trajectories of structureless test bodies follow extremal trajectories , , which depend only on the metric .however , new predictions related to torsion may arise when considering the autoparallel trajectories . in the followingwe give some motivations which make worthwhile the investigation of autoparallel trajectories .since in spacetime with torsion parallelograms are in general not closed , but exhibit a closure failure proportional to the torsion , kleinert and pelster argue in that the variational procedure in the action principle for the motion of structureless test bodies must be modified . in the standard variational procedure for finding the extrema of the action , paths are varied keeping the endpoints fixed in such a way that variations form closed paths . however, in the formalism of , the closure failure makes the variation at the final point nonzero , and this gives rise to a force due to torsion . when this argument is applied to the action principle for structureless test bodies it turns out that the resulting torsion force changes extremal trajectories to autoparallel ones ( see for the details ) .kleinert and shabanov find an analogous result in where they show that the geometry of spacetime with torsion can be induced by embedding its curves in a euclidean space without torsion .kleinert et al . also argue in , that autoparallel trajectories are consistent with the principle of inertia , since a structureless test body will change its direction in a minimal way at each time , so that the trajectory is as straight as possible . the approach of kleinert et al .has been criticized by hehl and obukhov in since the equations of autoparallel trajectories have not been derived from the energy - momentum conservation laws .kleinert investigates this issue in and finds that , due to the closure failure , the energy - momentum tensor of spinless point particles satisfies a different conservation law with respect to the one satisfied in torsion theories such as , .the resulting conservation law yields autoparallel trajectories for spinless test particles .kleinert then addresses the question of whether this new conservation law allows for the construction of an extension of einstein field equations to spacetime with torsion .the author gives an answer for the case of torsion derived from a scalar potential ( see for a discussion of this kind of torsion ) . in this casethe autoparallel trajectories are derived from the gravitational field equations via the bianchi identities , though the field equation for the scalar field , which is the potential of torsion , is unknown . 
in dereli and tuckershow that the theory of brans - dicke can be reformulated as a field theory on a spacetime with dynamic torsion determined by the gradient of the brans - dicke scalar field .then in they suggest that the autoparallel trajectory of a spinless test particle in such a torsion geometry is a possibility that has to be taken into account . in autoparallel trajectories of massive spinless test particles are analyzed in the background of a spherically symmetric , static solution to the brans - dicke theory and the results are applied to the computations of the orbit of mercury . in the autoparallel trajectories of spinless particles are analyzed in the background of a kerr brans - dicke geometry . in , the equations of autoparallel trajectories are derived from the gravitational field equations and bianchi identities , in the special case of matter modeled as a pressureless fluid , and torsion expressed solely in terms of the gradient of the brans - dicke scalar field .the above quoted results show that there is an interest in the autoparallels in spacetime with torsion , which make worthwhile their investigation in the present paper .the system of equations of autoparallels reads as where is the proper time .observe that only the symmetric part of the connection enters in ; moreover , starting from the totally antisymmetric part of can not be measured .the trajectory of a test body has to be a time - like curve . since the connection is compatible with the metric , the quantity is conserved by parallel transport .the tangent vector to the trajectory undergoes parallel transport by the connection along the autoparallel .therefore , an autoparallel that is time - like at one point has this same orientation everywhere , so that the trajectory is strictly contained in the light cone determined by , in a neighbourhood of every of its points .hence the compatibility of the connection with the metric ensures that autoparallels fulfil a necessary requirement for causality . for slow motion of the satellite we can make the substitution , so that for .again , we assume that the velocity of the satellite is small enough so that we can neglect the terms which are quadratic in the velocity. then , being we have for .as in the previous section , all the perturbations that we are considering here are so small that can be superposed linearly .we are allowed to neglect the quadratic terms in the velocities which yield an advance of the perigee of the satellite .such an advance of the perigee for an autoparallel orbit in presence of torsion has been computed in .we use for spherical coordinates . substituting in the expression of given in the appendix one gets where note that equations of motions do not depend on the metric parameter and on the torsion parameter .moreover , the dependence on and appears only through their difference .system to lowest order becomes where is the unit vector in the radial direction . imposing the newtonian limit it follows that ( see also ( * ? ? ?* formula ( 23 ) ) ) since the newtonian limit fixes the value of , the equations of autoparallels depend only on the parameters ( called frame - dragging torsion parameters in ) .therefore the precession of satellite s orbital elements will depend only on such torsion parameters , as it has been found in for gyroscopes . 
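for reference, since the inline equations were stripped in extraction, the two families of curves compared above take the standard forms below, with the affine parameter along the curve; this is textbook material, not a transcription of the original equations.

```latex
\text{extremal:}\quad
\frac{d^{2}x^{\lambda}}{ds^{2}}
+\left\{{}^{\lambda}_{\ \mu\nu}\right\}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}=0,
\qquad
\text{autoparallel:}\quad
\frac{d^{2}x^{\lambda}}{ds^{2}}
+\Gamma^{\lambda}{}_{(\mu\nu)}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}=0 .
```

since the symmetric part of the full connection equals the christoffel symbols plus the symmetric part of the contortion, the two systems differ precisely by that contortion term, which is the source of the torsion corrections worked out below.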
using and we obtain the following system for the equations of motion : , \\ \\\displaystyle \ddot{y}= -{\frac { \em } { { r}^{2 } } } y + \displaystyle { \frac{\em\ea}{r^3 } } \big[-(\dd+\aa ) xy \dot{y}+ \left(-\aa{x}^{2}+\dd{y}^{2}-\bb{z}^{2 } \right)\dot{x}- \left ( \aa-\bb\right ) xz\dot{z } \big ] , \\ \\\displaystyle \ddot{z}= -{\frac { \em } { { r}^{2 } } } z+\displaystyle { \frac{\em\ea}{r^3}}(\dd+\bb)z\left(y\dot{x}-x\dot{y}\right ) .\end{array}\right.\ ] ] note that in case of no torsion ( i.e. for any ) and when system reduces to the equations of motion found by the lense - thirring ( * ? ? ?* formula ( 15 ) ) .the system expressing the motion along autoparallel trajectories can be written in the form where is the perturbation with respect to the newton force , , \\ \\\perty= \displaystyle { \frac{ma}{r^5 } } \big[-(\dd+\aa ) xy \dot{y}+ \left(-\aa{x}^{2}+\dd{y}^{2}-\bb{z}^{2 } \right)\dot{x}- \left ( \aa-\bb\right ) xz\dot{z } \big ] , \\\\ \pertz= \displaystyle { \frac{ma}{r^5}}(\dd+\bb)z\left(y\dot{x}-x\dot{y}\right ) .\end{array}\right.\ ] ] we use the standard coordinates transformation , used in celestial mechanics where is the orbital inclination , is the longitude of the node , and is the argument of latitude .the vector can be decomposed in the standard way along three mutually orthogonal axes as here is the component along the instantaneous radius vector , is the component perpendicular to the instantaneous radius vector in the direction of motion , and is the component normal to the osculating plane of the orbit ( colinear with the angular momentum vector ) .then , substituting into gives note that in case of no torsion and when formulae reduce to the components found by lense - thirring ( see equations ( 16 ) in ) .let us now recall , that , using the method of variation of constants , where is the semimajor axis of the satellite s orbit , is the eccentricity , is the true anomaly , and , the period of revolution . following the standard astronomical notation , we let be the argument of the perigee , and be the longitude of the perigee .we also recall the following planetary equations of lagrange in the gauss form ( * ? ? ?* ch . 6 , sec .6 ) : , \\ \\\displaystyle \frac{de}{dt}= \frac{(1-e^2)^{1/2}}{na } \left [ s\sin v + t\left(e+{r+a\over a}\cos v \right ) \right ] , \\ \\\displaystyle \frac{di}{dt}= \frac{1}{na^2 ( 1-e^2)^{1/2 } } ~ w r \cos u , \\ \\\displaystyle \frac{d\omega}{dt}= \frac{1}{na^2 ( 1-e^2)^{1/2 } \sin i } ~ w r \sin u , \\ \\\displaystyle \frac{d\omt}{dt}= \frac{(1-e^2)^{1/2}}{nae } \left [ -s\cos v + t\left(1+{r\over a(1-e^2 ) } \right)\sin v \right]+ 2\sin^2 \frac{i}{2 } ~{d\omega\over dt } , \\ \\l_0}{dt}=- \frac{2}{na^2 } ~ s r + { e^2\over 1+(1-e^2)^{1/2 } } ~ \frac{d\omt}{dt}+2 ( 1-e^2)^{1/2 } \sin^2 \frac{i}{2 } ~ { d\omega\over dt } , \end{array } \right.\ ] ] where is the longitude at epoch , and is the time of periapsis passage . using the expressions of , and given by and integrating the lagrange planetary equations we compute the variations of the orbital elements .according to perturbation theory , we regard the orbital elements as approximately constant in the computation of such integrals . 
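a short numerical sketch of the orbit averaging used below may be useful; it implements only the nodal equation of the gauss set quoted above, together with the relation between time and true anomaly, and the sample out-of-plane acceleration and the lageos-like elements are assumptions chosen for illustration.

```python
import numpy as np

def averaged_node_rate(W_of_u_r, a, e, inc, n, omega_peri=0.0, steps=4000):
    # <dOmega/dt> over one revolution, from dOmega/dt = r*W*sin(u) / (n a^2 sqrt(1-e^2) sin i)
    # and dt = r^2 dv / (n a^2 sqrt(1-e^2)); the time average is (1/2pi) * integral over v
    p = a * (1.0 - e**2)
    v = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    r = p / (1.0 + e * np.cos(v))
    u = omega_peri + v
    dOmega_dt = r * W_of_u_r(u, r) * np.sin(u) / (n * a**2 * np.sqrt(1.0 - e**2) * np.sin(inc))
    weight = r**2 / (a**2 * np.sqrt(1.0 - e**2))   # equals n * dt/dv
    return np.mean(dOmega_dt * weight)

# purely illustrative: constant out-of-plane acceleration of 1e-11 m/s^2
GM_earth = 3.986004e14
a, e, inc = 12.27e6, 0.0045, np.radians(109.8)     # assumed lageos-like elements
n = np.sqrt(GM_earth / a**3)                       # mean motion
print(averaged_node_rate(lambda u, r: 1.0e-11, a, e, inc, n))   # rad/s
```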
since , we can make use of the approximation inserting - into yields , \\ \\\displaystyle \frac{di}{dt}&= & \displaystyle { j\sin i \cos u\over na^3(1-e^2)^{3/2 } } \big [ e \sin v \cos u\aa\dot{v}-\sin u ( 1+e\cos v ) \bb\dot{u } \big ] , \\\\ \displaystyle \frac{d\omega}{dt}&= & \displaystyle { j \sin u \over na^3(1-e^2)^{3/2 } } \big [ e \sin v \cos u \aa\dot{v}-\sin u(1+e\cos v)\bb\dot{u } \big ] , \\ \\\displaystyle \frac{d\omt}{dt}&= & \displaystyle { j\cos i\over na^3 e(1-e^2)^{3/2 } } \big [ ( 1+e\cos v)^2\cos v\dd\dot{u}- e \sin^2 v ( 2+e\cos v)\aa\dot{v } \big ] + 2\sin^2 \frac{i}{2 } ~ { d\omega\over dt } , \\ \\ \displaystyle \frac{d l_0}{dt}&= & \displaystyle { 2j\cos i\over na^3(1-e^2)}(1+e\cos v ) \dd\dot{u } \displaystyle + { e^2\over 1 + ( 1-e^2)^{1/2 } } ~ { d\omt\over dt } + 2 ( 1-e^2)^{1/2 } \sin^2 \frac{i}{2 } ~{d\omega\over dt}. \end{array}\right.\ ] ] recalling , we now integrate with respect to .therefore we find for the variations of the orbital elements : , \\ \\ \\\incr i&=&{j\sin i\over 12na^3(1-e^2)^{3/2 } } \bigg [ 4(\aa+2\bb)e\cos v\cos^2 u-4(\bb+2\aa ) e\cos v \\ \\ & & + 2(\bb+2\aa)e\sin v\sin(2u)+3\bb\cos(2u ) \bigg ] , \\ \\\omega&= & { j\over 6na^3(1-e^2)^{3/2 } } \bigg\ { -3\bb v+{3\bb\over 2}\sin(2u ) \\ & & + e\big [ 2(\aa-\bb)\sin v+ ( \aa+2\bb)\sin(2u)\cos v-2 ( 2\aa+\bb ) \sin v\cos^2 u \big ] \bigg\},\end{aligned}\ ] ] + ( \dd-\aa)ev\bigg\ } + 2\sin^2 \frac{i}{2}~ \incr \omega , \\\\ \incr l_0&=&{2 j\cos i\over na^3 ( 1-e^2)}\dd\left(v+e\sin v\right ) + { e^2\over 1 + ( 1-e^2)^{1/2 } } ~ \incr \omt+2 ( 1-e^2)^{1/2 } \sin^2 \frac{i}{2 } ~\incr \omega.\end{aligned}\ ] ] we note that the contributions of the components and to the derivative are proportional to and , respectively , with the same proportionality constant . using the approximation it turns out that in the classical lense - thirring case , where the torsion parameters vanish and , there is a cancellation of such contributions in such a way that vanishes .conversely , in presence of torsion , if the eccentricity of the orbit is nonzero , the contributions of the radial and of the tangential component of the perturbative force differ , so that does not vanish , yielding a periodic perturbation of the semimajor axis of the satellite s orbit .we observe that only periodic terms appear in , and .secular terms appear in , and . since ,the secular contributions to the variations of the corresponding orbital elements are : t , \\\\( \incr l_0)_{\rm sec } & = & \displaystyle \frac{j}{a^3(1-e^2 ) } \bigg\ { 2 \dd + { e^2\over 1+(1-e^2)^{1/2 } } ~ { 1\over ( 1-e^2)^{1/2 } } \left [ \dd-\aa-(\bb+2\dd-2\aa)\sin^2 \frac{i}{2 } \right ] \\\\ & & - ( \bb + 4 \dd ) \displaystyle \sin^2 \frac{i}{2 } \bigg\ } ~t . \end{array } \right.\ ] ] in the absence of torsion and when , it turns out that , as found by lense - thirring . using we rewrite . for the nodal ratewe obtain and for the longitudinal rate of the perigee ~t.\ ] ] since , for the rate of the argument of the perigee we find ~t.\ ] ] the parameters measure deviations from gr . 
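as a numerical check of the no-torsion limit just mentioned, the classical lense-thirring secular rates of the node and of the argument of perigee can be evaluated for lageos-like orbits; the orbital elements and the value of the earth's angular momentum used below are approximate inputs assumed here, not numbers quoted from the text.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8
J_earth = 5.86e33                  # earth's angular momentum, kg m^2/s (approximate)
RAD_S_TO_MAS_YR = (180.0 / np.pi) * 3600.0e3 * 365.25 * 86400.0

def lense_thirring_rates(a, e, inc):
    # classical (zero-torsion) secular rates: node = 2GJ/(c^2 a^3 (1-e^2)^{3/2}),
    # argument of perigee = -6GJ cos(i)/(c^2 a^3 (1-e^2)^{3/2})
    f = G * J_earth / (c**2 * a**3 * (1.0 - e**2)**1.5)
    return 2.0 * f * RAD_S_TO_MAS_YR, -6.0 * f * np.cos(inc) * RAD_S_TO_MAS_YR

print(lense_thirring_rates(12.270e6, 0.0045, np.radians(109.84)))  # node ~ 31 mas/yr (lageos-like)
print(lense_thirring_rates(12.163e6, 0.0135, np.radians(52.64)))   # node ~ 31 mas/yr, perigee ~ -57 mas/yr (lageos ii-like)
```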
indeed , when there is no torsion we have for .when , in addition , the metric is the weak field approximation of a kerr - like metric , and and we get the classical lense - thirring formulae .we also give the expression for the rate of the longitude at epoch , namely \sin^2 \frac{i}{2 } \bigg\}~t , \nonumber\end{aligned}\ ] ] where note that do not depend on .the secular perturbations of the orbital elements computed in the previous sections are not the only torsion induced perturbations that are expected . indeed , a further contribution due to solar perturbation is present , namely the geodetic precession in presence of torsion .the corresponding perturbations of the orbital elements have been computed in the companion paper and they depend only on the torsion parameters . since we are interested in putting constraints on the frame - dragging torsion parameters , there is a relevant difference between the case of gpb gyroscopes considered in and the present problem of orbits of satellites . in the average gyroscope precession rate is expressed as where is the angular momentum of the spinning gyroscope measured by an observer comoving with its center of mass , and the vector of the angular precession rate is a linear combination of ( the orbital angular velocity vector of the gyroscope ) and ( the rotational angular velocity vector of the earth around its axis ) . in coefficient of is a linear combination of the parameters , while the coefficient of is a linear combination of the parameters .since the gpb satellite has a polar orbit the vectors and are orthogonal .the contribution to the average precession due to is the geodetic precession of the gyroscope , while the contribution due to is frame - dragging , both in the presence of torsion .therefore , in the gpb experiment , when measuring the projections of the average precession rate of a gyroscope on the two corresponding orthogonal directions , it turns out that the linear combinations of the and of the torsion parameters can be constrained separately . on the other hand , in the case of orbital motion of satellites , in the presence of torsion the geodetic precession and the lense - thirring effect are superimposed as it happens in gr , in such a way that the precessions of the orbital elements are simultaneously influenced by both effects . in has been found that the contribution of geodetic precession depends on a linear combination of the torsion parameters , while the contribution of frame - dragging computed in the previous sections depends on a linear combination of the parameters .it turns out that the precession of orbital elements ( such as the node and the perigee ) both depend on and , in such a way that without a knowledge of the dependence of such precessions on , it is not possible to put constraints on the .the knowledge of the dependence on corresponds exactly to the knowledge of the geodetic precession of the orbital elements in presence of torsion . 
in grit is known that the geodetic precession is independent of the orbital elements of the satellites ( and therefore it is the same both for lageos and the moon ) .this property is used in gr in order to compute an upper bound to the uncertainty in modeling the geodetic precession , and in order to show that the result is negligible with respect to the uncertainty in the measurement of the lense - thirring effect ( see , supplementary discussion ) .such a result is important in order to extract the lense - thirring effect from lageos data , and it is achieved thanks to the precision of the measurement of geodetic precession by means of lunar laser ranging ( llr ) data . in section [ sec : simone ] we will show that the uncertainty in modeling the geodetic precession can be neglected also in presence of spacetime torsion . in particular , the upper bounds on the torsion parameters found in and recalled in the subsequent formula ( [ stimat2 ] ) will be useful in order to obtain such a conclusion .this is important in order to extract the lense - thirring effect from lageos data also in the presence of torsion , and that will allow us to constrain suitable linear combinations of the parameters separately . hence , in the following we briefly need to report the results obtained in .the geodetic precession of orbital elements of the satellite in the gravitational field of the earth and the sun ( both supposed to be nonrotating ) is computed , in a sun - centered reference system .it is shown that , to the required order of accuracy , the corresponding metric is described by a further parameter , where is the usual ppn parameter , and the parametrization of the torsion tensor involves a further parameter ( see for the details ) .the secular contributions to the precessions of the node and of the perigee due to torsion found in are the following : \right\ } ~t , \end{aligned}\ ] ] where here is the mass of the sun , is the revolution angular velocity of the earth around the sun , and is the distance of the earth from the sun . differently from the lense - thirring effect , the precessions depend on the torsion parameters and , and are independent of ; the parameter is identified using the newtonian limit .we recall that and enter the parametrization of torsion at the higher order of accuracy required in the computation of precessions .the perturbations have to be superimposed to the ones computed in section [ sec : correz ] .the first term on the right hand sides of the two formulas in can be interpreted as the geodetic precession effect , when torsion is present : accordingly we set in the ppn formalism we have using llr data and mercury radar ranging data respectively , the following upper bounds are given in ( * ? ? ?* section 13 ) : since for lageos satellites , we have taking into account the expression of , we have this formula yields the rate of geodetic precession around an axis which is normal to the ecliptic plane . 
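the numbers involved can be reproduced with a few lines; the constants are standard values, and the obliquity used for the projection discussed in the next paragraph is the usual 23.4 degrees, so this is a consistency check rather than a transcription of the paper's formulas.

```python
import numpy as np

GM_sun, c, AU = 1.32712e20, 2.998e8, 1.496e11
RAD_S_TO_MAS_YR = (180.0 / np.pi) * 3600.0e3 * 365.25 * 86400.0

n_earth = np.sqrt(GM_sun / AU**3)                        # earth's orbital mean motion
geodetic = 1.5 * GM_sun / (c**2 * AU) * n_earth          # rate about an axis normal to the ecliptic
print(geodetic * RAD_S_TO_MAS_YR)                                 # ~ 19.2 mas/yr
print(geodetic * np.cos(np.radians(23.44)) * RAD_S_TO_MAS_YR)     # ~ 17.6 mas/yr projected on earth's spin axis
```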
the projection of this precession rate on the axis of rotation of earth is obtained by multiplying by , where degrees is the angle between the earth s equatorial plane and the ecliptic plane : this gives the values of the geodetic precession in a earth - centered reference system .in this section we describe how the lageos data can be used to extract a limit on the torsion parameters .we will assume in the following that all metric parameters take the same form as in the ppn formalism , according to .recent limits on various components of the torsion tensor , obtained in a different torsion model based on the fact that background torsion may violate effective local lorentz invariance , have been obtained in .see also , where constraints on possible new spin - coupled interactions using a torsion pendulum are described . herewe discuss how frame dragging torsion parameters can be constrained by the measurement of a suitable linear combination of the nodal rates of the two lageos satellites .equation can be rewritten as where we have defined , similarly to and , a multiplicative torsion `` bias '' relative to the gr prediction as being the lense - thirring precession in gr .we recall that the values of such precessions are and for lageos and lageos ii , respectively , where mas / yr denotes milli - arcseconds per year .let us now consider the contribution of the geodetic precession to the nodal rate .we write the secular contribution to the nodal rate , in a earth - centered reference system , in the form where depends on .precisely , taking into account that and using , we have moreover , the following numerical constraints are set on ppn parameters and by cassini tracking and llr data , respectively : differs from 1 by a few part in . therefore , using , and we get where the subscripts and denote lageos and lageos ii , respectively . here the total nodal rate of a lageos satellite denotes the nodal rate due to all kinds of perturbations , both gravitational and nongravitational .the coefficient is chosen to make the linear combination independent of any contribution of the earth s quadrupole moment , which describes the earth s oblateness . in the residual (observed minus calculated ) nodal rates , of the lageos satellites are obtained analyzing nearly eleven years of laser ranging data .the residuals are then combined according to the linear combination , analogue to .the lense - thirring effect is set equal to zero in the calculated nodal rates . the linear combination of the residuals , after removal of the main periodic signals , is fitted with a secular trend which corresponds to 99% of the theoretical lense - thirring prediction of gr ( see , for the details ) : the total uncertainty of the measurement is of the value predicted by gr , , .this uncertainty is a total error budget that includes all estimated systematic errors due to gravitational and non - gravitational perturbations , and stochastic errors .such a result is quoted as a level estimate in , , though an explicit indication of this fact is missing in .eventually , the authors allow for a total uncertainty to include underestimated and unmodelled error sources . 
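the coefficient that cancels the quadrupole contribution can be estimated from the standard J2 nodal precession, which scales as cos i / ( a^{7/2} (1-e^2)^2 ); the orbital elements below are approximate assumptions, and the result is close to the value of about 0.545 commonly adopted in the published analyses.

```python
import numpy as np

def node_J2_coefficient(a, e, inc):
    # the nodal rate induced by J2 is proportional to cos(i) / (a^{7/2} (1-e^2)^2)
    return np.cos(inc) / (a**3.5 * (1.0 - e**2)**2)

lageos_1 = (12.270e6, 0.0045, np.radians(109.84))   # assumed approximate elements
lageos_2 = (12.163e6, 0.0135, np.radians(52.64))
kappa = -node_J2_coefficient(*lageos_1) / node_J2_coefficient(*lageos_2)
print(kappa)    # ~ 0.54: the J2 signal cancels in dOmega_I + kappa * dOmega_II
```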
in the followingwe assume a value of for the uncertainty of the measurement .using the upper bound , the uncertainty in modeling geodetic precession in the presence of torsion is \cos \epsilon \leq 0.0064\,\frac{27.2}{48.2 } \big [ ( \incr \omega_{{\rm i}})_{\rm sec}^{\rm gr } + \kappa ( \incr \omega_{{\rm ii}})_{\rm sec}^{\rm gr } \big],\ ] ] where \cos \epsilon=27.2 $ ] mas / yr is the contribution from geodetic precession predicted by gr for lageos satellites . compared to the % uncertainty in the measurement of the lense - thirring effect , the uncertainty in modeling geodetic precession can be neglected ( as in , ) even in the presence of spacetime torsion .this is a consequence of the torsion limits set with the moon and mercury in .then we can apply the results of , to our computations with torsion , and we obtain \big\vert < 0.10 \left [ ( \incr \omega_{{\rm i}})_{\rm sec}^{\rm gr } + \kappa ( \incr \omega_{{\rm ii}})_{\rm sec}^{\rm gr } \right],\ ] ] where and are given by .since the torsion bias does not depend on the orbital elements of the satellite , we have hence , using , we can constrain a linear combination of the frame - dragging torsion parameters , , setting the limit which is shown graphically in figure 1 , together with the other constraints on and .taking into account the numerical constraints the limit on torsion parameters from lageos becomes which implies the constraint ( [ richiamo ] ) on the torsion parameters depends on the quantitative assessment of the uncertainty of the measurement of the lense - thirring effect .however , the value 5 - 10% of the uncertainty reported in has been criticized by several authors .for example iorio argues in that the uncertainty might be 15 - 45% .the previous computations show that the upper bound on the quantity is given by the uncertainty of the measurement , so that one can find the constraint on the linear combination of the torsion parameters corresponding to a different value of the uncertainty .for instance , if the value of the uncertainty of the measurement is , the constraint on torsion parameters becomes one of the goals of the lageos , lageos ii , lares three - satellite experiment , together with improved earth s gravity field models of grace ( gravity recovery and climate experiment ) is to improve the experimental accuracy on the orbital lense - thirring effect to `` a few percent '' .we observe that , using the uncertainty in modeling geodetic precession in presence of torsion amounts to about of the lense - thirring effect , which is still a small contribution to a total root - square - sum error of a few percent .note that an improved determination of the geodetic precession has been recently achieved by gpb which , unlike lageos , is designed to separate the frame - dragging and geodetic precessions by measuring two different , orthogonal precessions of its gyroscopes .[ limitplot ] , ) and on frame - dragging torsion parameters ( ) from solar system tests .the grey area is the region excluded by lunar laser ranging and cassini tracking .the lageos nodes measurement of the lense - thirring effect , excludes values of outside the hatched region .general relativity corresponds to , and all torsion parameters = 0 ( black dot).,title="fig:",scaledwidth=80.0% ] in the case of gpb , the torsion bias for the precession of a gyroscope is this formula ( the analogue of the right hand side of equation ) involves a linear combination of all frame - dragging torsion parameters .such a linear combination can be 
constrained from gpb data .since lageos and gpb are sensitive to different linear combinations , together they can put more stringent torsion limits . after taking into account the contribution of the geodetic precession ,the combined constraints from gyroscope and orbital lense - thirring experiments are effective probes to search for the experimental signatures of spacetime torsion . in this sense ,lageos and gpb are to be considered complementary frame - dragging and , at the same time , torsion experiments , with the notable difference that gpb measures also the geodetic precession . in this sectionwe discuss how frame dragging torsion parameters can be constrained by the measurement of a linear combination of the nodal rates of lageos and lageos ii and the perigee rate of lageos ii .similarly to the previous section , we define a multiplicative torsion `` bias '' relative to the gr prediction also for the rate of the argument of the perigee : ,\ ] ] being the lense - thirring precession in gr : we recall that the value of this precession is for lageos ii . in the following , the torsion bias is referred to lageos ii . using the values of , and given in section [ sec : correz ] we find the measurement of the lense - thirring effect in is based on the following linear combination of the residuals of the nodes of lageos and lageos ii and of the perigee of lageos ii : where the coefficients and are chosen to make the linear combination independent of the first two even zonal harmonic coefficients and , and of their uncertainties . in the residualsare obtained analyzing four years of laser ranging data , and then combined according to the linear combination .the lense - thirring effect is set equal to zero in the calculated rates of the nodes and of the perigee .the linear combination of the residuals , after removal of the main periodic signals and of small observed inclination residuals , is fitted with a secular trend which corresponds to times the theoretical lense - thirring prediction of gr ( see for the details ) : the total uncertainty of the measurement found in is of the value predicted by gr .this uncertainty is a total error budget that includes all the estimated systematic errors due to gravitational and non - gravitational perturbations .such a result is quoted as a level estimate in , though an explicit indication of this fact is missing in .the contribution to the uncertainty of the measurement due to nongravitational perturbations , mainly thermal perturbative effects , on the perigee of lageos ii , amounts to of the value predicted by gr . in an estimate is confirmed , however the author , when considering more pessimistic assumptions on some thermal effects , estimates that the contribution of nongravitational perturbations to the total uncertainty does not exceed the of the gr value .here we will follow this more conservative estimate .inserting this value in the estimate of the total uncertainty computed in yields a total root - square - sum error of of the gr value . 
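The Lense-Thirring rate of the argument of perigee of LAGEOS II, whose numerical value is referred to above, can likewise be estimated from the standard expression domega/dt = -6 G J cos(i) / [c^2 a^3 (1-e^2)^{3/2}]. The inclination and the other orbital elements below are assumed nominal values, used only to recover the order of magnitude.

```python
# Rough evaluation of the Lense-Thirring perigee rate of LAGEOS II, using the
# standard formula domega/dt = -6*G*J*cos(i) / (c^2 * a^3 * (1-e^2)^(3/2)).
# Inclination ~52.65 deg, a, e and J are assumed nominal values.
import math

G, C, J_EARTH = 6.674e-11, 2.99792458e8, 5.86e33
MAS_PER_YR = (180.0 / math.pi) * 3.6e6 * 365.25 * 86400.0

a, e, inc = 12.163e6, 0.0135, math.radians(52.65)
perigee_lt = -6 * G * J_EARTH * math.cos(inc) / (C**2 * a**3 * (1 - e**2)**1.5) * MAS_PER_YR
print(f"{perigee_lt:.0f} mas/yr")    # roughly -57 mas/yr
```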
for reasons similar to the ones discussed in the previous section ,we are allowed to neglect the uncertainty in modeling the geodetic precession in presence of torsion .then we can apply the results of to our computations with torsion , and we obtain \big\vert \\ \\ & \qquad \qquad \qquad \qquad \qquad \qquad < 0.32 \left [ ( \incr \omega_{{\rm i}})_{\rm sec}^{\rm gr } + c_1(\incr \omega_{{\rm ii}})_{\rm sec}^{\rm gr } + c_2 ( \incr \omega_{{\rm ii}})_{\rm sec}^{\rm gr } \right ] .\end{aligned}\ ] ] a direct computation gives where inserting in the expressions of and given in , and taking into account that by formula , we obtain using the value of we finally deduce which is shown graphically in figure 2 , together with the other constraints on and . [ limitplot2 ] , ) and on frame - dragging torsion parameters ( ) from solar system tests .the grey area is the region excluded by lunar laser ranging and cassini tracking .the lageos nodes and perigee measurement of the lense - thirring effect , excludes values of outside the hatched region .general relativity corresponds to , and all torsion parameters = 0 ( black dot).,title="fig:",scaledwidth=80.0% ] the constraint on the linear combination of the frame - dragging parameters is rather weak , due to the uncertainty on the nongravitational perturbations .notice that the coefficients in front of and are of an order of magnitude smaller than the coefficients of the other parameters , so that the constraint on and is even looser .thermal thrusts ( tts ) are the main source of non - gravitational perturbations .one of the main drivers of lageos tts is the thermal relaxation time of its fused silica cube corner retroreflectors , which has been characterized in laboratory - simulated space conditions at the infn - lnf satellite / lunar laser ranging characterization facility ( scf ) , , .the measurements of lageos in a variety of thermal conditions provide the basis for possibly reducing the uncertainty on the thermal perturbative effects . as a consequence, the constraint could be improved .the constraint ( [ unsenepolepiu ] ) on the torsion parameters depends on the quantitative assessment of the uncertainty of the measurement of the lense - thirring effect .again , the value of the uncertainty reported in has been criticized by various authors . 
for example ries , eanes and tapleyargue in that the uncertainty is at best in the 50 - 100% range .the uncertainty of the measurement yields the upper bound on the right - hand side of the estimate ( [ sstima ] ) .hence , one can find the constraint on the linear combination of the torsion parameters corresponding to a different value of the uncertainty as it has been discussed in section [ sub : constnodes ] .we recall that in an upper bound on the combination is given .this constrains the torsion parameters within two parallel hyperplanes in a five - dimensional space .if we couple this bound with our two estimates and , we obtain that are constrained to lye in a five - dimensional set , which is unbounded only along two directions .hence , coupling gpb with slr measurements significantly reduces the degrees of freedom on the frame - dragging parameters .we conclude this section by observing that the recently approved juno mission to jupiter will make it possible , in principle , to attempt a measurement of the lense - thirring effect through the juno s node , which would be displaced by about 570 metres over the mission duration of one year .hence , such a mission yields an opportunity for a possible improvement of the costraints on torsion parameters .we have applied the framework recently developed in for gr with torsion , to the computation of the slow orbital motion of a satellite in the field generated by the earth .starting from the autoparallel trajectories , we computed the corrections to the classical orbital lense - thirring effect in the presence of torsion . by using perturbation theory ,we have found the explicit dependence of the secular variations of the longitudes of the node and of the perigee on the frame - dragging torsion parameters .the lageos nodes measurements , and the lageos nodes and perigee measurements , of the lense - thirring effect can be used to place constraints on torsion parameters , which are different and complementary to those set by gpb .under spherical axisymmetry assumptions , the metric tensor can be parametrized to first order as follows : dt^2 + \left[1 + \ff \frac{m}{r}\right ] dr^2 + r^2 ( d \theta^2 + \sin^2\theta~ d\phi^2 ) + 2 \gg \frac{j}{r } \sin^2\theta ~dt d\phi,\ ] ] where are three dimensionless parameters that can be immediately related to the parametrized post newtonian ( ppn ) parameters : here we follow the notation of the paper , instead of the ppn notation. this will be useful in section [ sec : desitter ] .the nonvanishing components of the torsion tensor are : the expression of the nonvanishing components of the connection approximated to first order in , and is the following : thank the university of roma `` tor vergata '' , cnr and infn for supporting this work .we thank i. ciufolini for suggesting this analysis after the publication of the paper by mtgc , and b. bertotti and a. riotto for useful advices .s. dellagnello et al . , in `` proceedings of the 16th international workshop on laser ranging '' ( 2008 ) , october 13 - 17 , poznan , poland , 121 .s. dellagnello et al . , adv .space res . , _ galileo special issue _ * 47 * , ( 2011 ) 822 - 842 .
|
we compute the corrections to the orbital Lense-Thirring effect (or frame-dragging) in the presence of spacetime torsion. We analyze the motion of a test body in the gravitational field of a rotating axisymmetric massive body, using the parametrized framework of Mao, Tegmark, Guth and Cabi. In the cases of autoparallel and extremal trajectories, we derive the specific approximate expression of the corresponding system of ordinary differential equations, which are then solved with methods of celestial mechanics. We calculate the secular variations of the longitudes of the node and of the pericenter. We also show how the Laser Geodynamics Satellites (LAGEOS) can be used to constrain torsion parameters. We report the experimental constraints obtained using both the nodes and perigee measurements of the orbital Lense-Thirring effect. This makes LAGEOS and Gravity Probe B (GPB) complementary frame-dragging and torsion experiments, since they constrain three different combinations of torsion parameters. _Keywords_: Riemann-Cartan spacetime, torsion, autoparallel trajectories, frame dragging, geodetic precession, satellite laser ranging, Gravity Probe B.
|
modern real - world infrastructures can be modeled as a system of several interdependent networks .for example , a power grid and the communication network that executes control over its power stations constitute a system of two interdependent networks .power stations depend on communication networks to function , and communication networks can not function without electricity .there have been several recent attempts to model these systems .one of these is based on a model of mutual percolation ( momp ) in which a node in each network can function only if ( 1 ) it receives a crucial commodity from support nodes in other networks and ( 2 ) it belongs to the giant component ( gc ) formed by other functional nodes in its own network . if the nodes within each network of the system are randomly connected , and the support links connecting the nodes in different networks are also random , then the momp for an arbitrary network of networks ( non ) can be solved analytically using the framework of generating functions , which allows to map the stochastic model into node percolation .it turns out that a non is significantly more vulnerable than a single network with the same degree distribution . in regular percolation of a single network ,the size of the gc gradually approaches zero when the fraction of nodes that survived the initial failure , approaches the critical value .in contrast , in the momp , the fraction of nodes in the mutual gc , undergoes a discontinuous first - order phase transition at , dropping from a positive value , , for to zero , for . the authors of ref . extended momp to euclidian lattices by studying the process of cascading failures in two lattices and of the same size in which the dependency links are limited by a distance constraint . in this casethere is a particular value of denoted by below which there is a second - order transition and above which the system collapses in a first - order transition .this process is characterized by the formation of spatial holes that burn the entire system when .the first rule of momp is quite general and can be easily verified from an engineering standpoint , but the second rule is not easy to verify .although it seems that a functioning node must belong to the giant component in order to receive sufficient power , information , or fuel from its own network , this condition can be relaxed , i.e. , the second rule in the momp can be replaced by a more general rule ( ) in which a node in order to be functional must belong to a connected component of size greater than or equal to , formed by other functional nodes of this network .this rule is significantly more general and realistic than rule ( 2 ) because the nodes in finite components are still able to receive sufficient commodities to continue functioning .note that the original rule ( 2 ) is actually a particular case of rule ( ) for . in this paper, we will show how the replacement of condition ( 2 ) by the more general condition with affects the results in complex networks and euclidean lattices .the most important role of the momp of a non is played by the function such that is the fraction of nodes in the _ giant component _ of network of the non after a random failure of a fraction of its nodes .the generating function of the degree distribution of network is given by . 
where is the degree distribution of network and the generating function of the excess degree distribution is where is the average degree of network .the fraction of nodes in the giant component relative to the fraction of surviving nodes , is given by , \label{e:3}\ ] ] where is the probability that the branches do not reach the gc , which satisfies the recursive equation .\label{e:4}\ ] ] we also compute the generating function of the component size distribution , \label{e:5}\ ] ] where is the fraction of nodes belonging to components of size in network relative to the fraction of surviving nodes , and satisfies the recursive equation . \label{e:6}\ ] ] note that when , eqs . ( [ e:5 ] ) and ( [ e:6 ] ) are equivalent to eqs . ( [ e:3 ] ) and ( [ e:4 ] ) , respectively , and hence to move from rule ( 2 ) to rule we replace function with function , defined the same as but replacing the words _ giant component _ with _ components of size larger than or equal to . thus , this section we present the analytic solution for two random regular ( rr ) and two erds rny ( er ) interdependent networks . from eq .( [ e:8 ] ) , using the lagrange inversion formula we obtain the coefficients for ^s| _ { x=0 } \ ; , \label{e : piis}\ ] ] and for er graphs with a poisson degree distribution and an average degree and for rr graphs with degree , we can obtain an analytical solution for eq .( [ e : piis ] ) for . for er networks given by and for rr graphs , with degree , for , is given by and when , is !}{(s-1)![s\;(z-2)+2]!}. \label{f : pirr}\ ] ]to illustrate our model , we consider two networks and with degree distributions in which bidirectional interdependency links establish a one - to - one correspondence between their nodes as in ref .the initial random failure of a fraction of nodes in one network at produces a failure cascade in both networks . at step of the failure cascade , the effective fraction of surviving nodes and of networks and , respectively , satisfies the recursive equations and the fractions of nodes belonging to components of size greater than or equal to , and , are given by where and .the process is iterated until the steady state is reached , where and when , the order parameter of our model , , transitions from when to when . inthe most simple case when the networks have identical degree distributions , . at the threshold , and where . because , the second derivative of is always negative , and thus eq .( [ e : mu1 ] ) has a trivial solution at from which , where , and as a consequence the system undergoes a continuous phase transition . for networks with a non - divergent second moment of the degree distributionthe transition is third - order , but for networks with a divergent second moment the transition is of a higher order .however , when , changes the sign of its second derivative from positive at to negative at , and hence eq .( [ e : mu1 ] ) has a nontrivial solution in the interval at which abruptly changes from a positive value above to zero below .thus for we always have a first - order transition , which was previously found , but only for .the different kinds of transitions that we find in our model are reminiscent of the ones found in _k_-core percolation .k_-core of a graph is a maximal connected subgraph of the original graph in which all vertices have degree at least , formed by repeatedly deleting all vertices of degree less than . 
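The closed-form component-size expressions above are easiest to trust once checked against a direct simulation. The sketch below measures, for a randomly diluted ER graph, the fraction of surviving nodes lying in finite components of size s and compares it with the Borel-type formula exp(-pzs)(pzs)^{s-1}/s!, which is our reading of the stripped ER expression and should be treated as an assumption; the values of N, z and p are arbitrary illustrative choices.

```python
# Cross-check of the finite-component size distribution of a node-diluted ER graph
# against the Borel-type formula pi_s = exp(-p*z*s) * (p*z*s)**(s-1) / s!
# (Poisson branching / Lagrange inversion; assumed form, parameters illustrative).
import math, random
from collections import Counter
import networkx as nx

N, z, p = 100_000, 2.0, 0.6
random.seed(1)
G = nx.fast_gnp_random_graph(N, z / (N - 1), seed=1)
G.remove_nodes_from([v for v in list(G) if random.random() > p])

counts = Counter()
for comp in nx.connected_components(G):
    counts[len(comp)] += len(comp)          # number of nodes in components of size s

n_surv = G.number_of_nodes()
for s in (1, 2, 3, 5, 10):
    theory = math.exp(-p * z * s) * (p * z * s) ** (s - 1) / math.factorial(s)
    print(s, counts.get(s, 0) / n_surv, theory)
```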
in particular , in 2-core there is a continuous transition , while for the transition is first - order , as in our model for and respectively .the key difference between the _k_-core transition and our model is that in our model the functionality of a node is not based on its degree but rather on the size of the finite components to which it belongs .the similarity between the phase transitions in our model and the ones in _k_-core is due to a resemblance between the pruning rules of both processes .for example , in our model with , the final state is constituted of nodes with at least one active link in their own network and one dependency link and , hence , all nodes have two active links as in the final state of 2-core .next we will see that the similarities of the phase transitions arise due to the similarities in the leading terms of the taylor expansions of the equations that govern _ k_-core and our model .however , we will also demonstrate that both models do not belong to the same universality class . from eq .( [ e : mu1 ] ) for , at the steady state , the effective fraction of remaining nodes is given by ,\ ] ] where is the fraction of nodes that survived the initial damage , and is the generating function of the degree distribution . for rr , er , and scale - free networks with nondivergent second moment ( ) , close to the threshold at which , expanding eq .( [ eq.0 ] ) around gives ,\ ] ] and solving this equation for leads to equation ( [ eq.3q ] ) shows that , when ; thus there is a continuous phase transition at .recalling that for any degree distribution with converging first and second moments , , , we can rewrite eq .( [ eq.3q ] ) as where , with . since the denominator does not diverge then , with . for two interdependent networks with the same degree distribution ,the order parameter is given by and thus with .for , the second moment diverges , thus using the tauberian theorem the expansion of is given by ,\ ] ] in which , for , so , and as a consequence [ see eq .( [ eq.mu ] ) ] .thus there is a fourth - order phase transition for . in general , for scale - free ( sf ) networks with , the transition is of order for .in contrast with eq .( [ eq.0 ] ) , for 2-core percolation , the fraction of active nodes obeys the equation ,\ ] ] where is the fraction of nodes that survived the initial damage and is the effective fraction of survived links obeying a self - consistent equation .\ ] ] for homogeneous networks , such as rr and er , after expanding eq .( [ eq.4 ] ) around , we obtain .\ ] ] if , then as in regular percolation , and }{p\;g^{\prime\prime\prime}(1)}.\ ] ] finally expanding eq .( [ eq.3 ] ) around leads to , which indicates a third - order phase transition .for sf networks , if , from the tauberian theorem ,\ ] ] from where with and and the transition becomes of the order if . if , , then , and .thus for there is a phase transition but at , and the order parameter of this transition changes in reverse order from infinity for to for with .thus we have a close analogy between the model of functional finite component interdependent networks with and -core percolation in terms of the order of the phase transition .this analogy stems from the similarities in the taylor expansion of the equations describing these two models , but the physical basis on which these equations are constructed totally differs . 
in addition, the order of the transitions differs for sf networks with , and thus the two models do not belong to the same universality class .we test our theoretical arguments with stochastic simulations in which we use the molloy - reed algorithm to construct networks with a given degree distribution .the procedure is as follows : \(1 ) at we remove a random fraction of nodes in network , remove all the nodes in the components of network smaller than , and remove all the dependent nodes in network .\(2 ) at we remove all the nodes in the components of network smaller than and remove all the nodes in network dependent on dead nodes in .\(3 ) we repeat ( 2 ) until no more nodes can be removed .we perform simulations for a system of two er graphs , two rr graphs in which all nodes have the same degree , each of nodes , and two sf graphs with ( see fig . [ fig.2 ] ) .the sf networks have a degree distribution with , where is the exponent of the sf network .we set and . to compare our simulations with the theoretical results [ eq .( [ e : mu1 ] ) ] we use analytical expressions for given in the case of er and rr networks by eq .( [ e : piis ] ) . for sf networks we compute numerically .the details of the analytical solution for er and rr networks are presented in sec .[ s.anal ] .fig1a.eps ( 85,18)*(a ) * ( 20,50)*rr - rr * fig1b.eps ( 85,18)*(b ) * ( 20,50)*er - er * fig1c.eps ( 85,18)*(c ) * ( 20,50)*sf - sf * fig1d.eps ( 85,18)*(d ) * figures [ fig.2](a ) , [ fig.2](b ) , and [ fig.2](c ) show perfect agreement between the theoretical results and the simulations .figure [ fig.2](d ) shows a plot of as a function of for two rr networks with degree , two er networks with , and two sf networks with , and an average degree .as predicted , for and increases as increases .for we recover the mutual percolation threshold of ref . shown as dashed lines in fig . [ fig.2](d ) .we also study the same model for square lattices , generalizing refs . . when there are random interdependency links , i.e. , when there is no geometric constraint on the interdependencies , we use the exact results for the perimeter polynomials of the finite components to compute , where , are the perimeter polynomials for small components on a square lattice .here the system undergoes a first - order phase transition when at the predicted values of for and for , obtained by solving eq .( [ e : mu1 ] ) .when the interdependency links satisfy distance restrictions , we define the distance between the two interdependent nodes in lattices and as the shortest path between the nodes along the bonds of the lattices , i.e. , , where and are the coordinates of the interdependent nodes in lattices and , respectively . using simulations we see a first - order phase transition emerging at a certain value of in qualitative agreement with the case studied by li _et al_. . at this value of system reaches maximum vulnerability , indicated by a maximum of as a function of [ see fig .[ f : pc4](a ) ] .fig2a.eps ( 20,50)*(a ) * fig2b.eps ( 20,50)*(b ) * the value is much greater than the value obtained for the momp . for close to , the cascading failures propagate via node destruction on the domain perimeters composed of surviving node components , and this creates moving interfaces when the size of the void separating the domains is greater than .these moving interfaces belong to the class of depinning transitions characterized by a threshold that increases with ( see fig . [f : pc4 ] ) . 
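A minimal implementation of the pruning procedure described in steps (1)-(3) above, for two interdependent ER graphs with a random one-to-one dependency and functionality defined by components of size at least s*, is sketched below. The network size, mean degree and the values of p and s* are illustrative placeholders, not the parameters used for the figures of the paper.

```python
# Sketch of the cascading-failure pruning for two interdependent ER networks.
# Nodes i in A and i in B depend on each other; a node is functional only if it
# lies in a component of size >= s_star formed by other functional nodes.
import random
import networkx as nx

def prune_small(G, alive, s_star):
    """Keep only nodes of 'alive' lying in components of G[alive] of size >= s_star."""
    keep = set()
    for comp in nx.connected_components(G.subgraph(alive)):
        if len(comp) >= s_star:
            keep |= comp
    return keep

def cascade(N=20_000, z=4.0, p=0.7, s_star=2, seed=0):
    random.seed(seed)
    A = nx.fast_gnp_random_graph(N, z / (N - 1), seed=seed)
    B = nx.fast_gnp_random_graph(N, z / (N - 1), seed=seed + 1)
    alive_A = {v for v in range(N) if random.random() < p}   # step (1): initial damage in A
    alive_B = set(range(N))
    while True:
        alive_A = prune_small(A, alive_A, s_star)            # drop small components in A
        alive_B &= alive_A                                    # kill dependent nodes in B
        alive_B = prune_small(B, alive_B, s_star)             # step (2): drop small components in B
        new_A = alive_A & alive_B                             # kill dependent nodes in A
        if new_A == alive_A:                                  # step (3): steady state reached
            return len(new_A) / N                             # order parameter mu_infinity
        alive_A = new_A

print(cascade(p=0.7, s_star=2), cascade(p=0.7, s_star=3))
```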
here is the critical fraction of nodes remaining after the initial failure , such that for the interface of an infinitely large void will be eventually pinned and stop to propagate .in contrast , when , the interface of the voids propagates freely without pinning and eventually burns the entire system .near , the velocity of the domain interfaces approaches zero with a power - law behavior , where is a critical exponent . in order to compute ,we compute the velocity of the growing interface as a function until we get a straight line in a log - log plot , which corresponds to the value of the critical threshold .the value of the slope of is the critical exponent .we find , suggesting that the interface belongs to the universality class of a kardar - parisi - zhang ( kpz ) equation with quenched noise .as increases , the probability that large voids with a diameter greater than will spontaneously form , decreases , and becomes vanishingly small in a system of a finite size .thus in a finite system we must decrease below in order to create these voids .when , the interface of the voids begins to freely propagate without pinning and eventually , like a forest fire , burns through the entire system .thus the emergence of a first - order transition in a finite system depends on the system size , i.e. , the larger the system , the larger the value at which the effective first - order ( all - or - nothing ) transition is observable . fig3a.eps ( 20,50 ) * * fig3b.eps ( 20,50 ) * * figure [ f : pc4](a ) shows that as continues to increase , begins to decrease and slowly approaches the value for random interdependence as .there is no second - order percolation transition for finite and small that governs the size of the voids , in contrast to what was found by li _et al . _ for . for finite ,a second - order transition emerges when the value is large , , but when there is no transition , the fraction of survived nodes is zero only at , and it continues to be differentiable and independent of the system size for any positive value of .note , however , that as approaches the derivative of develops a sharp peak at a certain value of below which is very small but finite . at see a second - order transition because the height of the peak of the derivative of now increases with the lattice size , which is typical of a second - order transition .this behavior is associated with different regimes of domain formation . for small values , , the first stages of the cascading failure fragment the system into small independent regions , each of which has its own pinned interface(see fig .[ f : picture ] ) . in this regime , after the first stages of the cascade of failures the system practically does not change .after the first stages , the interfaces propagate very slow and can stop at any point leaving the resulting snapshots indistinguishable from the one obtained in the steady state .a single interface emerges only when these regions coalesce at , and a second - order phase transition related to the propagation of this interface through the entire system emerges .this second - order phase transition observed for has a unimodal distribution of the fraction of surviving nodes , and we use the maximum slope of the graph to compute the critical point . 
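The exponent theta in v ~ (p - p_c)^theta is usually extracted by tuning p_c until log v versus log(p - p_c) is as straight as possible and reading off the slope. The snippet below illustrates that recipe on synthetic velocity data; it is not the analysis pipeline behind the exponent quoted above, and the numbers are placeholders.

```python
# Generic log-log fitting recipe for a depinning-type exponent v ~ (p - p_c)^theta.
# The velocity data are synthetic placeholders; in practice v(p) comes from the
# measured displacement of a void interface per cascade step.
import numpy as np

p = np.linspace(0.655, 0.70, 10)
v = 2.1 * (p - 0.65) ** 0.64 * (1 + 0.02 * np.random.default_rng(0).standard_normal(p.size))

def fit_theta(p, v, p_c):
    x, y = np.log(p - p_c), np.log(v)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, np.sum(resid**2)

# scan candidate thresholds and keep the one giving the straightest line
candidates = np.linspace(0.645, 0.654, 50)
(theta, _), p_c = min(((fit_theta(p, v, pc), pc) for pc in candidates), key=lambda t: t[0][1])
print(p_c, theta)
```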
as increases between and , the distribution of becomes bimodal , and we compute the transition point using the condition of equal probability of both modes .note that reaches a maximum at where the two peaks of the distribution of separate completely , as indicated by a wide plateau in the cumulative distribution of .the cumulative distribution of for square lattices is presented in fig .[ f : pmu ] fig4a.eps ( 15,15)*(a ) * fig4b.eps ( 15,15)*(b ) * + fig4c.eps ( 15,15)*(c ) * fig4d.eps ( 15,15)*(d ) * + fig4e.eps ( 15,15)*(e ) * fig4f.eps ( 15,15)*(f ) * the emergence of the first order - phase transition above is related to the decrease of the correlation length as we move away from .we thus find that when is small , is significantly larger than . for the shortest path metric , and and for . as increases , gradually decreases and coincides with for .in summary , we find that in complex networks with , our model has a first order transition as for the previously studied case of momp with . for ,our model has a higher - than - second - order transition similar to that found in _k_-core , but the order of the transitions in sf networks differs depending on the exponent of the degree distribution .however , the finite component generalization of momp in spatially embedded networks has a totally different behavior , which is not related to _ k_-core . in this case, the transitions , when they exist , are dominated by the behavior of the pinning transition of void s interfaces .our model in spatially embedded networks is a rich and interesting phenomenon , which has many practical applications for studying the cascade of failures in real - world infrastructures embedded in space .our work can be extended to any non model incorporating momp , but our finite component model is significantly more general and realistic .we can generalize our model to derive equations for a partially interdependent non . herethe second - order transition will also appear when if the fraction of interdependent nodes is small .the value of can differ in different networks of the non and can be a stochastic variable , such that a component of size survives with probability , as in the heterogeneous _k_-core .the boston and yeshiva university work was supported by dtra grant no .hdtra1 - 14 - 1 - 0017 , by doe contract no .de - ac07 - 05id14517 ; and by nsf grants no .cmmi-1125290 , no .phy-1505000 , and no .s.v.b acknowledges the partial support of this research through the b.w .gamson computational science center at yeshiva university . l.a.b and m.a.d.m . thank unmdp and foncyt , pict 0429/13 , for financial support .
|
we present a cascading failure model of two interdependent networks in which functional nodes belong to components of size greater than or equal to . We find, theoretically and via simulation, that in complex networks with random dependency links the transition is first order for and continuous for . We also study interdependent lattices with a distance constraint in the dependency links and find that increasing moves the system from a regime without a phase transition to one with a second-order transition. As continues to increase, the system collapses in a first-order transition. Each regime is associated with a different structure of domain formation of functional nodes.
|
the internal antarctic plateau is , at present , a site of potential great interest for astronomical applications .the extreme low temperatures , the dryness , the typical high altitude of the internal antarctic plateau ( more than 2500 m ) , joint to the fact that the optical turbulence seems to be concentrated in a thin surface layer whose thickness is of the order of a few tens of meters do of this site a place in which , potentially , we could achieve astronomical observations otherwise possible only by space . in spite of the exciting first results ( see refs . , , ) the uncertainties on the effective gain that astronomers might achieve from ground - based astronomical observation from this location still suffers from serious uncertainties and doubts that have been pointed out in previous work ( see refs . , , ) .a better estimate of the properties of the optical turbulence above the internal antarctic plateau can be achieved with both dedicated measurements done in simultaneous ways with different instruments and simulations provided by atmospheric models .simulations offer the advantage to provide volumetric maps of the optical turbulence ( ) extended on the whole internal plateau and , ideally , to retrieve comparative estimates in a relative short time and homogeneous way on different places of the plateau . in a previous paper group performed a detailed analysis of the meteorological parameters from which the optical turbulence depends on , provided by the general circulation model ( gcm ) of the european center for medium - range weather forecasts ( ecmwf ) . in that work we quantified the accuracy of the ecmwf estimates of all the major meteorological parameters and , at the same time , we pointed out which are the limitations of the general circulation models . in contexts in which the gcms fail , mesoscale models can supply more information .the latter are indeed conceived to reconstruct phenomena ( such as the optical turbulence ) that develop at a too small spatial and temporal scale to be described by a gcm . in spite of the fact that mesoscale models can attain higher resolution than the gcm , thesparameters such as the optical turbulence are not explicitly resolved but are parameterized , i.e. the fluctuations of the microscopic physical quantities are expressed as a function of the corresponding macroscopic quantities averaged on a larger spatial scale ( cell of the model ) . for classical meteorological parameters the use of a mesoscale model should be useless if gcms such as the one of the ecmwf could provide estimate with equivalent level of accuracy .for this reason the hagelin et al paper has been a first step towards the exploitation of the mesoscale meso - nh model .we retrieved all what it was possible from the ecmwf analyses and we defined their limitations at the same time .we concluded that in the first 10 - 20 m , the ecmwf analyses show a discrepancy with respect to measurements of the order of 2 - 3 m.s for the wind speed and of 4 - 5 k for the temperature .preliminary tests concerning the optimization of the model configuration and sensitivity to the horizontal and the vertical resolution with the meso - nh model have already been conducted by our team for the internal antarctic plateau . 
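For readers less familiar with the optical-turbulence observables used in the rest of the paper, the sketch below shows how a seeing value and a surface-layer thickness can be derived from a vertical Cn^2 profile: the standard Fried-parameter integral for the seeing, and the lowest height containing a chosen fraction of the integrated Cn^2 for the layer thickness. Both the toy profile and the 90% fraction are assumptions made for illustration; they are not the criterion of Trinquet et al. examined later in the text.

```python
# Toy computation of seeing and of a surface-layer thickness from a Cn^2 profile.
# Profile shape and the 90% threshold are illustrative assumptions.
import numpy as np

lam = 500e-9                                     # wavelength [m]
h = np.linspace(0.0, 1000.0, 2001)               # height above ground [m]
cn2 = 1e-14 * np.exp(-h / 30.0) + 1e-17          # toy Cn^2 profile [m^(-2/3)]

J = np.trapz(cn2, h)                             # integrated Cn^2
r0 = (0.423 * (2 * np.pi / lam) ** 2 * J) ** (-3.0 / 5.0)     # Fried parameter
seeing_arcsec = np.degrees(0.98 * lam / r0) * 3600.0
print(f"seeing ~ {seeing_arcsec:.2f} arcsec")

cum = np.cumsum((cn2[:-1] + cn2[1:]) / 2 * np.diff(h))        # cumulative integral
h_sl = h[1:][np.searchsorted(cum, 0.90 * cum[-1])]
print(f"surface-layer thickness (90% of integrated Cn^2) ~ {h_sl:.0f} m")
```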
in this paperwe present further progress of that work .more precisely , we intend : * to compare the performances of the mesoscale meso - nh model and the ecmwf general circulation model in reconstructing wind speed and absolute temperature ( main meteorological parameters from which the optical turbulence depends on ) with respect to the measurements .this analysis will quantify the performances of the meso - nh model with respect to the gcm from the ecmwf . * to perform simulations of the optical turbulence above dome c employing different model configurations and compare the typical simulated thickness of the surface layer with the one measured by trinquet et al . in this way we aim to establish which configurationis necessary to reconstruct correctly the . [ cols="^ " , ] the three nights presented here exhibit different turbulent conditions .they show the ability of the model to react and predict the evolution of different surface layers . during the first night ( 2005july 11 ) the surface layer is constant in time , with a mean forecasted thickness of 82 m ( with the grid - nested simulation , fig .[ fig : ot]b ) .the observation for this night gave a value of 98 m at 14:10 utc . the second night ( 2005 july 25 )has a forecasted mean surface layer height lower ( with the grid - nested simulation , fig .[ fig : ot]d ) : 17 m. the observed value was 22 m at 13:53 utc .the third and last night displayed ( 2005 august 29 ) has a forecasted mean surface layer height of 97 m ( always with the grid - nested simulation , [ fig : ot]f ) .the observed value for this night was 47 m at 14:47 utc . in two of these three nights , the thickness of the surface layer retrieved by the model is well correlated with the observed one .one can notice that for these two nights , the monomodel simulations give higher values than the grid - nested simulations .the third night shows some interesting variability of the surface layer height , which are not present in the other nights and that is a signature of an evident temporal evolution of the turbulent energy distribution even in conditions of a pretty stratified atmosphere . for this nightmeso - nh gives a thickness for the surface layer of around 97 m instead of the observed one ( 47 m ) .in this paper we studied the performances of a mesoscale meteorological model , meso - nh , in reconstructing wind and temperature vertical profiles above concordia station , a site in the internal antarctic plateau .two different configurations were tested : monomodel low horizontal resolution , and grid - nesting high horizontal resolution .the results were compared to the ecmwf general circulation model and radiosoundings .* we showed that near the surface , meso - nh retrieved better the wind vertical gradient than the ecmwf analyses , thanks to the use of a highest vertical resolution .more over , the analysis of the first vertical grid point permits us to conclude that , as is , the meso - nh model surface temperature is closest to the observations than the ecmwf general circulation model which is too warm . 
* the outputs from the grid - nested simulations are closer to the observations than the monomodel simulations .this study highlighted the necessity of the use of high horizontal resolution to reconstruct a good meteorological field in antarctica , even if the orography is almost flat over the internal antarctica plateau .the computations estimates from a previous study are probably affected by the low horizontal resolution they used in their simulations .* for what concerns the optical turbulence , both configurations predict a mean surface layer height higher than in the observations . however , it is always inferior at 100 m ( when we used the criterion [ eq : bl1 ] of trinquet et al ) .the configuration of meso - nh giving better estimate ( closer to observations ) is in grid - nesting mode . *the results of meso - nh concerning the computation of the mean thickness of the surface layer are not very dependent of the time interval used to average it .this widely simplifies the analysis of simulations . *the criterion used in trinquet et al appears to be misleading .indeed , it underestimates the typical thickness of the turbulent surface layer .we propose the use of another critera ( presented in the last section of the paper ) instead .it gives a mean value higher of around 15 m for the eleven nights .this result has an important implication for astronomical application .it is indeed not so realistic to envisage a telescope placed above a tower of more than 50 m and the use of the adaptive optics remains the unique path to follow to envisage astronomical facilities at dome c. in this paper we did not present results concerning the seeing .we will focus our work ahead on this parameter , discriminating between two different partial contributions : the seeing in the free atmosphere , and the seeing of the surface layer .we plan to make comparisons between meso - nh output and observations .the study has been carried out using radiosoundings from the progetto di ricerca `` osservatorio meteo climatologico '' of the programma nazionale di ricerche in antartide ( pnra ) , _ecmwf analyses are extracted from the mars catalog , _ http://www.ecmwf.int_. this study has been funded by the marie curie excellence grant ( forot ) - mext - ct-2005 - 023878 .e. masciadri , j. vernin and p. bougeault , `` 3d mapping of optical turbulence using an atmospheric numerical model .i : a useful tool for the ground - based astronomy . '' ,a&ass , 137 , pp . 185 - 202 , 1999 .e. masciadri , j. vernin and p. bougeault , `` 3d mapping of optical turbulence using an atmospheric numerical model .ii : first results at cerro paranal . '' , a&ass , 137 , pp . 203 - 216 , 1999 .e. masciadri and p. jabouille , `` improvements in the optical turbulence parameterization for 3d simulations in a region around a telescope '' , a&a , 376 , pp. 727 - 734 , 2001 .e. masciadri , r. avila and l.j .sanchez , `` statistic reliability of the meso - nh atmospherical model for 3d simulations '' , rmxaa , 40 , pp .3 - 14 , 2004 .a. agabi , e. aristidi , m. azaouit , e. fossat , f. martin , t. sadibekova , j. vernin and a. ziad , `` first whole atmosphere nighttime seeing measurements at dome c , antarctica '' , pasp , 118 , pp . 344 - 348 , 2006 . j. lawrence , m. ashley , a. tokovinin , t. travouillon , `` exceptional astronomical seeing conditions above dome c in antarctica '' , nature , 431 , pp . 278 - 281 , 2004 .h. trinquet , a. agabi , j. vernin , m. azout , e. aristidi and e. 
fossat , `` nighttime optical turbulence vertical str ucture above dome c in antarctica '' , pasp , 120 , pp . 203 - 211 , 2008 .s. hagelin , e. masciadri , f. lascaux and j. stoesz , `` comparison of the atmosphere above the south pole , dome c and dome a : first attempt '' , mnras , accepted .e. aristidi , k. agabi , m. azouit , e. fossat , j. vernin , t. travouillon , j.s .lawrence , c. meyer , j.w.v .storey , b. halter , w.l roth and v. walden , `` an analysis of temperatures and wind speeds above dome c , antarctica '' , a&a , 430 , pp .739 - 746 , 2005 .e. masciadri , f. lascaux , j. stoesz , s. hagelin and k. geissler , `` a different ' ' glance `` to the site testing above dome c '' , roscoff arena workshop 13 - 20 october 2006 . k. geissler and e. masciadri , `` meteorological parameter analysis above dome c using data from the european centre for medium - range weather forecasts '' , pasp , 118 , 845 , pp .1048 - 1065 , 2006 .f. lascaux , e. masciadri , j. stoesz and s. hagelin , `` mesoscale simulations above antarctica for astronomical applications : first approaches '' , symposium on seeing , kona - hawaii , 20 - 22 march 2007 .proceeding available at _ http://weather.hawaii.edu / symposium / publications_. j. p. lafore , j. stein , n. asencio , p. bougeault , v. ducrocq , j. duron , c. fischer , p. hereil , p. mascart , v. masson , j .-pinty , j .- l .redelsperger , e. richard and j. vil - guerau de arellano , `` the meso - nh atmospheric simulation system .part i : adiabatic formulation and control simulations '' , annales geophysicae , 16 , pp .90 - 109 , 1998 .f. lipps and r. s. hemler , `` a scale analysis of deep moist convection and some related numerical calculations '' , j. atmos ., 39 , pp .2192 - 2210 , 1982. t. gal - chen and c.j .sommerville , `` on the use of a coordinate transformation for the solution of the navier - stokes equations '' , j. comput .phys . , 17 , pp . 209 - 228 , 1975 .a. arakawa and f. messinger , `` numerical methods used in atmospheric models '' , garp tech ., 17 , wmo / icsu , geneva , switzerland , 1976 .r. asselin , `` frequency filter for time integration '' , mon ., 100 , pp . 487 - 490 , 1972 .j. cuxart , p. bougeault and j .- l .redelsperger , `` a turbulence scheme allowing for mesoscale and large - eddy simulations '' , q. j. r. meteorol .soc . , 126 , pp . 1 - 30 , 2000 .p. bougeault and p. lacarrre , `` parameterization of orographic induced turbulence in a mesobeta scale model '' , mon ., 117 , pp . 1972 - 1890 , 1989 .j. noilhan and s. planton , `` a simple paramterization of land surface processes for meteorological models '' , mon . weather ., 117 , pp . 536 - 549 , 1989 .e. masciadri and s. egner , `` first seasonal study of optical turbulence with an atmospheric model '' , pasp , 118 , 849 , pp .1604 - 1619 , 2006. j. stein , e. richard , j. p. lafore , j .- p .pinty , n. asencio and s. cosma , `` high - resolution non - hydrostatic simulations of flashflood episodes with grid nesting and ice phase parameterization '' , meteorol ., 72 , pp . 203 - 221 , 2000 .m. swain and h. galle , `` antarctic boundary layer seeing '' , pasp , 118 , pp .1190 - 1197 , 2006 .
|
mesoscale models such as Meso-NH have proven to be highly reliable in reproducing 3D maps of optical turbulence (see refs. , , , ) above mid-latitude astronomical sites. In recent years ground-based astronomy has been looking towards Antarctica, especially its summits and the internal continental plateau, where the optical turbulence appears to be confined in a shallow layer close to the icy surface. Preliminary measurements have so far indicated promising seeing values above 30-35 m: 0.36" (see ref. ) and 0.27" (see refs. , ) at Dome C. Site testing campaigns are, however, extremely expensive, and instruments provide only local measurements; atmospheric modelling might therefore represent a step ahead in the search for and selection of astronomical sites, thanks to the possibility of reconstructing 3D maps over a surface of several kilometers. The Antarctic plateau therefore represents an important benchmark to evaluate the possibility of discriminating between sites on the same plateau. Our group has shown that the analyses from the ECMWF global model do not describe the Antarctic boundary and surface layer over the plateau with the required accuracy. A better description can be obtained with a mesoscale meteorological model. In this contribution we present a progress report on numerical simulations (including the optical turbulence ) obtained with Meso-NH above the internal Antarctic plateau. Among the topics addressed are: the influence of different model configurations (low and high horizontal resolution), the use of the grid-nesting interactive technique, and the forecasting of the optical turbulence during some winter nights.
|
a characteristic feature of many real - world random phenomena is that the _ magnitude _ or the _ intensity _ of realized fluctuations varies in time or space , or both .there are various terms used in different contexts that roughly correspond to this characteristic . to highlight two of them , in studies of turbulence , this is called _ intermittency _ , whereas in finance and economics the corresponding notion is ( stochastic ) _volatility_. sudden extreme fluctuations say , rapid changes in wind velocity or prices of financial securities have often dire consequences , so understanding their statistical properties is clearly of key importance .barndorff - nielsen and schmiegel have introduced a class of lvy - based random fields , for which they coined the name _ ambit field _ , to model space - time random phenomena that exhibit intermittency or stochastic volatility .the primary application of ambit fields has been phenomenological modeling of turbulent velocity fields .additionally , barndorff - nielsen , benth , and veraart have recently applied ambit fields to modeling of the term structure of forward prices of electricity .electricity prices , in particular , are prone to rapid changes and spikes since the supply of electricity is inherently inelastic and electricity can not be stored efficiently .it is also worth mentioning that , at a more theoretical level , some ambit fields have been found to arise as solutions to certain stochastic partial differential equations .barndorff - nielsen , benth , and veraart provide a survey on recent results on ambit fields and related _ ambit processes_. in this paper , we study the asymptotic behavior of power variations of a two - parameter ambit field driven by white noise , with a view towards measuring the realized volatility of the ambit field .specifically , we consider ambit field ^ 2} ] , so that only past innovations can influence the present .we refer to for a discussion on the possible shapes of in various modeling contexts .we consider here only the case where the volatility field and the white noise are _ independent_. in this case the integral in can be defined in a straightforward manner as a _ wiener integral _ , conditional on .( ambit fields with volatilities that do depend on the driving white noise can be defined , but then the integration theory becomes more involved , see for details .moreover , the general framework of ambit fields also accommodates non - gaussian random measures , lvy bases , as driving noise . )the power variations we study are defined over observations of on a _ square lattice _ in ^ 2 ] converges to .this could be seen as a first step towards a theory of volatility estimation for ambit fields .there is a wealth of literature on laws of large numbers and central limit theorems for power , bipower , and multipower variations of ( one - parameter ) stochastic processes .notably , semimartingales are well catered for , see the monograph by jacod and protter for a recent survey of the results .similar results for non - semimartingales are , for obvious reasons , more case - specific . 
closely relevant to the present paper are the results for gaussian processes with _ stationary increments _ and _ brownian semistationary processes _ .in fact , a brownian semistationary process is the one - parameter counterpart of an ambit field driven by white noise .the proofs of the central limit theorems in use method that involves gaussian approximations of _ iterated wiener integrals _ , due to nualart and peccati .we employ a similar approach , adapted to the two - parameter setting , in the proof of our central limit theorem .barndorff - nielsen and graversen have recently obtained a law of large numbers for the quadratic variation of an ambit process driven by white noise in a space - time setting . the probabilistic setup they consider is identical to ours , but their quadratic variation is defined over observations along a line in two - dimensional spacetime , instead of a square lattice .the proof of our law of large numbers is inspired by the arguments used in .compared to the one - parameter case , asymptotic results for lattice power variations of random fields with two or more parameters are scarser .there are , however , several results for gaussian random fields , under various assumptions constraining their covariance structure .kawada proves a law of large numbers for general variations of a class of multi - parameter gaussian random fields , extending an earlier result of berman .guyon derives a law of large numbers for power variations ( using two kinds of increments ) of a stationary , two - parameter gaussian random field with a covariance that behaves approximately like a power function near the origin .an early functional central limit theorem for quadratic variations of a multi - parameter gaussian random field , is due to deo .motivated by an application to statistical estimation of fractal dimension , chan and wood prove a central limit theorem for quadratic variations of a stationary gaussian random field satisfying a covariance condition that is somewhat similar to the one of guyon .more recently , rveillac has obtained central limit theorems for weighted quadratic variations of ordinary and fractional brownian sheets .similar results , which include also non - central limit theorems , applying to more general _ hermite variations _ of fractional brownian sheets appear in the papers by breton and rveillac , stauch , and tudor .for any , non - empty , and , we write , and .moreover , stands for the closure of in . for any , ,we use and , , , and . it will be convenient to write ( resp . ) whenever there exists that depends only on the parameter , such that ( resp . ) .we write to signify that both and hold .we denote the weak convergence of probability measures by , the convergence of random elements in law by , and the space of borel probability measures on by .the support of , or briefly , is the smallest closed set with full -measure , given by .the lebesgue measure on is denoted by and the dirac measure at by . for any , we write ] with the lebesgue measure as the _ control measure_. recall that this means that is a zero - mean gaussian process indexed by ^ 2) ] for any , ^ 2) ] is a continuous , strictly positive _ volatility field _ , independent of .let us denote by the _ essential support _ of ( see , e.g. , for the definition ) . in it suffices to integrate over the set thus, we recover the setting outlined in and with . 
to ensure that ^ 2 ], we assume that ^ 2 ] , and ^ 2,{\mathbb{r}}_+ ) , \mathcal{b}(c([-1,1]^2,{\mathbb{r}}_+)),{\mathbf{p}}_\sigma\big)\ ] ] is the canonical probability space of , i.e. , for any and ^ 2 ] , we define to be the wiener integral of the function , which belongs to ^ 2) ] and ( see , e.g. , for details ) , no issues will arise with the measurability of .given any continuous function \longrightarrow [ 0,1]^2 ] , giving the description of the ambit field as seen by an observer moving along the curve .such processes are called _ambit processes_. barndorff - nielsen and graversen study the limit behavior of the quadratic variation of } ] is defined as the definition is standard in the literature of random fields , and can be recovered for example by partial differencing of with respect to and or vice versa . although not needed in the sequel , it is worth pointing out the fact that the map can be extended to a finitely additive random measure on the algebra generated by finite unions and intersections of rectangles in ^ 2 ] for any .based on the values of on the lattice , we may compute the increments of over the rectangles \times \big((j-1)/n , j / n \big ] , \quad \textrm{,~.}\ ] ] using them , we define the -th power variation of over by ^ 2.\ ] ] where is a _thinning parameter_. this allows us to take only every -th increment into account when computing the power variation .the case corresponds to ordinary power variations whereas letting gives rise to thinned power variations . note that we regard as a random field on ^ 2 ] stands for the natural two - parameter generalization of the cdlg space )\subset { \mathbb{r}}^{[0,1]} ] , along with some useful related facts .[ thm : powerlln ] if assumption [ asm : lln ] holds , then {{\mathbf{p } } } m_p \sigma^{(p,\pi ) } \quad \textrm{in ,}\ ] ] where ^ 2.\ ] ] assumption [ asm : lln ] is slightly more restrictive than mere _ mutual singularity _ of and .indeed , the proof of theorem [ thm : powerlln ] uses a separation argument that relies on the existence of a _ closed _-null set with full -measure .the case where , for some ^ 2 ] such that for all , 1 .[ c : null ] , 2 .[ c : inter ] for any such that , 3 .[ c : decay ] .the sets should be seen as shrinking `` neighborhoods '' of the point .in fact , items and imply that for all , \times[t_0-\varepsilon_n , t_0+\varepsilon_n].\ ] ] thus , by item , assumption [ asm : lln ] holds with .concrete examples of specifications of the weight function that satisfy assumption [ asm : clt ] are provided in and , below .the central limit theorem is stated in terms of _ stable convergence in law _ , a notion due to rnyi , which is the standard mode of convergence used in central limit theorems for power , bipower , and multipower variations of stochastic processes . for the convenience of the reader ,we recall here the definition .let be random elements in a metric space , defined on the probability space , and let be a random element in , defined on , an extension of .when is a -algebra , we say that converge _-stably in law _ to and write , if \xrightarrow [ n\rightarrow \infty ] { } { \mathbf{e}}'[f(u ) v]\ ] ] for any bounded , -measurable random variable and bounded . choosing in shows that stable convergence implies ordinary convergence in law . 
however , the converse is not true in general .[ thm : powerclt ] if assumption [ asm : clt ] holds , then \big ) \xrightarrow [ n\rightarrow \infty]{l_\mathcal{f } } ( m_{2p}-m^2_p)^{1/2 } \xi^{(p ) } \quad \textrm{in ,}\ ] ] where \times[-t_0,t - t_0 ] } \sigma^p_{(u , v ) } { w}^\perp({\mathrm{d}}u , { \mathrm{d}}v ) , \quad ( s , t ) \in [ 0,1]^2\ ] ] and is a white noise on ^ 2 ] .assumption [ asm : clt ] , of course , can not hold under this specification of . to satisfy assumption [ asm : clt ] , the weights imposed by should be concentrated to a neighborhood of some point in ^ 2 d([0,1]^2) ] , instead of the limit given by the law of large numbers . while it is shown in the proof of theorem [ thm : powerlln ] that , under assumption [ asm : lln ] , \xrightarrow [ n\rightarrow \infty ] { } m_p \sigma^{(p,\pi)}_{(s , t ) } \quad \textrm{for any ,}\ ] ] the rate of convergence in appears to be in most , if not all , cases too slow that we could replace ] , and , consequently , - m_p \sigma^{(p,\pi)}_{(s , t)}\bigg ) < 0.\ ] ] this peculiarity limits the usefulness of theorem [ thm : powerclt ] in the context of statistical inference ( e.g. , regarding confidence intervals ) on .is it possible to extend theorem [ thm : powerclt ] to cover _ ordinary _ power variations ?quite possibly , but we expect that the limit would not remain the same .in fact , we conjecture that the situation is analogous to brownian semistationary ( ) processes ( see ) . recall that ordinary power variations of processes , under certain conditions , satisfy a central limit theorem ( * ? ? ?* theorem 3.2 ) with a limit analogous to , but multiplied with a constant that is strictly larger than , whereas the limit in the corresponding result for thinned power variations ( * ? ? ?* theorem 4.5 ) has the factor .this is a consequence of the non - generate limiting correlation structure ( which identical to the one of _ fractional brownian noise _ ) of the increments of a process . thinning decreases the asymptotic variance in the central limit theorem through `` decorrelation '' of the increments , but at the expense of rate of convergence . while our theorem [ thm : powerclt ] is analogous to theorem 4.5 of , obtaining a central limit theorem for unthinned power variations , akin to theorem 3.2 of , is currently an open problem , which we hope to address in future work , along with allowing for that depends on the driving noise .the key problem is the identification of the limiting correlation structure of the increments .however , it seems that such a result can not be accomplished by a straightforward modification of the arguments in since the one - dimensional regular variation techniques used with processes appear unapplicable in our setting due to the additional dimension .we also expect that , like in , such a result would require stronger assumptions on the dependence structure of the ambit field beyond what we formulate using the concentration measures and a smoothness condition on .in this section , we prove the law of large numbers for power variations , theorem [ thm : powerlln ] . 
the proof is based on the conditional gaussianity of the ambit field given and , in particular , on a covariance bound for nonlinear transformations of jointly gaussian random variables , which we will review first .note that conditional on is typically non - stationary and the existing laws of large numbers for gaussian random fields appear not to be ( at least directly ) applicable to this setting .recall that the _ hermite polynomials_ on are uniquely defined through the generating function they are orthogonal polynomials with respect to the gaussian measure on .more precisely , if is a gaussian random vector such that = { \mathbf{e}}[x_2 ] = 0 ] , then ( cf .* lemma 1.1.1 ) ) = \begin{cases } { \mathbf{e}}[x_1x_2]^n , & n = m,\\ 0 , & n\neq m. \end{cases}\ ] ] thus , is an orthonormal basis of and , in particular , for any there exists such that the index of the leading non - zero coefficient in the expansion , that is , , is known as the _ hermite rank _ of the function .using and , it is straightforward to establish the following bound for covariances of functions of jointly gaussian random variables that is sometimes attributed to j. bretagnolle ( see , e.g. , ( * ? ? ?* lemme 1 ) ) .this simple inequality is , in fact , a special case of a far more general result due to taqqu ( * ? ? ?* lemma 4.5 ) .[ lem : hermiterank ] let be as above .if has hermite rank , then | \lesssim_f |{\mathbf{e}}[x_1x_2]|^q \quad \textrm{for any .}\ ] ] for any , write , .clearly , and gaussian integration by parts shows that the hermite rank of is .thus , lemma [ lem : hermiterank ] implies that , below .prior to proving theorem [ thm : powerlln ] , we still need to establish a simple fact that follows from the convergence . to this end ,recall that the _ lvy prohorov distance _ of , is defined as the lvy prohorov distance is a metric on and holds if and only if ( see , e.g. , ) .below , we write , for the sake of brevity .[ lem : prohorov ] if , then there exists positive numbers such that and .let be such that and for any . by the definition of the lvy prohorov distance , for any .since , we have . 
clearly , we have if and .thus , by lemma [ lem : unifconv ] , it suffices to establish pointwise convergence {{\mathbf{p } } } m_p \int_0^{s } \int_0^{t } \bigg(\int \sigma_{(u-\xi , v-\tau)}^2\pi({\mathrm{d}}\xi,{\mathrm{d}}\tau)\bigg)^{p/2 } { \mathrm{d}}u { \mathrm{d}}v\ ] ] for any ^ 2 ] and as a gaussian random field .let us first show that = m_p \int_0^{s } \int_0^{t } \bigg(\int \sigma_{(u-\xi , v-\tau)}^2\pi({\mathrm{d}}\xi,{\mathrm{d}}\tau)\bigg)^{p/2 } { \mathrm{d}}u { \mathrm{d}}v.\ ] ] since & = m_p { \mathbf{e}}_{w}\big[\big|y\big(r^{(n)}_{(i , j)}\big)\big|^2\big]^{p/2 } \\ & = m_p c_n^{p/2 } \bigg(\int \sigma^2_{(i / n - \xi , j / n - \tau ) } \pi_n({\mathrm{d}}\xi,{\mathrm{d}}\tau)\bigg)^{p/2 } , \end{split}\ ] ] we have & = m_p\varepsilon^2_n \sum_{i=1}^{\lfloor s/\varepsilon_n \rfloor}\sum_{j=1}^{\lfloor t/\varepsilon_n \rfloor } \bigg(\int \sigma^2_{(\varepsilon_n i - \xi , \varepsilon_n j - \tau ) } \pi_n({\mathrm{d}}\xi,{\mathrm{d}}\tau)\bigg)^{p/2 } \\ & = m_p \int_0^{\lfloor s \rfloor_n}\int_0^{\lfloor t \rfloor_n}\bigg(\int \sigma^2_{(\lceil u \rceil_n - \xi , \lceil v \rceil_n - \tau ) } \pi_n({\mathrm{d}}\xi,{\mathrm{d}}\tau)\bigg)^{p/2}{\mathrm{d}}u { \mathrm{d}}v , \end{split}\ ] ] where and for any and .since and as , the convergence follows from lebesgue s dominated convergence theorem , provided that for any ^ 2 ] .thus , by lebesgue s dominated convergence theorem , it suffices to show that tends to zero almost everywhere as .we will split this task into two parts by treating separately and where is a sequence of positive real numbers such that and , the existence of which is ensured by lemma [ lem : prohorov ] .applying the cauchy schwarz inequality to , we obtain { } 0 .\end{split}\ ] ] similarly , in the case of we obtain where , however , a slightly more elaborate argument , inspired by the proof of lemma 1 in , is needed to show convergence to zero . by urysohn s lemma , for any there exists ) ] , which will have a special role in what follows .moreover , let ^{2k}) ] , for any , the -fold _ iterated wiener integral _ of the kernel with respect to the white noise , denoted by , can be defined as a linear map with the key property = k!\| f\|^2_{{\mathcal{h}}^{\otimes k}}.\ ] ] ( for the details of the construction , see . )the remarkable feature of these integrals is that any admits a unique _ chaos decomposition _* theorem 1.1.2 ) , where for any , with the convention that ] , where since given , andsince the hermite rank of the function is , we have the expansion \big ) \\ & = \varepsilon_n c^{-p/2}_n \sum_{i=1}^{\lfloor s/\varepsilon_n \rfloor}\sum_{j=1}^{\lfloor t/\varepsilon_n \rfloor } \|f_{n,(i , j)}\|_{\mathcal{h}}^{p } u_p \big(\|f_{n,(i , j)}\|_{\mathcal{h}}^{-1}y\big(r^{(n)}_{(k_n i , k_n j ) } \big ) \big ) \\ & = \varepsilon_n c^{-p/2}_n \sum_{i=1}^{\lfloor s/\varepsilon_n \rfloor}\sum_{j=1}^{\lfloor t/\varepsilon_n \rfloor } \|f_{n,(i , j)}\|_{\mathcal{h}}^{p } \sum_{k=2}^\infty \alpha_k h_k\big(\|f_{n,(i , j)}\|_{\mathcal{h}}^{-1}y\big(r^{(n)}_{(k_n i , k_n j ) } \big ) \big ) .\end{split}\ ] ] as , the hermite representation of iterated wiener integrals ( * ? ? ?* theorem 13.25 ) yields by plugging into and rearranging , we arrive at the asserted chaos decomposition .[ rem : hermite ] since are the non - zero coefficients in the hermite expansion of , we have we will use lemma [ lem : chaosclt ] to prove the convergence of the finite - dimensional distributions of , using the chaos decomposition . 
to this end , we study the asymptotic behavior of the kernels in .[ lem : kerasy ] if assumption [ asm : clt ] holds , then for any , ^ 2 ] consists of functions ^ 2 \longrightarrow { \mathbb{r}} ] , the following two conditions hold .* we have if is a sequence in such that , * for any , there exists that satisfies if is a sequence in such that .in other words , ^ 2) ]. the space ^ 2) ] such that , where and are increasing bijections \longrightarrow[0,1] ] .we say that in the _ skorohod topology _ if there exist such that ^ 2 } | f_n \circ \lambda_n ( s , t ) - f(s , t)|+ \sup_{(s ,t)\in [ 0,1]^2 } \| \lambda_n(s , t)-(s , t)\| \xrightarrow [ n\rightarrow \infty ] { } 0.\ ] ] there is a _ skorohod metric _ on ^ 2) ] enjoys the usual properties of separability and completeness ( i.e. , it is a _ polish _ space ) , similarly to ) ] with the ( non - separable ) _ uniform topology _ , thanks to the following result .[ lem : skorohod ] let ^ 2) ] . then , in the skorohod topology if and only if uniformly .it is obvious that uniform convergence implies convergence in the skorohod topology . to show the converse ,let us fix .since is uniformly continuous , there exists such that if .now , let be such that holds .then there exists such that for all , ^ 2 } | f_n \circ \lambda_n ( s , t ) - f(s , t)|+\sup_{(s , t)\in [ 0,1]^2 } \| \lambda_n(s , t)-(s , t)\| < \frac{\varepsilon}{2 } \wedge \delta.\ ] ] by the triangle inequality , we have thus for all , ^ 2}|f_n(s , t)-f(s , t)| & = \sup_{(s , t)\in [ 0,1]^2}|f_n\circ \lambda_n(s , t)-f \circ \lambda_n(s , t)| \\ & \leqslant \sup_{(s , t)\in [ 0,1]^2}|f_n \circ \lambda_n(s , t)-f(s , t)|\\ & \quad + \sup_{(s , t)\in [ 0,1]^2}|f(s , t)-f \circ \lambda_n(s , t)| < \varepsilon , \end{split}\ ] ] which completes the proof .the following simple lemma is a key tool in proofs of stable convergence in law .it is certainly well - known and , indeed , used ( implicitly ) in several papers ( e.g. , ) , but due to lack of a reference , we provide a proof for the convenience of the reader .[ lem : stable ] let and be polish spaces . if are random elements in and is a random element in , all defined on a common probability space , such that {l } ( u , v),\ ] ] then {l_{\sigma(v ) } } u.\ ] ] we will use a monotone class argument . to this end , let be bounded and write = { \mathbf{e}}'[f(u)x ] \big\}.\ ] ] clearly , is vector space that contains all constant random variables . moreover ,if and , then - { \mathbf{e}}'[f(u)x]|\lesssim_f |{\mathbf{e}}'[f(u_n)\tilde{x } ] - { \mathbf{e}}'[f(u)\tilde{x}]| + { \mathbf{e}}'[|x-\tilde{x}|].\ ] ] hence , is closed under uniform convergence and if is such that for some constant , then .now , note that is closed under multiplication and by the continuous mapping theorem .thus , by the functional monotone class lemma , contains any bounded -measurable random variable . since is separable , we have and the assertion follows . o. e. barndorff - nielsen , f. e. benth , and a. e. d. veraart ( 2011 ) : ambit processes and stochastic partial differential equations . in_ advanced mathematical methods for finance _ , pp. 3574 .springer , heidelberg .o. e. barndorff - nielsen , f. e. benth , and a. e. d. veraart ( 2012 ) : recent advances in ambit stochastics with a view towards tempo - spatial stochastic volatility / intermittency .] o. e. barndorff - nielsen , j. m. corcuera , and m. podolskij ( 2013 ) : limit theorems for functionals of higher order differences of brownian semi - stationary processes . in a.n. shiryaev , s. r. s. 
varadhan , and e. l. presman , eds . , _prokhorov and contemporary probability theory _ , pp .springer , berlin .o. e. barndorff - nielsen and j. schmiegel ( 2007 ) : ambit processes : with applications to turbulence and tumour growth . in _ stochastic analysis and applications _ ,vol . 2 of _ abel symp .. 93124 .springer , berlin .j. m. corcuera , e. hedevang , m. s. pakkanen , and m. podolskij ( 2013 ) : asymptotic theory for brownian semi - stationary processes with application to turbulence ._ stochastic process .appl . _ * 123*(7 ) , 25522574 .g. peccati and c. a. tudor ( 2005 ) : gaussian limits for vector - valued multiple stochastic integrals . in _ sminaire de probabilits xxxviii _ , vol .1857 of _ lecture notes in math ._ , pp . 247262 .springer , berlin .j. schmiegel , o. e. barndorff - nielsen , and h. c. eggers ( 2005 ) : a class of spatio - temporal and causal stochastic processes with application to multiscaling and multifractality ._ * 101 * , 513519 . m. s. taqqu ( 1977 ) : law of the iterated logarithm for sums of non - linear functions of gaussian variables that exhibit a long range dependence ._ z. wahrscheinlichkeitstheorie und verw. gebiete _ * 40*(3 ) , 203238 .
|
we study the asymptotic behavior of lattice power variations of two - parameter ambit fields that are driven by white noise . our first result is a law of large numbers for such power variations . under a constraint on the memory of the ambit field , normalized power variations are shown to converge to certain integral functionals of the volatility field associated to the ambit field , when the lattice spacing tends to zero . this law of large numbers holds also for thinned power variations that are computed by only including increments that are separated by gaps with a particular asymptotic behavior . our second result is a related stable central limit theorem for thinned power variations . additionally , we provide concrete examples of ambit fields that satisfy the assumptions of our limit theorems . _ keywords : _ ambit field , power variation , law of large numbers , central limit theorem , chaos decomposition _ 2010 mathematics subject classification : _ 60g60 ( primary ) , 60f17 ( secondary )
|
in the last few years , research on complex networks has become a focus of attention from the scientific community . one of the main reasons behind the popularity of complex networks is their flexibility and generality for representing real systems in nature and society . researchers have done a lot of empirical studies , uncovering that various real - life networks share some generic properties : power - law degree distribution , small - world effect including small average path length ( apl ) and high clustering coefficient . recently , many authors have described some real - world systems in terms of weighted networks , where an interesting empirical phenomenon has been observed : there exists a power - law scaling relation between the strength and degree of nodes , i.e. with . with the intention of studying the above properties of real - world systems , a wide variety of models have been proposed . watts and strogatz , in their pioneering paper , introduced the famous small - world network model ( ws model ) , which exhibits small apl and high clustering coefficient . another well - known model is barabsi and albert s scale - free network model ( ba model ) , which has a degree distribution of power - law form . however , in these two elegant models , the scale - free feature and high clustering are mutually exclusive . driven by these two seminal papers , a considerable number of other models have been developed that may represent more realistically the processes taking place in real - world networks . very recently , barrat , barthlemy , and vespignani have introduced a model ( bbv ) for the growth of weighted networks , which is the first weighted network model that yields a scale - free behavior for strength and degree distributions . inspired by bbv s remarkable work , various weighted network models have been proposed to explain the properties found in real systems . these models may give some insight into the corresponding real systems . particularly , some of them present all three of the above - mentioned characteristics : power - law degree distribution , small - world effect , and power - law strength - degree correlation . although great progress has been made in the research of network topology , modeling complex networks with general structural properties is still of current interest . on the other hand , fractals are an important tool for the investigation of physical phenomena . they have been used to describe physical characteristics of objects in nature and living systems such as clouds , trees , mountains , rivers , coastlines , waves on a lake , bronchi , and the human circulatory system , to mention but a few . a vast literature on the theory and application of fractals has appeared . among many deterministic and statistical fractals , the sierpinski gasket is one of the earliest deterministic fractals ; it has provided a rich source for examples of fractal behavior . our initial physical motivation for this work lies in the use of the sierpinski gasket as a model for complex networks .
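as a concrete anchor for what follows , the short sketch below generates the classic ( bisection ) sierpinski gasket by recursive subdivision ; it reproduces only the fractal itself , not the network built on top of it , and the coordinates and the number of generations are illustrative choices rather than anything prescribed by the construction introduced below .

```python
def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def subdivide(triangle):
    # one generation of the classic (bisection) rule: keep the three corner
    # triangles and return the removed central, downward-pointing triangle
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    kept = [(a, ab, ca), (ab, b, bc), (ca, bc, c)]
    removed = (ab, bc, ca)
    return kept, removed

def sierpinski(generations):
    # returns, for each generation, the list of triangles removed in that generation
    active = [((0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2.0))]
    removed_per_generation = []
    for _ in range(generations):
        next_active, removed = [], []
        for tri in active:
            kept, central = subdivide(tri)
            next_active.extend(kept)
            removed.append(central)
        removed_per_generation.append(removed)
        active = next_active
    return removed_per_generation

for g, removed in enumerate(sierpinski(5), start=1):
    print("generation", g, ":", len(removed), "removed triangles")
# the counts 1, 3, 9, 27, 81 reflect the three self-copies created at every step
```

the removed triangles produced at each generation are the geometric objects that the network construction below associates with newly added nodes .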
in this letter , based on the well - known sierpinski family fractals, we introduce a class of deterministic networks , named sierpinski networks .we propose a minimal iterative algorithm for constructing the networks and studying their structural properties .the networks are maximal planar graphs , show scale - free distributions of degree and strength , exhibit small - world effect , and display power - law strength - degree correlation , which may provide valuable insights into the real - life systems .we first introduce a family of fractals , called sierpinski fractals , by generalizing the construction of the sierpinski gasket . the classic sierpinski gasket , shown in fig 1(a ) ,is constructed as follows .we start with an equilateral triangle , and we denote this initial configuration by generation . then in the first generation , the three sides of the equilateral triangle are bisected and the central triangle removed .this forms three copies of the original triangle , and the procedure is repeated indefinitely for all the new copies . in the limit of infinite generations , we obtain the well - known sierpinski gasket denoted by . another fractal based on the equilateral triangle can be obtained if we perform a trisection of the sides and remove the three down pointing triangles , to form six copies of the original triangle . continue this procedure in each copy recursively to form a gasket , denoted , shown in fig .indeed , this can be generalized to , for any positive integer with , by dividing the sides in , joining these points and removing all the downward pointing triangles .thus , we obtain a family of fractals ( sierpinski fractals ) , whose hausdorff dimension is /\log(\omega) ] .the clustering coefficient of the whole network is the average of over all nodes in the network . for our network ,the analytical expression of clustering coefficient for a single node with degree can be derived exactly .when a node enters the system , both and are 4 . in the following iterations ,each of its active triangles increases both and by 2 and 3 , respectively .thus , equals to for all nodes at all steps .so one can see that there exists a one - to - one correspondence between the degree of a node and its clustering . for a node of degree , we have }{k(k-1)}=\frac{4}{k}-\frac{1}{k-1}.\ ] ] in the limit of large , is inversely proportional to degree .the same scaling of has also been observed in several real - life networks .semilogarithmic plot of average clustering coefficient versus network order .,scaledwidth=45.0% ] using eq .( [ ck ] ) , we can obtain the clustering of the networks at step : ,\ ] ] where the sum runs over all the nodes and is the degree of the nodes created at step , which is given by eq .( [ ki ] ) . in the infinite network order limit ( ) , eq .( [ acc ] ) converges to a nonzero value , as shown in fig .[ clustering ] . 
therefore , the average clustering coefficient of the network is very high .shortest paths play an important role both in the transport and communication within a network and in the characterization of the internal structure of the network .we represent all the shortest path lengths of as a matrix in which the entry is the geodesic path from node to node , where geodesic path is one of the paths connecting two nodes with minimum length .the maximum value of is called the diameter of the network .a measure of the typical separation between two nodes in is given by the average path length , also known as characteristic path length , defined as the mean of geodesic lengths over all couples of nodes .average path length versus network order on a semilogarithmic scale .the solid line is a guide to the eye.,scaledwidth=45.0% ] in fig .[ distance ] , we report the dependence relation of apl on network size . from fig .[ distance ] , one can see that the average path length grows logarithmically with increasing size of the network .this logarithmic scaling of with network size , together with the large clustering coefficient obtained in the preceding subsection , shows that the considered graph has a small - world effect .strength usually represents resources or substances allocated to each node , such as wealth of individuals in financial contact networks , the number of passengers in airports in world - wide airport networks , the throughput of power stations in electric power grids , and so on . in our model , the strength of a node is defined as the area of the removed triangle it corresponds to . for uniformity ,let the initial three nodes born at step 0 have the same strength as those created at step 1 .we assume that the area of the initial equilateral triangle of the sierpinski gasket is . by the very construction of the network ,all simultaneously emerging nodes have the same strength , because their corresponding triangles have identical area .it is easy to find that each removed triangle covers the portion of one removed triangle in the preceding generation .after iterations , all nodes which are generated at a certain step have the strength : from which we have where is the area of a triangle removed at step .( [ ki ] ) and ( [ s02 ] ) yield a power - law correlation between strength and degree of a node : which implies when is large enough , .this nontrivial power - law scaling between strength of a node and its degree has been empirically observed in a variety of real networks , such as the airport networks , the shareholder networks , and the internet .analogously to computation of degree distribution , we one can find that the strength distribution is also scale - free with exponent as : as known to us all , for weighted networks with non - linear strength - degree correlation , if their distributions of degree and strength behave as power laws , and , then there is a general relation between and as : we have shown that in our model and . according to eq .( [ gammas02 ] ) , the exponent of strength distribution is , giving the same value as that obtained in the direct calculation of the strength distribution , see equation ( [ gammas01 ] ) .deterministic model makes it easier to gain a visual understanding of how do different nodes relate to each other forming complex networks . on the basis of sierpinski fractals ,we have proposed and studied a kind of deterministic networks . 
according to the network construction processes we have presented an algorithm to generate the networks , based on which we have obtained the analytical results for degree distribution , clustering coefficient , strength distribution , as well as strength - degree correlation .we have shown that the networks have three important properties : power - law distributions , small - world effect , and power - law strength - degree correlation , which are in good accordance with a variety of real - life networks .in addition , the networks are maximal planar graphs , which may be helpful for designing printed circuits .although we have studied only a particular network , in a similar way , one can easily investigate other sierpinski networks with various values of and , and their general properties such as small - world effect and power - law strength - degree relation are similar .moreover , using the idea presented , one can also establish random networks , which display similar features as their deterministic counterparts studied here .as the classic sierpinski gaskets are important for the understanding of geometrical fractals in real systems , we believe that our research could be useful in the understanding and modeling of real - world networks .this research was supported by the national natural science foundation of china under grant nos .60496327 , 60573183 , and 90612007 , the postdoctoral science foundation of china under grant no . 20060400162 , and the program for new century excellent talents in university of china ( ncet-06 - 0376 ) .zhang also acknowledges the support from the huawei foundation of science and technology .hambly , probab . theory related fields * 94 * , 1 ( 1992 ) .s. hutchinson , indiana univ .math . j. * 30 * , 713 ( 1981 ) .west , _ introduction to graph theory _( prentice - hall , upper saddle river , nj , 2001 ) .
|
many real networks share three generic properties : they are scale - free , display a small - world effect , and show a power - law strength - degree correlation . in this paper , we propose a type of deterministically growing networks called sierpinski networks , which are induced by the famous sierpinski fractals and constructed in a simple iterative way . we derive analytical expressions for degree distribution , strength distribution , clustering coefficient , and strength - degree correlation , which agree well with the characterizations of various real - life networks . moreover , we show that the introduced sierpinski networks are maximal planar graphs .
|
in recent years photon - counting detectors have come into operation in , for example , the ultraviolet / optical telescope ( uvot ; ) on the _ swift _ gamma - ray burst satellite , and the _ xmm _ optical monitor ( om ; ) . the mic detectors used in these instruments have been discussed by . these photon - counting detectors operate as follows : incoming photons excite electrons on a photo - cathode . the electrons are amplified by a stack of microchannel plates and then the amplified electron signal is converted back to a light - pulse using a phosphor screen . below this , a fibre bundle directs the light to a fast - scanning , frame - transfer ccd . after each frame is read out , the resulting charge events in the ccd are centroided by the on - board electronics . at high incident fluxes , a photon - counting detector is limited due to coincident photon arrivals in a single read - out of the detector . this represents a clear difference between the photon - counting technique and measurements made by direct illumination of a ccd , which can handle large fluxes , but has a higher background . normally , when measuring the number of counts arriving in a certain time interval , little further thought is given to the statistics of such a measurement , which were worked out long ago by . indeed , photon - counting instrumentation , like photo - multiplier tubes , is usually seen as an exemplary case of poisson statistics . however , due to the instrumental limitations imposed by centroiding and event - detection of the mic detectors , no more than a single event recording per pixel is possible in the smallest timeslice of measurement . this handicap prevents the full distribution of photon arrivals from being sampled , and thus the measurements are not poissonian , even though the incoming photons follow a poissonian distribution . as a result , the errors on the photometry from the uvot and om do not follow poisson statistics . for each observation , however , one can derive the measurement statistics , which we show in section [ sec2 ] to follow a binomial distribution , and relate them to the poisson distribution of the incident photons . based on the measured distribution and the functional relation that it has to the incident poisson distribution , we derive the errors in the measurement and in the inferred incident photon count rate in section [ sec2 ] . this paper aims at providing the users of the uvot , om , and similar instruments with a proper way to estimate the errors in their photometry . for the detectors of interest , an exposure will last for a certain time period and consist of time - slices usually called ` frames ' . exposing and reading out each frame takes a certain fixed time , called the frame - time . since during read - out of the detector no incoming photons are detected , a fraction , called the dead - time , needs to be accounted for when determining the count rate . in the following , we will use variables for the total observation . for example , observed counts refer to all observed counts during the observation . this simplifies the treatment of the errors somewhat , and conversion to commonly used count rates and their errors is quite straightforward . now consider a single pixel .
during an exposure measurements are taken from that pixel , measuring either 0 or 1 count per frame , since coincident counts are recorded as a single event . it is here that the difference with a poissonian measurement comes in , since multiple detections in a single frame count only as one . we can use that fact to relate the probability of observing 1 or 0 photons to the fact that the incoming photons follow a poisson distribution . the poisson probability that incoming photons fall on one frame is a function of the mean incident counts per frame : the first two moments of the poisson distribution are and . the effective exposure time is less than the elapsed time due to the dead time . therefore , the mean number of incoming photons during the observation relates to the mean probability of measurement as , where ) has been introduced for notational convenience . the measured number of photons in frames , considering that for only one photon is counted , is using the equations above , this can be written as this functionally relates the incoming counts to the measured counts , and was originally derived by . we first show that the incident poisson distribution leads to an observed binomial distribution due to the coincidence - loss in the measurements , and then discuss the calculation of the measurement errors . if we had an instrument that were able to record the incoming photon distribution , the probability of recording incident photons in frames would be given by the poisson distribution . in actuality , not more than one photon can be measured per frame , so the distribution becomes modified in that term . therefore , the probability of recording incident photons in frames is given by : where reduces to measured photons , since for each frame where , only one count is recorded . substituting for , using equation [ eq1 ] , and defining for convenience , we can rewrite this as : which is indeed a binomial distribution . that means that the observed counts are governed by a binomial distribution , and that errors need to be accounted for accordingly . the observed error in the mean number of counts in the observation for the binomially distributed measured counts will be determined by the binomial error using the observed error , the incident photon count rate error can be derived using the non - linear equation [ eq1 ] , because the relation has a 1 - 1 correspondence . subtracting the mean count rate from the count rate with a error added or subtracted , we obtain the following expression relating the upper and lower error in the incident counts to the error in the observed counts : for the highest incoming photon fluxes , the upper error becomes larger than the lower error , but for frame rates less than 0.9 , the error is in a linear regime and they are nearly equal in absolute size . ( figure caption : for comparison , the error in the poisson limit has been plotted also ; the assumed number of frames for the error computation was 4 000 . ) for a point source with a certain count rate , the incoming counts will fluctuate in a poissonian sense around the mean . as discussed in section [ sec2 ] , the measured counts are binomial . because of this , the counts above the mean will be mapped into a smaller range of observed count rate than those below the mean , which is ultimately due to the coincidence - loss .
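the binomial character of the measurement , and its consequence for the inferred incident counts , can be checked with a quick monte carlo experiment . the sketch below is not taken from this paper : it ignores the dead - time factor , uses illustrative values for the number of frames and the incident rate , and inverts the single - pixel coincidence - loss relation in the simplified form obtained when the dead - time correction is set to one .

```python
import numpy as np

rng = np.random.default_rng(1)

n_frames = 1000        # frames per exposure (illustrative, not an instrument value)
mu = 1.0               # mean incident counts per frame (illustrative)
n_exposures = 5000     # monte carlo repetitions

# incident photons per frame are poisson; the detector records at most one per frame
incident = rng.poisson(mu, size=(n_exposures, n_frames))
measured = np.minimum(incident, 1).sum(axis=1)

# the measured totals should be binomial(n_frames, p) with p = 1 - exp(-mu)
p = 1.0 - np.exp(-mu)
print("measured mean :", measured.mean(), "  binomial prediction :", n_frames * p)
print("measured std  :", measured.std(ddof=1),
      "  binomial prediction :", np.sqrt(n_frames * p * (1.0 - p)))

# invert the single-pixel relation with the dead-time factor set to one
inferred = -n_frames * np.log(1.0 - measured / n_frames)
print("inferred incident counts, mean  :", inferred.mean(), "  true :", n_frames * mu)
print("scatter of inferred counts      :", inferred.std(ddof=1))
print("poisson error of incident counts:", np.sqrt(n_frames * mu))
```

the simulated totals match the binomial mean and width , and the scatter of the inferred incident counts comes out larger than the poisson error of the incident counts ; this broadening of the inferred distribution relative to the incident one is the asymmetric mapping effect discussed next .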
in this sense , the width of the distribution , as defined by is not an equal measure for the area under the distribution above and below the mean . we therefore need to be careful when interpreting the standard deviation derived here , especially for high values of the observed counts per frame . there is a certain inherent width in the distribution of incoming counts which results in poissonian variation around the mean , usually expressed as the poisson error . the question is how that error relates to the final error in the measurement . in the limit of a small number of counts per frame , they become equal . for larger numbers of counts per frame they diverge , and the measurement error , after being mapped back to the uncertainty range in the incoming count rate , becomes dominant . since the magnitude of this effect is not very apparent from the theory above , an example has been prepared in figure [ fig2 ] . for simplicity , the number of observed counts has been set at for frames . the dead - time is assumed to give . using the equations above , the incident rate is then , with an associated poisson error of 181 counts . in the figure we place the incident counts and their error on the top horizontal line . if we map the incident counts at to the measured values , they come out to be 7 counts above and below the mean observed counts . the binomial error on the observed counts , however , is 20 , much larger than what would be expected from the mapped - back incident distribution . mapping the measured counts at back to the incoming counts , it is readily seen that these have a much larger spread than the incoming distribution . this effect becomes smaller for lower ratios of . please note that the values we chose for our example have a high coincidence - loss , which makes these effects more discernible . confidence levels measure what percentage of the distribution of the measured quantity falls within certain limits . in a way they are more useful than the standard deviation in the presence of asymmetries , because they provide information on the reliability of the measurement . it is well known how to determine confidence levels for the measured count rate , because it follows the well - known binomial distribution . however , the values reported are the incident count rates , which bear a non - linear relation to the measured ones . likewise , a certain confidence level in the measured count rate will not imply the same level in the incident count rate , precisely because of the asymmetry mentioned above . the effect is largest at the highest count rates , where we showed by example above that the measured distribution is much broader than the incident ( poisson ) distribution . as a result , at high counts per frame , the uncertainties in the measured count rate dominate those in the derived incident count rate . also , in the limit of a low number of counts per frame , the binomial confidence levels on the measured counts will approach the confidence levels of the poisson - distributed incident counts , because the distributions are identical in the low limit . the coincidence - loss correction at the limit of low counts is also negligible . this suggests that using the confidence limits for the measured binomial counts will be a good approximation for the confidence limits on the derived incident count rate .
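the mapping just described is easy to carry out numerically . the short sketch below again uses the simplified single - pixel relation with the dead - time factor set to one , and illustrative numbers rather than those of the worked example above ; it propagates the binomial error on the measured counts through the inverse relation and compares the resulting asymmetric errors with the poisson error of the incident counts .

```python
import numpy as np

def incident_counts(measured, n_frames):
    # invert the simplified single-pixel relation (dead-time factor set to one)
    return -n_frames * np.log(1.0 - measured / n_frames)

n_frames = 4000.0      # illustrative values, not those of the worked example
measured = 3600.0      # observed counts in the exposure, i.e. 0.9 counts per frame

frac = measured / n_frames
sigma_meas = np.sqrt(n_frames * frac * (1.0 - frac))  # binomial error on the measurement

c0 = incident_counts(measured, n_frames)
upper = incident_counts(measured + sigma_meas, n_frames) - c0
lower = c0 - incident_counts(measured - sigma_meas, n_frames)

print("inferred incident counts :", c0)
print("poisson error sqrt(c0)   :", np.sqrt(c0))
print("upper / lower 1-sigma    :", upper, "/", lower)
```

at a measured rate of 0.9 counts per frame the upper error already exceeds the lower one , and both are roughly twice the poisson error of the incident counts , in line with the discussion above .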
in general , for low count rates the effects from coincidence - loss are negligible .this is especially true for the background .however , it was found that in some uvot observations a correction for coincidence - loss to the background was necessary and had an impact on the net source rates derived .since the background is diffuse in nature , the arguments brought forward for considering the coincidence - loss in diffuse situations by need to be taken into account .they discussed this case in terms of the coincidence - loss area over which coincidence - loss acts and the exposure area .their equation reverts to the single pixel case for the background .it is therefore important to realize that the expressions above , which were derived in the single - pixel approximation , need to be applied with caution to the background . if the measurement background area covers more than one ccd pixel , a normalization to the coincidence area , which is presumably one ccd pixel , needs to be made to apply the formulas above .for example , if a physical pixel has 8x8 subpixels , the normalisation is as follows .if background counts were measured from a region of subpixels , larger than 64 subpixels , then the coincidence - loss correction for the background should be based on counts . in practice ,the correction is not as firmly known as that because the centroiding may make the coincidence area larger or smaller .the uvot ftools software uses 78 subpixels which was chosen because that is close to the theoretical value and also the pixel - area that was used to derive the empirical coincidence - loss correction ( see [ sec3.4 ] ) . the coincidence - loss formula under the single - pixel approximation has been very successful in predicting the correct rates in the uvot . other support for the use of the single - pixel - approximation to calculate the coincidence - loss effect on the observed count rate comes from studies during the construction of the detectors , and the implementation of the centroiding .the measurement algorithm locates the centre of the photon splash , which generally falls across 2 - 3 ccd pixels , and has an accuracy of a small fraction of a ccd pixel , ( allowing recording of uvot and om data with an accuracy of 1/8th of the physical ccd pixel size . )anomalies are rejected using four out of nine ccd pixels . as a result, the action of coincident photons is distributed over several pixels on the detector and are also folded through a screening algorithm .the net effect turns out to be a strengthening of the single pixel approximation , although the exact size of the coincidence - loss region , and its relation to the physical ccd pixels , is still under study .were the detections really independent single - pixel measurements , then it is easy to show , that photon splashes which would fall in different ways over pixel - boundaries would reduce the effects of coincidence - loss by 10 - 20% at high count rates . in reality ,a small upwards empirical correction of the order of 6% is found to be needed to the theoretical single - pixel - rate in the uvot and om , which is perhaps due to loss of some measurements of truly coincident , but slightly displaced , photons . those could distort the symmetry of the electron splash on the detector suffiently to be screened out as bad data . 
in the original formulation of the coincidence - loss correction , the effects of the detector dead - time in each frame were discussed but were not explicitly included in the coincidence - loss correction equation . as a result , early corrections for the coincidence - loss did not include this term . since the current formulation includes this term , no further correction for dead - time is needed after application of equation [ eq1 ] . currently , most astronomical photometry software , like iraf and daophot , may incorrectly report the error for measurements like these , because generally the assumption is made that the photometric measurements are dominated by poisson noise . that is considered a good assumption for photo - multiplier and normal ccd measurements . as we show in figure [ fig2 ] , the poisson measurement error underestimates the error in these photon - counting instruments affected by coincidence - loss . we have shown in this paper how to derive the error in measurements made with photon - counting detectors of the type used in the _ swift _ uvot and _ xmm _ om instruments . by comparing to the poisson error usually used in photometry , we make clear how significant this effect can be , and consider that users of these instruments must use our formalism to derive the errors in their measurements . we benefitted from stimulating discussions with alice breeveld , tracey poole , wayne landsman , chris brindle , keith mason , antonio talavera , and vladimir yershov during the development of these ideas . we thank patricia schady for comments on an early version of this paper . support of this work was through the swift operations at ucl - mssl through a grant from the uk science and facilities council .
|
the probability of photon measurement in some photon counting instrumentation , such as the optical monitor on the xmm - newton satellite and the uvot on the swift satellite , does not follow a poisson distribution due to the detector characteristics , but a binomial distribution . for a single - pixel approximation , an expression was derived for the incident count rate as a function of the measured count rate by . we show that the measured count rate error is binomial , and extend their formalism to derive the error in the incident count rate . the error on the incident count rate at large count rates is larger than the poisson error of the incident count rate . _ keywords : _ instrumentation : detectors ; methods : statistical ; techniques : photometric ; methods : data analysis
|
general statistical properties of deterministic expanding maps of the interval with a neutral fixed point are by now well understood . in pianigiani proved existence of invariant densities of such maps . in was independently proved that such maps exhibit a polynomial rate of correlation decay .later gouzel showed the rate obtained in is in fact sharp .the slow mixing behaviour of such maps made them a useful testing ground for physical problems with intermittent behaviour : systems whose orbits spend very long time in a certain small part of the phase space . in this paperwe are interested in studying i.i.d .randomized compositions of two intermittent maps sharing a common indifferent fixed point .it is intuitively clear that the annealed and theorem [ main ] .this should be contrasted with the notion of _ quenched dynamics _ , the behaviour of the system with one random choice of the randomizing sequence .the term _ almost sure dynamics _ is also used to refer to quenched dynamics . ]dynamics of the random process will also have a polynomial rate of correlation decay .however , we are interested in the following question : how do the asymptotics of the random map relate to those of the original maps ; in particular , the rate of correlation decay ?we show that the map with the fast relaxation rate dominates the asymptotics ( see theorem [ main ] for a precise statement ) .interestingly , in our setting , the map with slow relaxation rate is allowed to be of ` boundary - type ' , and consequently admit an infinite ( -finite ) invariant measure , but the random system will always admit an absolutely continuous invariant _probability _ measure .we obtain our result by using a version of the skew product representation for more details . ]studied in and a young - tower technique . in section 2we introduce our random system and its skew product representation .the statement of our main result theorem [ main ] is also in section 2 . in section 3we build a young - tower for the skew product representation .proofs , including the proof of theorem [ main ] , are in section 4 .let be the measure space , with , \mathfrak{b}(i) ] which will be useful in the construction of a suitable young tower .the points lie in ] , defined by that is , are preimages of in ] .let be the first return time function and be the return map . is referred to as the base of the tower which is given by let be the map acting on the tower as follows : we refer to as the level of the tower . for , set ] .observe that every point in will return to ] or }^{-1} ] in equation ( [ def_backorbit ] ) .denote these non - random iterates by and respectively .it is immediate from lemma [ lem_domination ] that for every .furthermore , it is well - known that with similar estimates for the parameter .( see , for example , estimates at the beginning of section 6.2 of . )suppose , to the contrary , for some .note that if for all then , contradicting our assumption .let be smallest integer such that .then since is increasing and here we have invoked corollary [ cor_domination ] . iterating this argument for each index where gives which is again a contradiction .a similar argument shows for all . by an argument similar to the proof of lemma [ lem_rough_estimates ] , using compared to the identity map we have on the other hand , comparing to the identity map and applying lemma [ lem_domination ] gives pick any , fix and let .there are many standard large deviation estimates for i.i.d . 
random variables that will ensure that _ most _ encounter at least instances of in their first iterates .as we are aiming for exponential decay in the tail estimate , we invoke a classical result due to hoeffding that works especially well for our case of bernoulli random variables .it is precisely at this point that we avoid generating an upper bound constraint on as was the case in gouzel .if instead we were to use the more general estimates from the well - known berry - essen theorem ( e.g. theorem 1 , section xvi.5 in ) , for example , we would obtain power law decay in the tail leading to the requirement in order to complete the proof .let count the number of times the value occurs in the first iterates .observe that in theorem 1 of let and let .then the bottom probability in equation ( [ eqn_hoeffding ] ) equals the exponential estimate now follows from ( 2.3 ) in theorem 1 of .* for fixed , with as above , let .set lemma [ lem_hoeffding ] estimates ] then therefore , and the result follows by induction on recall the schwarzian derivative of a function is given by : it is also well known that the schwarzian derivative of the composition of two functions satisfies consequently , schwarzian derivative of the composition is negative if both functions have negative schwarzian derivatives .let denote the composition of the left branches of and the right branch of .notice that on we have since , we have for the left branch , and ; in particular , if and only if . thus , for each let ] this means contains a -scaled neighborhood of with constant therefore , by koebe principle there exists a constant such that and consequently , it follows that hence , , which completes the proof. let then they have same realization for using this fact and for , we have : for any by using lemma [ distcoeff ] , lemma [ distlog ] and the following inequality : we obtain 10 ayyer , a. , liverani , c. , stenlund , m. _ quenched clt for random toral autormorphisms_. discr . andsyst .. 24 # 2 ( 2009 ) , 331348 .bahsoun , w. , bose , c. , quas , a. , _ deterministic representation for position dependent random maps_. discrete contin .( 2008 ) , 529540 .feller , w. , _ introduction to probability theory and its applications , vol 2_. john wiley and sons ( 1971 ) .gouzel , s. , _ sharp polynomial estimates for the decay of correlations_. israel j. math . 139( 2004 ) , 2965 .gouzel , s. , _ statistical properties of a skew product with a curve of neutral points_. ergodic theory dynam .systems 27 ( 2007 ) , 123151 .hoeffding , w. , _ probability inequalities for sums of bounded random variables _ j. amer .58 # 301 ( 1963 ) , 1330 .hu , h. , _ decay of correlations for piecewise smooth maps with indifferent fixed points_. ergodic theory dynam .systems 24 ( 2004 ) , 495524 .liverani , c. , saussol , b. and vaienti , s. , _ a probabilistic approach to intermittency _ , ergodic theory dynam .systems 19 ( 1999 ) , 671685 .de melo , w. , van strien , s. _ one - dimensional dynamics_. springer - verlag , berlin , 1993 .melbourne , i. , terhesiu , d. _ operator renewal theory and mixing rates for dynamical systems with infinite measure_. invent .math . 189 ( 2012 ) , no .1 , 61110 .pne , f. , _ averaging method for differential equations perturbed by dynamical systems_. esaim probab. statist . 6 ( 2002 ) , 3388 .pianigiani , g. .israel j. math .35 ( 1980 ) , 3248 .thaler , m. , .israel j. math ., 37 ( 1980 ) , 303314 .young , l - s ., _ recurrence times and rates of mixing_. israel j. math ., 110 ( 1999 ) , 153188 .
|
we study a class of random transformations built over finitely many intermittent maps sharing a _ common _ indifferent fixed point . using a young - tower technique , we show that the map with the fastest relaxation rate dominates the asymptotics . in particular , we prove that the rate of correlation decay for the annealed dynamics of the random map is the same as the _ sharp rate _ of correlation decay for the map with the fastest relaxation rate .
|
the existing contemporary communications systems can be abstractly characterized by the conceptual seven - layer open systems interconnection model .the lowest ( or first ) layer , known as the _ physical layer _ , aims to describe the communication process over an actual physical medium .due to the increasing demand for flexibility , information exchange nowadays often occurs via antennas at the transmitting and receiving end of a wireless medium , _e.g. _ , using mobile phones or tablets for data transmission and reception .an electromagnetic signal transmitted over a wireless channel is prone to interference , fading , and environmental effects caused by , _e.g. _ , surrounding buildings , trees , and vehicles , making reliable wireless communications a challenging technological problem . with the advances in communications engineering , it was soon noticed that increasing the number of spatially separated antennas at both ends of a wireless channel , as well as adding redundancy by repeatedly transmitting the same information encoded over multiple time instances , can dramatically improve the transmission quality .a code representing both diversity over time and space is thus called a _ space time code_. let us assume and antennas at the transmitting and receiving end of the channel respectively , as well as consecutive time instances for transmission .if , the channel is called _ symmetric _ , and otherwise asymmetric , which more precisely typically refers to the case . for the time being , a space time code will just be a finite collection of complex matrices in .the well known channel equation in this multiple - input multiple - output ( mimo ) setting takes the form where and are the received and transmitted codeword matrices , respectively . in the above equation ,fading is usually modeled as a rayleigh distributed random process and represented by the random complex _channel matrix _ , and additive noise is modeled by the _ noise matrix _ , whose entries are independent , identically distributed complex gaussian random variables with zero mean .the main object in is the _ space time code matrix _ , a complex matrix which captures the data to be transmitted across multiple antennas .namely , the complex number is transmitted from the antenna in the channel use .let us briefly discuss what constitutes a `` good '' code . consider a space time code , and let be code matrices ranging over .two basic design criteria can be derived in order to minimize the probability of error . * _ diversity gain criterion : _ to be able to distinguish between two different codewords , we should first maximize the minimum rank of the difference of pairwise distinct matrices .a space time code achieving the maximal minimum rank is called a _ full - diversity _ code . * _ coding gain criterion : _ if we assume a full - diversity code , then the pairwise decoding error probability of a codeword being confused with another one , _ i.e. _ , is recovered when was transmitted , can be asymptotically upper bounded by where describes the quality of the channel is related to the so - called _ signal - to - noise ratio ( snr ) _ ,for more information see . `asymptotically ' above means that we assume is relatively large , that is , the signal is of good quality .this is a standard assumption in code design . ] .thus , should be as big as possible . 
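both criteria can be checked mechanically for any finite codebook by running over the pairwise differences of its codewords . the sketch below does this for a small toy codebook of alamouti - type matrices over a 4 - qam alphabet ; this codebook is purely illustrative and is not one of the algebraic constructions considered later in the paper .

```python
import itertools
import numpy as np

def alamouti(s1, s2):
    # 2 x 2 alamouti-type codeword built from two complex information symbols
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

# a toy codebook: both symbols drawn from a 4-qam (qpsk) alphabet
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
codebook = [alamouti(a, b) for a in qpsk for b in qpsk]

min_rank = min(np.linalg.matrix_rank(x - y)
               for x, y in itertools.combinations(codebook, 2))
min_det = min(abs(np.linalg.det(x - y))
              for x, y in itertools.combinations(codebook, 2))

print("codebook size          :", len(codebook))
print("minimum rank of diffs  :", min_rank)   # 2, so the toy code has full diversity
print("minimum |det| of diffs :", min_det)    # stays away from zero for this codebook
```

for this toy codebook the minimum rank is 2 and the minimum determinant modulus is bounded away from zero , so both criteria are met ; the algebraic machinery introduced below is what guarantees the same behavior for much larger code families built from division algebras .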
if we let the size of the code grow , , and still have , the space time code is said to have the _ nonvanishing determinant _ property .in other words , a nonvanishing determinant guarantees that the minimum determinant is bounded from below by a positive constant even in the limit , and hence the error probability will not blow up when increasing the code size .consequently , a good space time code should ideally be composed of full - rank matrices with large , nonvanishing minimum determinants .`` large '' here is a relative notion and to make it sensible we need some kind of a normalization .we will come back to this in section [ sec : sec2 ] . in 2003 ,the usefulness of central simple algebras to construct space time codes meeting both of the above criteria was established in ; especially ( cyclic ) division algebras , for which the property of being division immediately implies full diversity .thereupon the construction of space time codes started to rely on cleverly designed algebraic structures .the initially considered cyclic division algebras were however constructed using transcendental elements , which in turn resulted in codes with vanishing minimum determinants .later on , it was shown in that in a cyclic division algebra based code , the nonvanishing determinant property is enough to achieve the optimal trade - off between diversity and multiplexing ; it was also proved that achieving the nonvanishing determinant property of a cyclic - division - algebra - based code can be ensured by restricting the entries of the codewords to certain subrings of the algebra alongside with a smart choice for the base field .further investigation carried out in showed that codes constructed from orders , in particular _maximal orders _ , of cyclic division algebras actually outperform codes that had been considered unbeatable .the improvement in performance , however , came with the price of somewhat higher complexity in encoding and decoding , as well as more problematic bit labeling of the codewords . as is well known , a maximal order of a cyclic division algebra is not necessarily unique . as a consequence , carrying out the explicit computations required for the purpose of space time coding is a very challenging task .for this reason , and as a compromise for reducing the complexity of communication while still guaranteeing good performance , the use of _ natural orders _ the main objects of our work is often preferred instead .however , the current explicit constructions are typically limited to the symmetric case , while the asymmetric case remains largely open . the main goal of this paper is to fill this gap in the construction of explicit optimal ( with certain given assumptions , in our case the order being natural ) asymmetric space time codes .in section 2 we will shortly introduce mimo space time coding and the construction of space time codes using representations of orders in central simple algebras .section 3 contains the main results of this article .we will consider the most interesting asymmetric mimo channel setups and fix or as the base field to guarantee the nonvanishing determinant property matches with the quadrature amplitude modulation ( qam ) commonly used in engineering . 
] .for each such setup , we will find an explicit field extension and an explicit -central cyclic division algebra with coefficients in , such that the norm of the discriminant of its natural order is minimal .this will translate into the largest possible determinant ( see for the proof ) and thus provide us with the maximal coding gain one can achieve by using a natural order .from now on , and for the sake of simplicity , we set the number of transmitting antennas equal to the number of time slots used for transmission .thus , the considered codewords will be square matrices .very simplistically defined , a space time code is a finite set of complex matrices .however , in order to avoid accumulation points at the receiver , in practical implementations it is convenient to impose an additional discrete structure on the code , such as a lattice structure .consequently , we define a _ space time code _ to be a finite subset of a _ lattice _ in .we recall that a _ full _ lattice is a lattice with .we call a space - time lattice code _ symmetric _ , if its underlying lattice is full . otherwise it is called _ _it is not difficult to see that , given a lattice and , this implies that any lattice satisfying the nonvanishing determinant property can be scaled so that achieves any wanted nonzero value .consequently , in order to be able to compare different lattices for the purpose of space time coding , we will need some kind of a normalization . to this end , let form a basis of a full lattice with volume , and consider its gram matrix {1 \le i , j \le 2n^2}.\ ] ] we have .* the _ normalized minimum determinant _ of is the minimum determinant of after scaling it to have a unit size fundamental parallelotope , that is , * the _ normalized density _ of is we get the immediate relation , guaranteeing that in order to maximize the coding gain it suffices to maximize the density of the lattice . maximizing the density , for its part ,translates into a certain * discriminant minimization problem * .this observation is crucial and will be the main motivation behind our work in section 3 .we aim at constructing space time codes from lattices within central simple division algebras defined over number fields .we recall that a finite dimensional algebra over a number field is an -_central simple algebra _ ,if its center is precisely and it has no nontrivial ideals .an algebra is said to be _ division _ if all of its nonzero elements have a multiplicative inverse .as we shall soon see , as long as the underlying algebraic structure of a space time code is a division algebra , the full - diversity property of the code will be guaranteed .it turns out that if is an algebraic number field , then every -central simple algebra is of a certain special type known as _ cyclic algebras _* thm . 32.20 ) . throughout the paper, we will denote the relative field norm map of by and the absolute norm map shortly by .when considering orders , we may specify this by writing although the map naturally remains the same .let be a cyclic extension with the galois group .we fix an element and consider the -central simple algebra as a right -vector space with left multiplication defined by for all , and .the algebra is referred to as a _cyclic algebra _ of _ index _ .a necessary and sufficient condition that is a norm in is . ]3.5 ) for a cyclic algebra of index to be division is to have for all prime factors of .consequently , we refer to an element satisfying this condition as a _ non - norm element_. 
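to make this setup concrete before turning to lattices , the sketch below writes down numerically the left - multiplication matrix of an element of an index - 2 cyclic algebra with center q(i) and maximal subfield q(i, sqrt(5)) , using the classical golden - code choice of non - norm element ; the particular matrix convention , the sublattice used for the coefficients , and the random sampling are illustrative assumptions , and the formal maximal representation used for code construction is defined later in this section .

```python
import numpy as np

SQRT5 = np.sqrt(5.0)
GAMMA = 1j                    # the classical golden-code choice of non-norm element

def embed(a, b):
    # numerical embedding of x = a + b*sqrt(5) with a, b in q(i)
    return a + b * SQRT5

def sigma(a, b):
    # the nontrivial automorphism of q(i, sqrt(5)) over q(i): sqrt(5) -> -sqrt(5)
    return a - b * SQRT5

def codeword(x0, x1):
    # left-multiplication matrix of x0 + e*x1 (with e^2 = gamma), written in one
    # common convention; transposed layouts also appear in the literature
    a0, b0 = x0
    a1, b1 = x1
    return np.array([[embed(a0, b0), GAMMA * sigma(a1, b1)],
                     [embed(a1, b1), sigma(a0, b0)]])

rng = np.random.default_rng(2)
for _ in range(5):
    # integer coordinates in the basis {1, sqrt(5)} over z[i]; this is only an
    # illustrative sublattice of the ring of integers of the maximal subfield
    c = rng.integers(-2, 3, size=8)
    x0 = (c[0] + 1j * c[1], c[2] + 1j * c[3])
    x1 = (c[4] + 1j * c[5], c[6] + 1j * c[7])
    X = codeword(x0, x1)
    if np.allclose(X, 0):
        continue
    # the determinant is the reduced norm; it is nonzero for every nonzero element
    # because gamma is a non-norm element of the extension
    print("|det| =", abs(np.linalg.det(X)))
```

the printed determinants stay nonzero precisely because the chosen element is not a relative norm , which is the division property exploited throughout the constructions below .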
the obvious choice of lattices in will be its orders .we recall that if is a dedekind ring , an _-order _ in is a subring which shares the same identity as , is a finitely generated -module , and generates as a linear space over .of special interest is the _ natural order _ of , the -module if the element fails to be an algebraic integer , then will not be closed under multiplication .consequently , we will always choose such that while the ring of algebraic integers is the unique maximal order in an algebraic number field , an -central division algebra may contain several maximal orders .they all share the same _ discriminant over _ , known as the discriminant of the algebra .we recall that given a dedekind ring and an -order with basis over , the -discriminant of is the ideal in generated by where denotes the reduced trace ( cf . ) . given two -orders , , it is clear that if , then divides ) .consequently , for every -order in , and the ideal norm is the smallest possible among all -orders of .the constructions that we will derive in section 3 will rely on some key properties of cyclic division algebras and their orders that we will next present as lemmata .10.1 ) let be any order in a cyclic division algebra .then , for any nonzero element , its reduced norm and reduced trace ( cf . ) are nonzero elements of the ring of integers .5.4 ) [ pro : disc ] let be a cyclic division algebra of index with a non - norm element .we have hence , if , then ( * ? ? ? * thm .6.12 ) [ pro : bound ] assume that is a number field and that and are a pair of norm - wise smallest prime ideals in . if we do not allow ramification on infinite primes , then the smallest possible discriminant of all central division algebras over of index is let now be a cyclic division algebra of index .we fix compatible embeddings of and into , and identify and with their images under these embeddings . to construct matrices to serve as codewords ,we consider as an -dimensional right vector space .the -linear transformation of given by left multiplication by an element results in an -algebra homomorphism to which we refer to as the _ maximal representation_. an element can be identified via the maximal representation with the matrix the determinant and trace define the reduced norm and reduced trace of , respectively .next , given a lattice in , we may use the maximal representation to define an injective map .any finite subset of or its transpose will be a space time lattice code . in the literature ,cyclic - division - algebra based space time codes are often referred to as _ algebraic space time codes_. due to the division algebra structure the above matrices will be invertible by definition , and hence any algebraic space - time code constructed in this way will have full diversity .[ exp : golden ] let be a quadratic real extension of number fields , with of class number 1 , and ] , = n ] , and with galois groups and .we fix a non - norm element , and consider the cyclic division algebra . 
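Before the block-diagonal construction that follows, the maximal representation can be made concrete with a short numerical sketch. It uses the well-known Golden-code parameters F = Q(i), L = Q(i, sqrt(5)), sigma(sqrt(5)) = -sqrt(5) and gamma = i as a stand-in (these are not the fields of Section 3), and the fact that gamma = i is a non-norm element of this extension is the standard Golden-code result, only spot-checked numerically here through the nonvanishing of the reduced norm.

```python
import numpy as np

rng = np.random.default_rng(0)
SQRT5 = np.sqrt(5.0)
theta, theta_c = (1 + SQRT5) / 2, (1 - SQRT5) / 2   # theta and its Galois conjugate
gamma = 1j                                           # assumed non-norm element of L/F

def embed(z, conj=False):
    """Embed a + b*theta (a, b Gaussian integers) into C, optionally through sigma."""
    a, b = z
    return a + b * (theta_c if conj else theta)

def codeword(x0, x1):
    """Left-regular ('maximal') representation of x = x0 + u*x1, with u^2 = gamma and
    lam*u = u*sigma(lam): rho(x) = [[x0, gamma*sigma(x1)], [x1, sigma(x0)]].
    (One common convention; some references use the transpose.)"""
    return np.array([[embed(x0), gamma * embed(x1, conj=True)],
                     [embed(x1), embed(x0, conj=True)]])

# Draw elements of the natural order (coefficients a, b in Z[i]) and check that the
# reduced norm det(rho(x)) is a nonzero Gaussian integer, i.e. |det| >= 1.
for _ in range(5):
    c = rng.integers(-3, 4, size=8)
    if not c.any():
        continue
    x0 = (c[0] + 1j * c[1], c[2] + 1j * c[3])
    x1 = (c[4] + 1j * c[5], c[6] + 1j * c[7])
    d = np.linalg.det(codeword(x0, x1))
    print(f"det = {d.real:+.2f}{d.imag:+.2f}j,  |det| = {abs(d):.2f}")
```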
given any order in , we identify each element with its maximal representation and construct the following infinite block - diagonal lattice achieving the nonvanishing determinant property , provided that the base field is either or quadratic imaginary : the _ code rate _ of a space time code carved out from the infinite block - diagonal lattice above , that is , the ratio of the number of transmitted independent ( complex ) information symbols ( = dimensions ) to the number of channel uses , is * , if the base field is quadratic imaginary .* , if the base field is .we point out that is the maximum code rate that allows for avoiding accumulation points at the receiving end with receive antennas ; see the footnote below . in summary , in order to construct an algebraic space time code , we first choose a central simple algebra over a suitable base field and then look for a dense lattice in it .this amounts to selecting an adequate order in the algebra .as has already been mentioned , orders with small discriminants are optimal for the applications in space time coding which makes maximal orders the obvious candidates .unfortunately , they are in general very difficult to compute and may result in highly skewed lattices making the bit labeling a delicate problem on its own .therefore , _ natural orders _ with a simpler structure have become a more frequent choice as they provide a good compromise between the two common extremes : using maximal orders to optimize coding gain , on the one hand , and restricting to orthogonal lattices to simplify bit labeling , encoding , and decoding , on the other hand .in this section we will consider the tower of extensions depicted in figure [ fig : tower ] . in order to get the nonvanishing determinant property , the base field chosen to be either the rationals or the imaginary quadratic extension .let us now fix the base field and the extension degrees and . our goal will be to find an explicit field extension and a non - norm element such that is a cyclic extension , is a cyclic division algebra , and the norm of the discriminant is the minimum possible among all cyclic division algebras satisfying the fixed conditions .our findings are summarized in the following table and will be proved , row by row , in the subsequent theorems .here denotes a root of the polynomial .[ thm : res1 ] let , square - free .any cyclic division algebra satisfies , and equality is achieved for , .the smallest possible quadratic discriminant over is 3 , corresponding to the field , with a primitive cubic root of unity .since all of the six units in ^\times ] . consequently , is a division algebra and , using , .[ thm : res2 ] let with = 4 ] . if is a cyclic division algebra , then , with equality for and , where is a primitive root of unity .the fields and can be uniquely expressed as , , with such that is square - free and odd , is square - free , and ( see , ) .we study the possible cases . 1 .if , then .this expression takes its minimum value for , and . since for all ,using we get 2 . if , then .the minimum value of the above expression is attained for and , hence 3 .if , and , we have .since will take its minimum value for , , and , we have 4 . finally ,if , , , , we have .this expression attains its minimum for , , and , thus the last case gives us the minimal natural order discriminant , corresponding to the fields and , and to the element to conclude the proof , it suffices to show that , _i.e. 
_ , the algebra is division .suppose that ] and we can write , where is a root of unity and ( see ( * ? ? ? * prop .consequently , .but is not a square in .[ thm : res3 ] let , with = 4 ] .any cyclic division algebra satisfies }(\operatorname{disc}({\mathcal{o}}_{\operatorname{nat}}/{\mathbb{z}}[i ] ) ) \ge 5 ^ 6 ] , it is not difficult to verify that , , and , so the only prime that ramifies in the extension is .hence , the obvious choice is , a totally and tamely ramified cyclic local extension of degree 2 , with . in order to see that , we determine .let be a uniformizer ; then , is the group generated by and and the group is a subgroup of and , by local class field theory ( see ( * ? ? ?1.1 ) ) , we have , as well as = e\left(e_{\mathfrak t_{5}}|l_{\mathfrakq_{5}}\right ) = \left[e_{\mathfrak t_{5}}^\times : l_{\mathfrak q_{5}}^\times{\mathcal{o}}_{e_{\mathfrak t_{5}}}^\times\right ] = 2.\ ] ] let be a 4 root of unity and write . since the extension is totally ramified , the residue fields of and agree and , so , . on the other hand , otherwise the group would contain an element with some , contradicting .we conclude that and , more specifically , , with .since has multiplicative order 4 in the residue field , we deduce that .[ thm : res4 ] let with =6 ] .any cyclic division algebra satisfies }(\operatorname{disc}({\mathcal{o}}_{\operatorname{nat}}/{\mathbb{z}}[i]))\ge3^{18}\cdot 13^{12} ] is smallest possible and for : we start by finding the smallest possible discriminant over ] , or for the decomposition of ( ramified , split or inert , respectively ) in ] . for an ideal ] . we know that for each ideal there exists a unique extension of such that /\mathfrak m)^\times/{{\rm im}}(u_{\mathfrak m}) ] , leaving the choices or , where is a product of prime ideals with norm congruent to 1 modulo 3 .the smallest possible norm for primes in is }(\mathfrak p_{13 } ) = \operatorname{nm}_{{\mathbb{z}}[i]}(\mathfrak p_{13}') ] and , consequently , there exists a cubic extension of that ramifies exactly at .furthermore , since the ramification index is 3 and hence relatively prime to , by a theorem of dedekind the discriminant of the extension is , and }(l_3/{\mathbb{q}}(i ) ) = 13 ^ 2 ] .thus , is the discriminant of a quadratic extension of ; indeed , it is the discriminant of , where is a root of unity . if for some ideal , the best we can do is to take .then , and /\mathfrak m)^\times/{{\rm im}}(u_{\mathfrak m})| = 2 ] .for , we get , with }(\operatorname{disc}(e_6'/{\mathbb{q}}(i ) ) ) = 13 ^ 5 \cdot 2 ^ 6 ] for cyclic extensions of degree 6 is , achieved in the extension .the involved rings of integers are ] where and is as above . to compute ,we observe that )\cdot [ { \mathcal{o}}_{3}:{\mathbb{z}}[i , \alpha]]^2. ] .: next , we search for non - norm element of smallest possible norm .equation in lemma [ pro : bound ] provides us with a lower bound for . in order to obtain it ,we need to find a pair of smallest primes in .computing the factorization of primes . ] and relative discriminants of the extensions involved , we find the following pairs of smallest primes . *a pair of smallest primes in are and of respective norms 4 and 9 .* since the primes above and have norms at least 49 and 121 , respectively , a pair of smallest primes in are and of respective norms and 13 .the discriminants of the extensions involved are summarized in table [ tab : disc ] below . 
if we let , then equations and and the above computations , tell us that any element satisfying equation so that is a division algebra with natural order , will satisfy the following inequality : since , there are no restrictions , and our searched for non - norm element could be a unit .consequently , }(\operatorname{disc}({\mathcal{o}}_{\operatorname{nat}}/{\mathbb{z}}[i]) ] , and the theorem will be proved .: to simplify notation , we set , , and .we will use hasse norm theorem , and a local argument analogous to the one used in theorem [ thm : res3 ] , to prove that the unit satisfies , .as , the only prime that ramifies in the extension is . if is a prime of extending , the extension is a totally and tamely ramified cyclic extension of degree 3 , with .let be a uniformizer , then , and = e\left(e_{\mathfrak t_{13}}|l_{\mathfrak q_{13}}\right ) = \left[e_{\mathfrak t_{13}}^\times : l_{\mathfrak q_{13}}^\times{\mathcal{o}}_{e_{\mathfrak t_{13}}}^\times\right ] = 3.\ ] ] let and write , where is a 12 root of unity , the extension being totally ramified , the residue fields of and agree and . on the other hand , else , the group would contain an element with , some , contradicting .thus , we have , and , with .since has multiplicative order 6 in the residue field , we can conclude that neither nor are in .[ thm : res5 ] let with =6 ] , and let be a cyclic division algebra . then }(\operatorname{disc}({\mathcal{o}}_{\operatorname{nat}}/{\mathbb{z}}[i]))\ge3^{12}\cdot 13 ^ 8 ] , among all possible discriminants of cyclic sextic extensions over .hence , it suffices to prove that is a non - norm element in .let be a prime of extending , the only prime that ramifies in the extension .if , using hasse norm theorem , we can show that , we will be done .let be a 12 root of unity and . using a local argument analogous to the ones used in theorems 3 and 4 , we get .we are left with showing that is a unit in such that its image in the residue field is not in .it is easy to see that if , then .thus , since , is a unit . also , can not be a root of unity ; otherwise would be abelian over , which it is not , as the primes over 13 split in different ways .further , the image of in the residue field is 1 , which implies that the image of the unit is , since 5 is not a square modulo 13 .thus , the choice gives us the required result .in this article we have introduced the reader to a technique used in multiple - input multiple - output wireless communications known as space time coding . within this framework ,we have shown how to construct well - performing codes from representations of orders in central simple algebras , explaining why it is crucial to choose orders with small discriminants .while maximal orders achieve the minimum discriminant , we have motivated why in practice it may sometimes be favorable to use the so - called natural orders instead . for the base fields or imaginary quadratic( corresponding to the most typical signaling alphabets ) , and pairs of extension degrees in an asymmetric channel setup , we have computed an explicit number field extension and an element giving rise to a cyclic division algebra whose ideal norm of the corresponding natural order , viewed as an -module , achieves the minimum possible among all cyclic division algebras with the same degree assumptions . this way we have produced explicit space time codes attaining the optimal coding gain among codes arising from natural orders .a. barreal and c. 
hollanti are financially supported by the academy of finland grants # 276031 , # 282938 , and # 283262 , as well as a grant from the finnish foundation for technology promotion .the authors thank jean martinet for his useful suggestions .99 v. tarokh , n. seshadri , a. r. calderbank , space time codes for high data rate wireless communication : performance criterion and code construction , _ ieee transactions on information theory _ 44 ( 2 ) ( 1998 ) pp . 744765 .belfiore , g. rekaya , quaternionic lattices for space time coding , _ proceedings of the ieee information theory workshop _ , paris ( 2003 ) .b. a. sethuraman , b. s. rajan , v. shashidhar , full - diversity , high - rate space time block codes from division algebras , _ ieee transactions on information theory _ 49 ( 10 ) ( 2003 ) pp .25962616 . c. hollanti , j. lahtonen , h .- f .lu , maximal orders in the design of dense space - time lattice codes , _ ieee transactions on information theory _ 54 ( 10 ) ( 2008 ) pp .44934510 .r. vehkalahti , c. hollanti , j. lahtonen , k. ranto , on the densest mimo lattices from cyclic division algebras , _ ieee transactions on information theory _ 55 ( 8) ( 2009 ) pp . 37513780 . c. hollanti , h .- f .lu , construction methods for asymmetric and multiblock space time codes , _ ieee transactions on information theory _ 55 ( 3 ) ( 2009 ) pp . 10861103 .i. reiner , maximal orders , _ london mathematical society monographs new series _ 28 ( 2003 ) .f. oggier , g. rekaya , j .- c .belfiore , e. viterbo , perfect space time block codes , _ ieee transactions on information theory _ 52 ( 9 ) ( 2006 ) pp . 38853902 .k. hardy , r.h .hudson , d. richman , k. s. williams , n. m. holz , calculation of class numbers of imaginary cyclic quartic fields , _ carleton - ottawa mathematical lecture notes series _ 7 ( 1986 ) .r. h. hudson , k. s. williams , the integers of a cyclic quartic field , _ rocky mountains journal of mathematics _ 20 ( 1 ) ( 1990 ) pp . 145150 .j. milne , algebraic number theory , _ graduate course notes _ v2.0 ( 2014 )http://www.jmilne.org / math / coursenotes/. j. milne , class field theory , _ graduate course notes _ v4.02 ( 2013 )http://www.jmilne.org / math / coursenotes/. a .- m .berg , j. martinet , m. olivier .the computation of sextic fields with a quadratic subfield , _ mathematics of computation _ 54 190 ( 1990 ) pp . 869884 .we compute the factorization of primes and relative discriminants in the extensions involved in theorem 4 . the notation is explained below the table .
|
algebraic space time coding a powerful technique developed in the context of multiple - input multiple output ( mimo ) wireless communications can only be expected to realize its full potential with the help of class field theory and , more concretely , the theory of central simple algebras and their orders . during the last decade , the study of space time codes for practical applications , and more recently for future generation ( 5g+ ) wireless systems , has provided a practical motivation for the consideration of many interesting mathematical problems . one of them is the explicit computation of orders of central simple algebras with small discriminants . we will consider the most interesting asymmetric mimo channel setups and , for each of these cases , we will provide explicit pairs of fields giving rise to a cyclic division algebra whose natural order has the minimum possible discriminant .
|
one of the most distinctive features of quantum mechanics is the necessary disturbance to the quantum state associated with any measurement that acquires information about the state .this information gain - disturbance relation places restrictions on what types of measurements are allowed within quantum theory .weak measurements are a limiting case of a class of measurements with which it is possible to measure the average value of some observable using an ensemble of particles , all prepared in the same initial state , with minimal disturbance to the state of each individual particle .such measurements have a long history in quantum theory ( see , for example , refs . ) . performing a weak measurementleaves the state of the particle largely undisturbed , and one can consider performing a subsequent measurement , possibly of a different observable .consider an ensemble of particles prepared in the same state subjected to a weak measurement of observable followed by a projective measurement of observable , and then postselecting only those experiments corresponding to a specific outcome of .it is within the context of such experiments that aharonov _ et al . _ introduced the _ weak value _, as the measurement outcome of the observable for the preselected and postselected ensemble .subsequently , there was considerable debate over the meaning of this weak value , as it is in general a complex number .an operational interpretation of the weak value as a complex number , whose real and imaginary parts manifest as shifts in the average position and momentum of the post selected measurement devices , was given by jozsa .the interpretation of the real part of the weak value as the conditional expectation value of the variable has been used in analysing counterfactual quantum paradoxes .the imaginary part of the weak value has been connected to the shift in momentum of the pointer associated with measurement disturbance .another debated property of the weak value is that it is not constrained by the eigenvalue spectrum of the variable , that is , the weak value can be larger than the largest eigenvalue of the variable .such anomalous weak values have been considered for signal amplification .the appearance of anomalous weak values can be used to provide a proof of contexuality , which suggests that interpreting the real part of the weak value as a conditional expectation value needs to be reevaluated .much of the difficulty in interpreting the weak value may be because it seeks to analyse the measurement outcomes of two noncommuting observables on a given state of a particle , which is known to be problematic in quantum theory due to the lack of an ontology for measurement outcomes associated with observables .it is worthwhile , then , to consider whether the weak value can arise in a theory that does possess a clear ontology .recently , it has been shown that similar features to the weak value in quantum theory can arise within a simple statistical model supplemented with a backaction due to measurement , suggesting that the weak value is a statistical feature in theories involving measurement disturbance .the suggestion that weak values can arise in a classical analog is controversial , and it has been argued that weak values have no analog in classical statistics . 
in this paper , we analyse weak values using a theory of classical mechanics ( thereby possessing a clear ontology ) supplemented with a restriction on the observer s knowledge .this theory is the _ epistemically restricted liouville ( erl ) _ mechanics of ref . , and it is known to reproduce many of the features of quantum measurement . in this theory , all particles evolve under classical equations of motion and it is operationally equivalent to gaussian quantum mechanics ; this connection is best seen through the description of gaussian quantum mechanics using nonnegative wigner functions .notably , the epistemic restriction provides a sensible notion of weak measurement within the erl theory , one that directly reproduces many of the key features of quantum weak measurement .we emphasise that erl theory adds neither extra stochasticity to classical dynamics nor any additional disturbance mechanism ; rather , all of the features analogous to quantum theory appear naturally within a deterministic theory supplemented only by ignorance on the part of the observer .within erl theory , as in quantum mechanics , we find that the weak value appears operationally as shifts in the mean position and momentum distributions of the measurement device upon postselection ( as first discussed by josza ) .the analysis in the erl theory gives us a direct interpretation of the origin of these shifts , and thus of the weak value .specifically , the real component of the weak value represents the shift in the position of the measurement device as a result of its interaction with the measured particle , as expected from a measurement .the imaginary component of the weak value , however , quantifies not the result of any dynamical changes to the measurement device but simply a bias on the distribution of the measurement device as a result of postselection .that is , we have an operational interpretation of the imaginary part of the weak value as a measure of how much postselection will bias the distribution of the measurement device .the weak value is not a unique feature of quantum theory , but can arise in other theories that possess a restriction or limitation on the observer s knowledge of the initial state of the particle or the measurement device , which is arguably a very natural physical restriction .we note that anomalous weak values do not appear in our analysis , as all observables in our model possess an unbounded spectrum .consistent with the results of ref . , our model is also noncontextual : the erl mechanics provides an explicit noncontextual ontological model for all procedures described here .in this section , we introduce the formalism of weak measurements within quantum theory , as well as briefly introduce the weak value .we first review the standard formalism for von neumann measurements , including strong ( projective ) measurements , and then introduce weak measurements within this model .we then demonstrate the appearance of the weak value ( both real and imaginary parts ) in the conditional expectation values of the position and momentum of the measurement device after postselection . here , we review the framework of quantum measurement , wherein an observable is coupled to a measurement device followed by a projective measurement of the measurement device s position . 
with this framework, we can describe both strong ( projective ) as well as arbitrarily weak measurements .we describe the measurement device by a one - dimensional quantum system with canonical position observable and momentum observable satisfying =i\hbar ] such that where is the effective interaction strength .consider an initial state of the particle given as , where is an eigenstate of with eigenvalue .after the interaction , the state of the device and particle will be consider the case where the initial uncertainty in the position of the pointer is zero , and thus is a position eigenstate with eigenvalue . in this case , after the interaction , the measurement device s position is maximally entangled with the eigenstates of of the particle .a projective measurement of the position of the device pointer perfectly resolves the eigenvalue of , and collapses the state of the particle into an eigenstate of .this measurement , then , corresponds to a strong , projective measurement of on the particle .the weak measurement limit is the opposite limit of this strong measurement , and aims to reduce the disturbance to an arbitrarily small amount at the expense of a correspondingly small information gain . within the measurement model described above ,we introduce two different ways in which a measurement can be made weak .first , each particle can be coupled arbitrarily weakly to a measuring device by using a vanishingly small interaction strength .alternatively , a weak measurement can also be obtained by using an initial state of the measurement device with and , which would imply from .( while these two limits lead to identical measurement statistics within quantum theory , we explore each of them separately , as they will correspond to different processes in the context of the epistemically - restricted theory of classical mechanics explored in the next section . ) in both of these limits , the disturbance caused by measurement , as well as the amount of information gained , are both very small .if this measurement is repeated on a large number of particles , each prepared in the same initial state , it is possible to measure the average value of an observable of the ensemble of particles with arbitrary accuracy as the number of particles becomes large . shown on the right . the shift in the mean position of the measurement devices that weakly interacted with the particles on the postselected set ( shaded ) corresponding to outcome has terms proportional to the real and imaginary parts of the weak value ., scaledwidth=45.0% ] with the concept of weak measurement , we now derive the weak value , with an emphasis on the difference between the two weak measurement methods described above . 
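Before that derivation, the pointer-coupling model just described can be simulated directly. The sketch below contrasts a sharp pointer (strong measurement, resolved eigenvalues) with a broad pointer (weak measurement, essentially no information per shot), while the ensemble-mean pointer shift equals g times the expectation value of A in both cases; the state, eigenvalues, and widths are arbitrary choices for illustration.

```python
import numpy as np

q = np.linspace(-30, 30, 4001)                 # pointer position grid
dq = q[1] - q[0]

def pointer_marginal(alphas, eigvals, g, sigma_q):
    """Pointer position distribution after exp(-i g A p_d / hbar): each eigen-
    component |a_j> displaces the pointer wavepacket by g * a_j."""
    phi2 = lambda u: np.exp(-u**2 / (2 * sigma_q**2))       # |Gaussian amplitude|^2
    prob = sum(abs(a)**2 * phi2(q - g * aj) for a, aj in zip(alphas, eigvals))
    return prob / (prob.sum() * dq)

alphas  = np.array([1.0, 2.0]) / np.sqrt(5.0)  # particle state in the A eigenbasis
eigvals = np.array([0.0, 1.0])                 # so <A> = 0.8

for label, sigma_q in [("strong (sharp pointer)", 0.05), ("weak (broad pointer)", 5.0)]:
    p = pointer_marginal(alphas, eigvals, g=1.0, sigma_q=sigma_q)
    mean = (q * p).sum() * dq
    spread = np.sqrt((q**2 * p).sum() * dq - mean**2)
    print(f"{label:24s} mean shift = {mean:.3f}, per-shot spread = {spread:.3f}")
```

In the strong case the two displaced wavepackets are resolved and each shot reveals an eigenvalue; in the weak case the per-shot spread dwarfs the displacement, so only the ensemble average is recoverable.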
the _ weak value_ arises in a measurement scenario wherein a weak measurement of an observable is followed by a strong ( projective ) measurement of another observable , together with postselection on a particular outcome labelled by eigenvalue of the measurement of .the observable on which results are postselected need not commute with the variable weakly measured as illustrated in fig 1 .in such a situation , the weak value is defined to be where .using , the real part of this complex number is =\frac{\sum_{j , l}\alpha_{j}\alpha_{l}^*\langle b|a_{j}\rangle\langle a_{l}|b\rangle ( \frac{a_{j}+a_{l}}{2})}{\sum_{j ,l}\alpha_{j}\alpha_{l}^*\langle b|a_{j}\rangle\langle a_{l}|b\rangle}\,,\ ] ] and its imaginary part is = \frac{\sum_{j , l}\alpha_{j}\alpha_{l}^*\langle b|a_{j}\rangle\langle a_{l}|b\rangle ( \frac{a_{j}-a_{l}}{2i})}{\sum_{j , l}\alpha_{j}\alpha_{l}^*\langle b|a_{j}\rangle\langle a_{l}|b\rangle}\,.\ ] ] we will now show explicitly how the real and imaginary parts of the weak value appear as phase - space displacements in the mean position and momentum of the postselected distribution of the measurement devices performing the weak measurements . the unnormalised state of the post selected particles and the devices after weak measurement is recall that is a gaussian state of the form of eq .( [ eq:1a ] ) . having postselected particle - measurement device pairs for which the particle is in the final state ,the selected devices are described by the unnormalised state .the mean position of this device state after postselection on , denoted , is the mean momentum of the device after the weak measurement and postselection on is where is the fourier transform of up to a normalisation constant .we now consider these expressions using the first method to obtain weak measurements , wherein the initial position of the measurement device becomes highly uncertain . in the limit of , we also have .consider the case where the mean momentum of the device , , is also set to zero .using eq .( [ eq:14l ] ) , the mean position of the device is + g \omega \text{im}[\langle\hat{a}_w\rangle ] \,,\end{aligned}\ ] ] where we have ignored terms of order and higher as a result of taking to be small . in the limit , the covariance and this shift becomes \,.\ ] ] from eq .( [ eq:16 m ] ) , the mean momentum of the device is \label{eq:17nl1}\end{aligned}\ ] ] which in the limit becomes in this limit , the momentum of the device remains unchanged , and there is no disturbance to the state of the particle .there is , however , a shift in the mean position of the device proportional to the real part of the weak value . because this limit implies , it is a shift in a uniform distribution and hence not physically resolvable .consider now the second method for obtaining weak measurements , where the coupling strength is small and the mean momentum of the device , , is also set to zero . in the limit of , there is no disturbance to the system , however , in this limit , there is also no shift in the average position and momentum of the post selected devices .hence , we then calculate the mean position and momentum of the measurement device after postselection to leading order in . 
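As a brief aside before completing the small-coupling calculation, the definition of the weak value given at the start of this section can be evaluated directly from the expansion coefficients, the eigenvalues, and the overlaps with the postselected state. The three-level example below uses arbitrary numbers chosen so that the pre- and postselected states have a fairly small overlap, which pushes Re(A_w) outside the eigenvalue range, the "anomalous" regime mentioned in the introduction.

```python
import numpy as np

def weak_value(alphas, eigvals, b_overlaps):
    """A_w = <b|A|psi> / <b|psi> for |psi> = sum_j alphas[j]|a_j>,
    with b_overlaps[j] = <b|a_j>.  Normalization of |psi> and |b> cancels."""
    alphas, eigvals, b_overlaps = map(np.asarray, (alphas, eigvals, b_overlaps))
    return np.sum(b_overlaps * eigvals * alphas) / np.sum(b_overlaps * alphas)

# Arbitrary three-level example; |b> is chosen to overlap only weakly with |psi>,
# which inflates the weak value beyond the eigenvalue range [-1, 1].
alphas  = np.array([0.8, 0.5, 0.33 + 0.10j])
eigvals = np.array([-1.0, 0.0, 1.0])
b       = np.array([0.7, -0.69, 0.10 + 0.10j])

A_w = weak_value(alphas, eigvals, b.conj())     # <b|a_j> = conj(b_j) in the A basis
print("Re A_w =", round(A_w.real, 2), " Im A_w =", round(A_w.imag, 2))
# prints roughly: Re A_w = -1.98   Im A_w = -0.27
```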
the mean position of the device after postselection is }{\sum_{j , l}\alpha_j\alpha_l^*\langle b|a_j\rangle\langle a_l|b\rangle } \nonumber \\ & = g \text{re}[\langle\hat{a}_w\rangle]+ g \omega \text{im}[\langle\hat{a}_w\rangle ] \,,\end{aligned}\ ] ] and the mean momentum of the device after postselection is .\end{aligned}\ ] ] ( in ref . , the expression for is written as + g m\frac{d\delta_{q}}{dt } \text{im}[\hat{a}_w] ] .let the initial quantum state of the particle be this is a gaussian wavefunction with position mean and variance , and momentum mean and variance . while this state has been chosen to have zero convariance , we emphasise that our results are completely general ( see note below ) .the particle and the measuring device are coupled under the hamiltonian in eq . , where the observable we are measuring is one of the form we then postselect using a projective measurement of the observable , with eigenstate the weak value of for this postselection is where .we note that this expression depends linearly on , , and , and while the real part of this expression has the same form as expected from bayes rule , the imaginary components of is also clearly identified ._ note : _ in eq .( [ eq : psi ] ) , we consider a gaussian state with zero covariance for simplicity .however , this choice is equivalent to using a gaussian state with nonzero covariance simply by changing the quadratures of of both weak and strong measurement appropriately , that is , changing and .hence our analysis holds for general gaussian states .in this section , we will analyse the weak measurement and postselection procedure described above in the context of a theory with a clear classical ontology : the epistemically restricted liouville ( erl ) theory of ref . .this theory describes particles evolving in a phase space according to classical equations of motion .what makes this theory interesting is an epistemic restriction that limits the knowledge that an observer can possess about the state of these particles .it has been shown that there is a complete operational equivalence between the dynamics of the restricted liouville distribution that describes an observer s knowledge in the erl theory , and that of a subset of quantum theory , namely , gaussian quantum mechanics . for a full description of the erl theory ,the form of the epistemic restriction , and its consequences , see ref .briefly , the classical state of the particles in the theory are points in a phase space , i.e. , positions and momenta . an observer s knowledge about the state of a particleis given by a liouville distribution , i.e. , a probability distribution on phase space .these phase space distributions are mathematically equivalent to wigner functions of gaussian quantum states and satisfy the uncertainty principle , in other words , gaussians whose covariance matrices satisfy the following relationship where is the covariance matrix defined as and is defined as in the remainder of this section , we will analyse weak measurements , postselection , and the corresponding weak value for gaussian states within erl theory . 
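Before moving to the ERL treatment, the first-order shift formulas quoted earlier in this section can be checked by brute force on a grid: couple a Gaussian particle to a Gaussian pointer through exp(-i g x p_d / hbar), postselect the particle on a final momentum p_f, and read off the pointer's mean position and momentum. The grid sizes and state parameters below are arbitrary, and the comparison values use the standard first-order expressions for a zero-mean, zero-covariance Gaussian pointer, namely a position shift of g Re(A_w) and a momentum shift of 2 g Var(p_d) Im(A_w) / hbar.

```python
import numpy as np

hbar, g = 1.0, 0.05
x = np.linspace(-25, 25, 1200)            # particle position grid
y = np.linspace(-25, 25, 1200)            # pointer position grid
dx, dy = x[1] - x[0], y[1] - y[0]

x0, s, sigma = 1.0, 1.0, 3.0              # particle mean/width, pointer width
p_f = 0.7                                 # postselected particle momentum

psi = np.exp(-(x - x0)**2 / (4 * s**2))   # particle wavefunction (unnormalized)
phi = lambda u: np.exp(-u**2 / (4 * sigma**2))

# After exp(-i g x p_d / hbar) the joint amplitude is psi(x) * phi(y - g*x);
# projecting the particle onto <p_f| leaves the (unnormalized) pointer state
#   chi(y) = integral dx  exp(-i p_f x / hbar) psi(x) phi(y - g*x).
chi = ((np.exp(-1j * p_f * x / hbar) * psi)[None, :]
       * phi(y[:, None] - g * x[None, :])).sum(axis=1) * dx

norm = np.sum(np.abs(chi)**2) * dy
mean_q = np.sum(y * np.abs(chi)**2) * dy / norm
mean_p = hbar * np.imag(np.sum(np.conj(chi) * np.gradient(chi, dy)) * dy) / norm

# Weak value of x for this pre/postselection, straight from its definition.
bra_b = np.exp(-1j * p_f * x / hbar)
A_w = np.sum(bra_b * x * psi) / np.sum(bra_b * psi)
var_pd = hbar**2 / (4 * sigma**2)         # pointer momentum variance
print("pointer <q_d> =", round(mean_q, 4), "  g*Re(A_w) =", round(g * A_w.real, 4))
print("pointer <p_d> =", round(mean_p, 5),
      "  2*g*Var(p_d)*Im(A_w)/hbar =", round(2 * g * var_pd * A_w.imag / hbar, 5))
```

The residual discrepancies are of higher order in g and shrink as the coupling is reduced.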
in this section, we will treat the measurement procedure outlined in section [ sec : level2a ] within erl theory .that is , we will interact the particles and measuring devices using a classical interaction and hamilton s equations of motion .however , we will require that all initial liouville distributions describing these systems obey the epistemic restriction .this restriction will lead us to a concept of weak measurement within erl theory , including a tradeoff between information gain and disturbance much like quantum theory . by following the structure of the quantum derivation in sec .[ sec : level2a ] , we introduce a general measurement model within erl theory and then investigate the situations under which the disturbance to the particles is minimised .we model both the particle to be measured and the measurement device as one - dimensional canonical systems , with and the position and momentum coordinates of the particle , and and the position and momentum coordinates of the measurement device .the particle and the measurement device are coupled via the hamiltonian where the observable we are measuring is of the form the distribution of the particle and the measurement device changes according the the classical hamiltonian equations as a result of this interaction , following after the measurement , the position and momentum of the particle are where and are the initial momentum and position of the particle being measured respectively and is the initial momentum of the measuring device .the position and momentum of the measurement device after this interaction are hence the change in the position of the device gives the measurement outcome .note there is no change in the momentum of each device due to the measurement interaction , which suggests that any change to the mean momentum of the ensemble of devices , must be statistical bias .for an ideal measurement , we would require the initial position of the device to be known with complete certainty , and this would lead to a perfect correlation between the observable and the position of the measurement device after interaction .however , due to our epistemic restriction , the uncertainty in the momentum of the device must be infinite in this case , i.e. , the observer has no knowledge of the initial momentum of the measurement device . from eqs . and , the position and momentum of the particleare each displaced by an amount proportional to the initial momentum of the device as a result of the interaction .if the initial momentum of the measurement device is unknown , so is the phase space displacement of the particle . 
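The classical update rules just described lend themselves to a direct Monte Carlo: draw particle and device phase-space points from epistemically restricted (minimum-uncertainty, zero-covariance) Gaussians, apply the impulsive coupling for a position measurement, postselect on the particle's final momentum, and inspect the selected devices. The sketch below is only illustrative, with the coupling form, the Gaussian parameters, the acceptance window, and the quoted Gaussian-state weak value all stated as assumptions; it shows the selected mean pointer position tracking g Re(A_w) while the mean pointer momentum acquires a pure selection bias proportional to Im(A_w), even though no individual device momentum ever changes.

```python
import numpy as np

rng = np.random.default_rng(1)
hbar, g, N = 1.0, 0.1, 2_000_000

# Epistemically restricted Gaussian ensembles: zero covariance and
# sigma_q * sigma_p = hbar / 2 (the classical analogue of minimum uncertainty).
x0, s, sigma = 1.0, 1.0, 3.0                    # particle mean, particle/device widths
q   = rng.normal(x0, s, N)
p   = rng.normal(0.0, hbar / (2 * s), N)
q_d = rng.normal(0.0, sigma, N)
p_d = rng.normal(0.0, hbar / (2 * sigma), N)

# Impulsive coupling H = g * A(q, p) * p_d with A = q (a weak position measurement).
# Hamilton's equations: q_d -> q_d + g*q, p_d unchanged, q unchanged, p -> p - g*p_d,
# i.e. the unknown device momentum is what disturbs the particle.
q_d = q_d + g * q
p   = p - g * p_d

# Postselect on the particle's final momentum lying in a narrow window around p_f.
p_f, half_width = 0.7, 0.05
sel = np.abs(p - p_f) < half_width

# Weak value of q for a Gaussian preselected state (mean x0, width s) postselected
# on momentum p_f, quoted here only for comparison with the quantum expressions.
A_w = x0 - 2j * p_f * s**2 / hbar
var_pd = (hbar / (2 * sigma))**2
print("selected fraction :", round(sel.mean(), 4))
print("<q_d | post> =", round(q_d[sel].mean(), 4), "  g*Re(A_w) =", round(g * A_w.real, 4))
print("<p_d | post> =", round(p_d[sel].mean(), 5),
      "  2*g*Var(p_d)*Im(A_w)/hbar =", round(2 * g * var_pd * A_w.imag / hbar, 5))
```

Giving the device a nonzero initial covariance between q_d and p_d adds a further, purely selection-induced contribution to the selected mean position, which is the effect discussed further below.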
thus we see how , within the erl theory , such a measurement that acquires complete information about an observable of the particle is accompanied by a corresponding unknown disturbance on the state of the particle .it is in this way that an information gain - disturbance tradeoff appears in the erl theory , despite its classical ontology .if we removed the epistemic restriction , the state of the particle would still be displaced by an amount proportional to the momentum of the device .however , the observer could in principle know the initial value of the momentum of the device as well as its position and therefore correct for this change .in other words , disturbance due to measurement arises in newtonian mechanics , and the epistemic restriction just makes the disturbance unrectifiable .note that , despite the change in the position and momentum of the particles due to the interaction , the expectation value of the observable has not changed , i.e. , as in quantum theory , measurements in erl theory are repeatable , and the disturbance is associated with uncertainty in canonically conjugate observables .we now introduce weak measurements in the erl theory , again following by analogy the quantum formalism of sec . [sec : level2aa ] . from eqs . and, one can again see two methods by which the disturbance due to measurement can be made small : first , by considering small coupling , and second , by requiring the initial momentum of the measurement device to be very close to zero . in the second method , to ensure that the momentum of the measurement device has ( and not just the mean value of the liouville distribution ) , we must require that the variance is very small as well as the mean value .due to the epistemic restriction , the initial uncertainty in position of the measurement device must then be very large .therefore , in the limit of small , we have vanishing knowledge of the initial position of the device and hence any change in this position after the measurement ( eq . )thus , in both methods , we obtain weak measurements that yield an arbitrarily small information gain about the system and correspondingly small disturbance . within erl mechanics, we note that these two methods of obtaining weak measurements are physically distinct . in both cases , in the limits or we have both no disturbance to the system as well as no information gained about the system .there is a difference in the ontology , however . in the limit of , there is no physical change to the measurement device .in contrast , in the limit of , there is a shift in the mean position of the measurement device , but our uncertainty about the device s initial position makes the shift undetectable . with a meaningful notion of weak measurement in erl theory , we can now consider the appearance of a weak value .let the initial phase space distribution of the particles being measured be described by a position distribution with mean and variance and a momentum distribution with mean and variance , i.e. , as the liouville distribution this distribution is precisely the wigner function of the initial quantum state of eq . .the initial phase space distribution of the measurement device is the wigner function of the initial state in eq . , where form the off diagonal elements of the covariance matrix .observable is measured using weak measurements as described above .after the weak measurement , the actual position and momentum of each particle and device changes according to eqs . 
- ,and the phase space distributions of the particles and devices become correlated as the particles are then measured using a strong measurement of observable , and postselect on the outcome .the postselected distribution of the measurement devices is then the mean position of the postselected subset is where and are real - valued functions defined as the mean momentum of the postselected subset is as with the quantum case , we consider these expressions using the first method to obtain weak measurements , wherein the initial position of the measurement device becomes highly uncertain .we characterise this case by choosing and take the limiting case , which implies as well .we find we can express the average shift in the position of the measurement devices upon postselection using the same expression for as calculated in eq . , to give + g \omega \text{im}[\langle\hat{a}_w\rangle ] \,,\ ] ] where again we have ignored terms of order and higher . in the limit , the covariance andthis becomes \,.\ ] ] this exactly reproduces the quantum mechanical shift in mean position .as our model has a classical ontology , it allows joint probability distributions over all variables , unlike quantum mechanics . as a result , we are able to calculate conditional expectation values of any two observables .in our situation , we can exploit this fact to compare the shift in the average position of the device with the expectation value of of the particles conditioned on an outcome of observable .we find that the real part of the weak value is indeed the same as the conditional expectation value .in this limit we find that the mean momentum of the devices is \,,\ ] ] which in the limit becomes again , this exactly reproduces the quantum mechanical expression .of the particles are weakly measured with devices sampled from a momentum distribution with mean zero and .each particle s momentum is shifted by an amount proportional to the momentum of the device , following .if one now postselects measurement devices based on the particle having momentum , then the distribution of these devices is shown on the plane intersecting the plot .the mean of this postselected distribution is no longer zero ., scaledwidth=45.0% ] just as in the quantum case , we take the mean momentum . if we take to be finite but small enough to ignore higher than first order terms of , the shift in the average position of the postselected devices is + g \omega \text{im } [ \langle \hat{a } \rangle_w ] \,,\ ] ] and the average momentum shift is .\ ] ] from eq ., we see that the momentum of the individual measurement devices are not changed due to the weak measurement .however in this method for weak measurements , we do not require , i.e. , there is uncertainty of the initial momentum of the measurement devices .the state of the particles can be shifted in phase space by the momentum of the devices , as shown in and , and so initial uncertainty in the momentum of the measurement device leads to an unknown disturbance of the particles . by postselecting on particlesthat have been perturbed by the momentum of the device , we arrive at a final momentum distribution of the device that is biased .an example of this effect is illustrated in fig . 
2 , where the position of the particle is weakly measured , followed by postselection on the particle s momentum , that is , and .the shift in the mean momentum of the measurement devices does not arise as a result of the dynamics of the interaction , but instead from postselection biasing the momentum distribution . if the position distribution of the devices is correlated to the momentum distribution via a nonzero covariance ,the position distribution of the devices will also be biased . in summary , we have seen how terms in both the position and momentum shifts that are proportional to the imaginary part of the weak value are primarily the result of postselection biasing the device distributions , not dynamics .this observation allows us to formulate an operational interpretation of the imaginary part of the weak value as a measure of how much postselection will bias the device distribution , given finite disturbance , which we emphasise results from uncertainty in the initial properties of the devices . as an additional insight that arises from our analysis of the weak value, we now provide a better bound on the range of couplings for which the standard weak value results will hold . in order to consider to be small ( and therefore to ignore higher order terms ) , from equations , , , , one can see that this approximation is good provided where is an eigenvalue of the measurement operator .this condition is very restrictive and is not useful for variables with a continuous eigenspectrum .however , we can derive a less restrictive upper bound by looking at the approximations made on the classical distributions of the devices . from eqs . and , we can see that in order to ignore higher order terms of , we require because erl theory is operationally equivalent to gaussian quantum mechanics , this upper bound is also true for gaussian quantum states , and could equally well be derived from the quantum formalism . this is a significantly less restrictive upper bound for gaussian states than the one discussed in ref .the weak value has long been argued to be a fundamentally quantum phenomenon .here , we have analysed the weak value in a theory with a clear classical ontology but one in which information about a ( classical system ) is limited .this epistemically - restricted theory provides an analogy of the weak value : one which is exact when compared with weak value experiments in gaussian quantum mechanics . 
within this epistemically - restricted theory, we see the same average shifts in the position and momentum of the devices as in our quantum analysis .because our erl model has a clear ontology , it gives us insight into the statistical effects of postselecting on weakly disturbed states .we find that the real part of weak value is the conditional expectation and the imaginary part of the weak value is a measure of how much post selection biases the distribution of the system being measured given any finite disturbance .we do not see the appearance of any anomalous weak values in our analysis .this is because in addition to the observables in our model having an unbounded spectrum , our analysis is restricted to gaussian quantum mechanics , which is known to be non - contextual as it allows a non - negative quasiprobability representation ( the wigner function ) .our interpretation of the real and imaginary parts of the weak value can not be naively extended to states and measurements that allow for the observation of anomalous weak values , as these would imply a proof of contextuality that explicitly rules out the existence of the type of epistemically - restricted classical mechanics we employ .what our results show is that for states and measurements that are noncontextual , the weak measurement procedure followed by post selection reproduces shifts proportional to the real and imaginary parts of the weak value even in a model based on classical mechanics .our work also suggests that quasiprobabilistic representations might prove to be a useful tool in analysing the weak value for the more general case .we thank chris ferrie , nick menicucci , rafael alexander and harrison ball for interesting discussions and helpful feedback .this work is supported by the arc via the centre of excellence in engineered quantum systems ( equs ) project number ce110001013 .a. brodutch , * 114 * , 118901 ( 2015 ) ; c. ferrie and j. combes , * 114 * , 118902 ( 2015 ) ; e. cohen , arxiv:1409.8555 ( 2014 ) ; y. aharonov and d. rohrlich , arxiv:1410.0381 ( 2014 ) ; d. sokolovski , arxiv:1410.0570 ( 2014 ) .
|
weak measurement of a quantum system followed by postselection based on a subsequent strong measurement gives rise to a quantity called the _ weak value _ : a complex number for which the interpretation has long been debated . we analyse the procedure of weak measurement and postselection , and the interpretation of the associated weak value , using a theory of classical mechanics supplemented by an epistemic restriction that is known to be operationally equivalent to a subtheory of quantum mechanics . both the real and imaginary components of the weak value appear as phase space displacements in the postselected expectation values of the measurement device s position and momentum distributions , and we recover the same displacements as in the quantum case by studying the corresponding evolution in our theory of classical mechanics with an epistemic restriction . by using this epistemically restricted theory , we gain insight into the appearance of the weak value as a result of the statistical effects of post selection , and this provides us with an operational interpretation of the weak value , both its real and imaginary parts . we find that the imaginary part of the weak value is a measure of how much postselection biases the mean phase space distribution for a given amount of measurement disturbance . all such biases proportional to the imaginary part of the weak value vanish in the limit where disturbance due to measurement goes to zero . our analysis also offers intuitive insight into how measurement disturbance can be minimised and the limits of weak measurement .
|
as the weakest magnetic field of the sun , solar inter - network filed consists of vertical and horizontal field components . the inter - network vertical field has been found since 1975 ( livingston & harvey 1975 ; smithson 1975 ) , but until the late 1980s the possible presence of inter - network horizontal field was suggested by analysing the visible inter - network longitudinal field from solar disk center to limb ( martin 1988 ) . in fact , the arcsecond scale ( typically 1 - 2 arcseconds or smaller ) , short - lived ( lasting lifetime of about 5 minutes ) horizontal field was firstly discovered in 1996 based on the observations from advanced stokes polarimeter ( lites et al .1996 ) . with the very sensitivity solis vector spectromagnetograph ( keller et al .2003 ) , harvey et al .( 2007 ) deduced the ubiquitous seething " horizontal field throughout the solar inter - network region .the further evidence of inter - network horizontal field is provided by the analysis of the space - borne observations of solar optical telescope ( sot ; tsuneta et al .2008a ; suematsu et al .2008 ; shimizu et al . 2008 ; ichimoto et al .2008 ) on board hinode ( kosugi et al . 2007 ) .the spectro - polarimeter ( sp ) observations have revealed that the inclination angle of inter - network magnetic field has a peak distribution at 90 degrees ( orozco surez et al .2007 ; jin et al . 2012 ) , which corresponds to the horizontal field .the horizontal field primarily concentrates in the patches on the edges of granule ( lites et al .2008 ; jin et al .2009a ) , and has smaller spatial scale than the solar granule ( ishikawa et al .2008 ; jin et al .2009a ) . with the ubiquitous appearance all over the sun including the polar regions ( tsuneta et al .2008b ; jin & wang 2011 ) , solar inter - network horizontal field contributes 10 mx magnetic flux to solar photosphere per day ( jin et al .2009b ) , which is one order of magnitude lower than the contribution from solar inter - network vertical field ( 10 mx ; zhou et al .2013 ) but three orders of magnitude higher than that of ephemeral active region ( 10 mx ; harvey et al .moreover , the horizontal field may also contributes to the hidden turbulent flux suggested by the studies involving hanle depolarization of scattered radiation ( lites et al .in addition , the magnetic energy provided by the horizontal magnetic field to the quiet sun is comparable to the total chromospheric energy loss and about ten times of the total energy loss of the corona ( ishikawa & tsuneta 2009 ) . *according to the numerical simulation of quiet magnetism , rempel ( 2014 ) found that 50% of magnetic energy resides on scales smaller than about 0.1 arcsec , which means more magnetic energy still hidden .based on the numerical simulations , steiner et al .( 2008 ) pointed that the inter - network horizontal field gets pushed to the middle and upper photosphere by overshooting convection , where it forms a layer of horizontal field of enhanced flux density , reaching up into the lower chromosphere .the inter - network horizontal field might plays a role in the atmospheric heating . *the identification and critical importance of inter - network horizontal field have encouraged active studies to understand why so much inter - network horizontal flux is generated and what is the ultimate origin of horizontal magnetic field .the radiative magnetohydrodynamic simulations display that a local dynamo can produce the horizontal field in the quiet sun ( e.g. 
, schssler & vgler 2008 ) .the similar appearance rates of horizontal field in quiet region and plage region discovered by ishikawa & tsuneta ( 2009 ) suggest that a common local dynamo which is independent of global dynamo produces the horizontal field .furthermore , the fraction of selected pixels with polarization signals shows no overall variation by studying the long - term observations ( buehler et al .however , stenflo ( 2012 ) argued that the global dynamo is still the main source of magnetic flux even in the quiet sun .* the distinction between the global dynamo which creates sunspot cycle and local dynamo which is independent of sunspot cycle would provide supporting evidence for the origin of solar inter - network horizontal field . *the long - term sp observations with high spatial resolution and polarization sensitivity provide us the opportunity to carry out this study . in order to improve the signal - to - noise ratio in weak field region ,we adopt the wavelength - integration method to extract the horizontal magnetic signals observed from 2008 april to 2015 february , roughly from the solar minimum to the maximum of the current cycle .the next section is devoted to describe the observations and data analysis . in section 3 ,we present the result of cyclic invariance of inter - network horizontal field , and discuss some possible uncertainties of our result .the sot / sp measurements provide high spatial resolution and polarization sensitivity observations since 2006 november .these observations were recorded by 112 wavelength points covering the spectral range from 630.08 nm to 630.32 nm , including the entire information of stokes parameters ( i , q , u , and v ) in two magnetic sensitivity fe ii lines .* however , because of the telemetry problems , dual mode images were unavailable after 2008 january . in order to keep consistency of image quality, the present study utilizes the observations taken after 2008 january because the uniform single mode images have been adopted since then . avoiding those regions that were close to solar limb and polar region ,a total of 430 sp measurements of quiet sun are selected in this study , which covers the period from 2008 april to 2015 february . in order to compare the inter - network horizontal fields in different magnetic environments , 146 active regions and 109 plage regionsare also selected in the same period .the exposure time of all these magnetograms is 3.2 s. * * in order to enhance the sensitivity of weak polarization signals in the presence of measurement noise , the linear polarization signal is extracted based on the method of wavelength - integration ( lites et al .the method avoids the problems of non - convergence and non - uniqueness that arise in the inversions of noisy profiles , and provides the optimal sensitivity to the weak polarization signal . * * just as accounted by buehler et al .( 2013 ) and lites et al .( 2014 ) , the rms contrast of the stokes i is not constant but shows a fluctuation due to the temperature fluctuation on the spacecraft during the long - term observations . in order to examine the instrumental effect, we first show the rms intensity contrast as a function of the means distance from solar disk center , which is shown in the left panel of figure 1 .an obvious dropping of the rms intensity contrast is found when the observed regions were far away from solar disk center .however , within the distance of 0.2 solar radius , the changing of intensity contrast is not obvious . 
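The two diagnostics used above can be written down compactly. The sketch below assumes a calibrated Stokes cube (I, Q, U) with axes (y, x, wavelength) already in memory; the continuum index, the exact normalization of the wavelength-integrated linear-polarization signal, and the 3-pixel smoothing kernel are assumptions standing in for the actual SP calibration details.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def linear_pol_signal(I, Q, U, cont_idx=0):
    """Wavelength-integrated linear polarization (one common form, cf. Lites et
    al. 2008): mean of sqrt(Q^2 + U^2) over the profile, normalized by a
    continuum intensity.  Axes are assumed to be (y, x, wavelength)."""
    return np.sqrt(Q**2 + U**2).mean(axis=-1) / I[..., cont_idx]

def rms_contrast(Ic, smooth=None):
    """RMS intensity contrast of a continuum image, optionally after the
    3-pixel boxcar smoothing average used in the text."""
    img = uniform_filter(Ic, size=smooth) if smooth else Ic
    return img.std() / img.mean()

# Synthetic cube with made-up numbers, just to exercise the two functions;
# a real analysis would read calibrated SP data instead.
rng = np.random.default_rng(2)
ny, nx, nlam = 128, 128, 112
I = 1.0 + 0.07 * rng.standard_normal((ny, nx, nlam))
Q = 1e-3 * rng.standard_normal((ny, nx, nlam))
U = 1e-3 * rng.standard_normal((ny, nx, nlam))

print("mean linear-polarization signal:", linear_pol_signal(I, Q, U).mean())
print("rms contrast raw / 3-pixel smoothed:",
      rms_contrast(I[..., 0]), "/", rms_contrast(I[..., 0], smooth=3))
```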
to examine the long - term variation of rms contrast , we selected all quiet magnetograms from sp measurement within the distance of 0.2 solar radius , and analyzed the rms contrast in the period of investigation .the result is shown in the middle panel of figure 1 .the fluctuation reaches as much as 1.1% , confirming the result of buehler et al .( 2013 ) and lites et al .( 2014 ) . to reduce the fluctuation ,we degrade all observations spatially by a 3 smoothing average , and the result is shown in the right panel of figure 1 .it can be found that the average fluctuation has a slight decline , falling down to 0.7% after smoothing , while the average rms contrast drops to 5.23 from original value of 6.99 .furthermore , during the investigation , the rms contrast does not show regular variation but a fluctuation around a constantly horizontal line with time .therefore , we do not make further instrumental amendment except the spatial smoothing .* * in this study , not all observations are located on the disk center , so the center - limb effect of magnetic field needs to be considered .the limb - weakening of circular polarization has been identified ( e.g. , jin & wang 2011 ; lites et al .however , there is still few analysis on the center - limb effect of the linear polarization . here , we adopt two methods to analyze the variation of linear polarization when the observed regions were far away from solar disk center . on the one hand, we assume that there are only two possible cyclic variations of the horizontal field in quiet sun : linear correlation ( or anti - correlation ) with sunspot cycle and keeping constant in the sunspot cycle .based on the assumption we analyze these magnetograms of quiet sun observed from 2008 april to 2009 july , an interval of invariant sunspot number .the result is shown in the top panel of figure 2 .we can find that moving far away from solar disk center , the horizontal field in quiet sun does not display obvious changes . on the other hand, we avoid the observation from solar polar region , and study these local observations of quiet sun from solar disk center to limb in a few days , i.e. , a period from 2008 december 29 to 2009 january 4 .these magnetograms locate at almost the same magnetic environment of the sun s disk , resembling a series of observations from solar center to limb at the same time .the corresponding result is shown in the bottom panel of figure 2 .for the magnetic distribution from solar disk center to limb , the variation of horizontal field is still not obvious .therefore , in this study , the magnitude of horizontal field is considered as independent of observational location on the solar disk .* the pixel area in each magnetogram is corrected by , where is the heliocentric angle .we identify the horizontal magnetic structures by setting the thresholds of 0.2 mm on the area ( i.e. , equivalent to the area of 4 pixels ) and 1.5 times of the noise level , where the magnetic noise of horizontal field is about 50 g. note that , unlike the vertical field , the horizontal field times the pixel area is not the measurement of magnetic flux since the field is transverse to the line - of - sight .we first adopt the equivalent spatial scale of the horizontal magnetic structures in the transverse surface , i.e. 
, , where is the area of horizontal magnetic structure in the transverse surface .then we assume a vertical height of km ( lites et al .1996 ; jin et al .2009b ) , which is comparable to the photospheric scale height , and compute the area of horizontal magnetic structures by .finally we obtain the magnetic flux of these horizontal magnetic structures by considering their average flux density and area . in order to avoid the effect from the solar network field and plage regions as well as active regions ,we exclude those horizontal magnetic structures with larger spatial scale and stronger magnetic flux density by considering the magnetic flux larger than 5.0 mx .the identification of in region is displayed in figure 3 .the region within the red contours represents the horizontal magnetic structures in the active region , plage region as well as network region .the average area ratio of inter - network region to the entire sp quiet magnetogram reaches 98.0% , and exceeds 42.7% even through in the measurement including active region .in this study , we set the threshold of 1.5 times of the noise level on the inter - network horizontal magnetic field to analyze its flux density . on average17.7% of pixels in a quiet map carry the significant horizontal magnetic signal , and the average area ratio of significant horizontal magnetic signals also reaches 13.7% in a magnetogram including active region .these magnetograms are also analyzed by using different thresholds , such as the 2 and 2.5 times of noise level , but the results display no obvious dependence of the threshold choice .higher thresholds severely reduce the area occupancy of magnetic signals and enhance the magnetic flux density , and result in significantly poor statistics . because our results are not affected , description of employing higher thresholds is not discussed in this study .we first compute the horizontal magnetic flux density of each quiet magnetogram for the identified solar inter - network region , and obtain the monthly average distribution and the standard deviation of inter - network horizontal field , which is shown by the black symbol in the top panel of figure 4 . the corresponding monthly average sunspot number in this period is displayed in the bottom panel of figure 4 .the average inter - network horizontal flux density reaches 87 g. comparing the inter - network vertical flux density above the 1.5 times of noise level , the obvious imbalance between vertical and horizontal flux densities is confirmed .furthermore , it can be found that the imbalance is invariable , i.e. , a constant of 8.7.5 , during the ascending phase of solar cycle 24 , which is shown in the middle panel of figure 4 . within the scope of deviation, the distribution of inter - network horizontal field does not vary from the solar minimum to the current maximum of cycle 24 either .the fact of no cyclic variation of inter - network horizontal field suggests that it is predominantly governed by a process that is independent of the global solar cycle . 
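the identification and flux estimate described above can be sketched as follows; this is an illustrative reading rather than the authors' code: connected-component labelling stands in for the structure identification, the vertical height and the area formula (taken here as the equivalent scale times the height) are assumptions, and the upper flux cutoff used to exclude network and plage elements is left out. python sketch:

import numpy as np
from scipy import ndimage

def horizontal_structures(b_h, pixel_area_mm2, noise_g=50.0, n_sigma=1.5,
                          min_area_mm2=0.2, height_km=150.0):
    # b_h            : 2-d map of horizontal flux density (G)
    # pixel_area_mm2 : foreshortening-corrected area of one pixel (Mm^2)
    # height_km      : assumed vertical extent of the structures (placeholder;
    #                  the paper's exact value is not reproduced here)
    mask = np.abs(b_h) >= n_sigma * noise_g
    labels, n = ndimage.label(mask)
    structures = []
    for k in range(1, n + 1):
        patch = labels == k
        area_t = patch.sum() * pixel_area_mm2     # transverse area (Mm^2)
        if area_t < min_area_mm2:                 # ~4-pixel area threshold
            continue
        scale = np.sqrt(area_t)                   # equivalent spatial scale (Mm)
        area = scale * (height_km / 1.0e3)        # assumed area = scale x height (Mm^2)
        mean_b = np.abs(b_h[patch]).mean()        # average flux density (G)
        flux_mx = mean_b * area * 1.0e16          # 1 Mm^2 = 1e16 cm^2, flux in Mx
        structures.append({"scale_Mm": scale, "mean_B_G": mean_b, "flux_Mx": flux_mx})
    return structures

structures whose flux exceeds the paper's upper cutoff would additionally be discarded to exclude network, plage and active-region elements.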
* due to the randomness of sp local observations , it is necessary to examine whether the local information of horizontal field can represent its full disk information .we choose two target regions : inter - network regions close to an active region and within a large - scale quiet region .the two target regions perhaps suffer from different influences from either strong or weak surrounding flux distribution by some types of flux diffusion or magnetic interaction .based on the consideration , we also analyse the inter - network horizontal measurements surrounding active region or plage region .the result is shown by the green symbol in the top panel of figure 4 .it can be found the in horizontal field surrounding the active region does not display the cyclic variation , either . comparing the cyclic variations of inter - network horizontal field in the large - scale quiet region and at the surroundings of active or plage regions, we can find that their variational ranges are almost the same . just as the cyclic behavior of longitudinal field ( jin et al . 2011 ) , the diffusion of active region is likely to make an obvious effect only on the strong network field .the diffusion of active region seems to be little and even insignificant for the inter - network field . * our resultsalso indirectly confirm early findings obtained by ito et al .( 2010 ) and shiota et al .( 2012 ) who found the invariance of the inter - network magnetic field in the solar polar region by analysing hinode / sot data .some detailed analysis of the solar inter - network region has strongly suggested the presence of local dynamo ( lites 2011 ; buehler et al .2013 ; lites et al .furthermore , the numerical simulation of local dynamo can reproduce the process of generating the solar inter - network horizontal field ( vgler & schssler 2007 ; schssler & vgler 2008 ) .such a local dynamo process seems to naturally explain the cyclic invariance of inter - network horizontal field .the conclusion still needs to be examined by the observations with large field - of - view and continuous temporal coverage . in a sense the partition of network and inter - network regions only by the magnitude of magnetic flux is of uncertainties .the magnetic flux of network magnetic structures in the rapid evolving phase is sometimes very weak , and the magnetic flux of some inter - network magnetic structures is even larger than that of network magnetic structures ( wang et al . 1995 ; zhou et al .2013 ) . the observations with full - disk coverage and high spatial resolution and polarization sensitivity are still needed to examine our result .the authors are grateful to the team members who have made great contribution to the hinode mission .hinode is a japanese mission developed and launched by isas / jaxa , with naoj as domestic partner and nasa and stfc ( uk ) as international partners .it is operated by these agencies in co - operation with esa and nsc ( norway ) .the work is supported by the national basic research program of china ( g2011cb811403 ) and the national natural science foundation of china ( 11003024 , 11373004 , 11322329 , 11221063 , kjcx2-ew - t07 , and 11025315 ) .buehler , d. , lagg , a. & solanki , s. k. 2013 , a&a , 555 , a33 cattaneo , f. 1999 , apjl , 515 , 39 centeno , r. , socas - navarro , h. , lites , b. , et al .2007 , apjl , 666 , 137 hagenaar , h. j. , schrijver , c. j. & title , a. m. 2013 , apj , 584 , 1107 harvey , k. l. , harvey , j. w. & martin , s. f. 1975 , sol .phys . , 40 , 87 harvey , j. w. 
, branston , d. , henney , c. j , et al .2007 , apjl , 659 , 177 ichimoto , k. , lites , b. , elmore , d. , et al .2008 , sol.phys . , 249 , 233 ishikawa , r. & tsuneta , s. 2009 , a&a , 495 , 607 ishikawa , r. , tsuneta , s. , ichimoto , k. , et al .2008 , a&a , 481 , 25 ito , h. , tsuneta , s. , shiota , d. , tokumaru , m. & fujiki , k. 2010 , apj , 719 , 131 jin , c. l. & wang , j. x. 2011 , apj , 732 , 4 jin , c. l. , wang , j. x. , & xie , z. x. 2012 , sol .phys . , 280 , 51 jin , c. l. , wang , j. x. , & zhao , m. 2009a , apj , 690 , 279 jin , c. l. , wang , j. x. , & zhou , g. p. 2009b, apj , 697 , 693 keller , c. u. , harvey , j. w. & solis team 2003 , aspd , 307 , 13 kosugi , t. , matsuzaki , k. , sakao , t. , et al .2007 , sol.phys ., 243 , 3 lites , b. w. 2011 , apj , 737 , 52 lites , b. w. , leka , k. d. , skumanich , a. , et al .1996 , apj , 460 , 1019 lites , b. w. , kubo , m. , socas - navarro , h. , et al .2008 , apj , 672 , 1237 * lites , b. w. , centeno r. & mcintosh , s. w. 2014 , pasj , 66 , 4 * livingston , w. c. & harvey , j. 1975 , baas , 7 , 346 martin , s. f. 1988 , sol ., 117 , 243 orozco surez , d. , bellot rubio , l. r. , del toro iniesta , j. c , et al .2007 , apjl , 670 , 61 * remple , m. 2014 , apj , 789 , 132 * schussler , m. & vgler , a. 2008 , a&a , 481 , 5 shimizu , t. , nagata , s. , tsuneta , s. , et al .2008 , sol.phys . , 249 , 221 shiota , d. , tsuneta , s. , shimojo , m. , et al . 2012 , apj , 753 , 157 smithson , r. c. 1975 , baas , 7 , 346 stenflo , j. o. 2012 , a&a , 547 , 93 suematsu , y. , tsuneta , s. , ichimoto , k. , et al .2008 , sol.phys . , 249 , 197 tsuneta , s. , ichimoto , k. , katsukawa , y. , et al . 2008b , apj , 688 , 1374 tsuneta , s. , ichimoto , k. , katsukawa , y. , et al .2008a , sol.phys . , 249 , 167 vgler , a. & schussler , m. 2007 , a&a , 465 , 43 wang , j. x. , wang , h. m. , tang , f. et al .1995 , sol.phys . , 160 , 277 zhou , g. p. , wang , j. x. , & jin , c. l. 2013 , sol.phys . ,
|
the ubiquity of the solar inter - network horizontal magnetic field has been revealed by space - borne observations with high spatial resolution and polarization sensitivity . however , no consensus on the origin of the horizontal field has been reached among solar physicists . for a better understanding , in this study we analyze the cyclic variation of the inter - network horizontal field by using the spectro - polarimeter observations provided by the solar optical telescope on board hinode , covering the interval from 2008 april to 2015 february . the method of wavelength integration is adopted to achieve a high signal - to - noise ratio . it is found that from 2008 to 2015 the inter - network horizontal field does not vary as solar activity increases , and its average flux density is 87 g. in addition , the imbalance between horizontal and vertical field also remains invariant within the scope of deviation , i.e. , 8.7.5 , from the solar minimum to the maximum of solar cycle 24 . this result confirms that the inter - network horizontal field is independent of the sunspot cycle , and favors the idea that a local dynamo is creating and maintaining the solar inter - network horizontal field .
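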
|
human laugh is a crucial social signal due to the range of inner meanings it carries .this social signaling event may denote the topical changes , communication synchrony and positive affect ; on the other hand , it may also show disagreement or satirist views .therefore , automatic human laugh occurrence or laughter detection in speech may have many applications in spoken dialog and discourse analysis .in addition , the detection of this speech event may lead to increase in the word accuracies in the spontaneous automatic speech recognition .human laugh is developed as an inarticulate utterance to serve as an expressive - communicative social signal .the entire laugh period generally persists from 2 seconds to 8 seconds .there exists many types of laugh . from the acoustical point of view, the sound of laugh can be voiced , as well as it can be unvoiced , resulting into the vocalized and non - vocalized laughter . the whole laugh episode is constituted with a mixture of vocalized and non - vocalized laugh .it was found that the voiced and rhythmic laughs were significantly and more likely to elicit positive responses than the variants such as unvoiced grunts and snort like sounds .the laugh sound or laugh bout can be segmented into three parts , viz.(1 ) onset : explosive laugh , short and steep , ( 2 ) apex : vocalized part of laugh and ( 3 ) offset : post - vocalized fading part of laugh .the vocalized apex part is composed of laugh cycles ( for example , the laugh sound `` ha ha '' ) , each cycle is composed of laugh pulses .the number of pulses depends on the power of the lungs , it can be 4 to 12 for one cycle .these laugh pulses have a rhythmic pattern .although it is found that in sustained laughter the apex might be interrupted by inhalations , human laughter is easily recognized through the detection of apex part .therefore , it is clear that the sound of laugh may also be based on the rhythmic breathing resulting in a staccato vocalization , i.e. , the vocalization with each sound or note sharply detached or separated from the others .detection of the apex part plays dominant role to recognize the human laughter . the majority of the previous works on laugh detection ( cf .section [ sec : relwrk ] ) follow the supervised classification paradigm that may face a long extent of a training phase with considerable amount of costly annotated data .we hypothesize that a rhythmic nature of the vocalized laugh can allow us to use existing rhythm - detection signal processing based techniques , e.g. , detection the rhythm in music , also for the laugh detection .this would lead to an unsupervised and less data - dependent laugh detection , as an alternative to conventional machine learning approaches . in this work, we propose a three stage procedural method to detect human laugh using rhythm , through the laugh apex , which is the most prominent laugh part .this procedural method works in three basic procedural sequences : first we filter out low power spectral density ( psd ) frames using an automatic psd threshold computation based on well - established otsu s threshold technique .then we analyze all high - energy psd frames to detect the rhythmic frames ( such as rhythmic speech and/or rhythmic laugh ) with a music rhythm detector algorithm based on frequency demodulation .we select higher energy frames because human laugh is predominantly conceptualized as vowel - like high energy bursts . 
finally , we compute a statistical threshold to detect only the rhythmic laugh frames .we demonstrate the proposed detection method on naturally occurring conversations , that usually contain plenty of instances of happy and natural human laugh .therefore , we choose multiparty meeting conversations ami as a database for evaluation .the recordings of the ami meeting corpus show a huge variety of spontaneous expressions .the organisation of this paper is structured as follows : in the next section [ sec : relwrk ] , we clarify the laugh as a signal and its types as established by the studies , and also describe in details the related works on laughter detection and recognition . in the following section [ sec : method ] we illustrate the proposed method . in the next consequent section [ sec : detexp ]we describe used data and experimental set - ups ; this is followed by the discussion about the results in the subsection [ sec : res ] .finally , we conclude the findings and possible future works in the section [ sec : concl ] .the vocalized laugh can be spontaneous or voluntary . with clinical observationthere is a clear distinction between spontaneous and voluntary laugh .it is seen that during spontaneous laugh human self - awareness and self - attention is diminished . on the other hand , in voluntary laugh humanproduce a laugh sound pattern similar to the spontaneous laugh but still it differs in many aspects like vowel used ( viz .the derivative of schwa ) , pitch , frequencies and amplitudes , voice quality etc .all these differences have effects on the rhythm of spontaneous and voluntary laugh .bachorowski et al ( 2001 ) found that the vocalized laugh is rhythmic compared to the snort - like laugh or grunts ; and also the vocalized laugh elicits positive emotion than the other kinds of laugh .devillers et al ( 2007 ) found that the unvoiced laughs express more negative emotion , whereas the voiced laugh segments are perceived as positive ones . a sizable number of previous works in laugh occurrence detection are already proved to be impressive in terms of techniques , results and large set of intricate features .the majority of the previous works follow the supervised classification paradigm that may face a long extent of training phase with considerable amount of costly annotated data .many of these works consider the task as a binary ( i.e. laugh vs. non - laugh ) classification or segmentation problem .the label - type of laughter in the laughter detection tasks may vary from the coarse - grained label to the fine - grained one .the coarse - grained laugh detection generally implies to the binary classification ( i.e. laugh vs. non - laugh ) .there exists some instances of coarse - level multi - class detection viz .the laughter classification along with the other non - laugh classes like silences , fillers etc . .in other works , it detects many kinds of laugh such as polite , mirthful , derisive vs. 
non - laugh .there also exists a few works on unsupervised classification of the laugh .some unsupervised techniques depend on the burst detection and classification of the burst as laughter .the affect bursts are defined as short , emotional and non - speech expressions that interrupt speech such as respiration , laughter or unintelligible vocal sound .therefore it is hard to tag the right meaning of ( single or n - tuple ) affective bursts without any reference .the non - parametric statistical methods also have been exploited in real - time , training - free framework to detect laughter ; still one needs to extract features for this technique .majority of unsupervised methods are primarily tested on their own collected data . in this workwe attempt to propose a real - time , rhythm - based approach for laughter detection .we attempt to exploit the rhythmic pattern of laughter , following the work of bachorowski et al ( 2001 ) , we aim to detect the vocalized laughter through detection of the laugh apex occurrence .we do not aim to detect the unvocalized laughter in this work .rhythm is defined as the systematic temporal and accentual patterning of sound . in music ,rhythm perception is usually studied by using metrical tasks .metrical structure also plays an organizational function in the phonology of language , via speech prosody or laughter .we attempt to use this metrical structure of the human laugh without analyzing speech prosody . from the earlier studies , we see that prosody and musical structure ( such as rhythm ) borrow or share concepts since long back .this studies with rhythmic patterns lead to the birth of the linguistic theories of stress - timed and syllable - timed languages . herewe do not consider the rhythm in the speech prosody .recent studies reveal that `` rhythm '' in speech should not be equated with isochrony .the absence of isochrony is not the same as the absence of rhythm . in ,isochrony is defined as the organization of sound into portions perceived as being of equal or unequal duration .strict isochrony expects the different elements to be of exactly equal duration , whereas weak one claims to have the tendency for the different elements to have the same duration .so , the languages can have rhythmic differences which have nothing to do with isochrony .but the rhythm in human laugh is always isochronous like any music , so we exploit the isochronous behavior of the human laugh in this work , and do not consider the non - isochronous rhythm of languages .we use an approach of frequency modulation to retrieve this rhythm , following . to detect rhythmic laughterfirst we segment the whole speech to select the probable laughter segments , then we classify the candidate frames for voiced laughter using a rhythm algorithm based on frequency demodulation ; finally we select the rhythmic laughter frames through a statistical process .we do not consider shared laughter captured on a single channel , rather our method is engineered for a solo laughter by the single participant .we use an unsupervised algorithm to detect laughter using its rhythmic property .this entire process can be divided into three basic sub - processes : first we filter out low power spectral density frames using an automatic psd threshold computation .then we use all high - energy psd frames to detect all rhythmic segments ( such as rhythmic speech and/or rhythmic laughter ) with the rhythm detector algorithm . 
finally , we compute a statistical threshold to detect only the rhythmic laughter frames . based on the detected rhythmic laughter frames , we are able to generate the time boundaries of the laugh segments . description the three aforementioned sub - processesis following .we compute the psd threshold using nonparametric power spectral density ( psd ) estimation through welch s overlapped segment averaging psd estimator , where , i.e. the sampling frequency .we compute the psd threshold following the otsu method . in this methodthe computed psd set ( ps ) is sorted in ascending order , let us consider the index sets as $ ] , then the sorted set is divided into two sets randomly , say : and , where .next , for , we iteratively compute then finally we compute , given by , ^ 2}{\omega(k)*(1-\omega(k))})\big{]}\ ] ] where , , , and . here denotes the -th probability considering the elements of the corresponding set in the iteration , further details is in the paper by otsu ( 1975) .we attempt to acquire the optimal value psd threshold through a brute - force optimization process of running our laugh detection algorithm on the development data . in the section [ sec : res ] ,we experimentally compare the performance of psd threshold computation with the development data using the unsupervised method by otsu ( 1975 ) and the brute - force optimization method .first we select the high psd frames using the threshold computed in subsection [ psd ] .then these high psd frames are passed through the rhythm calculation , thus we select the rhythmic frames among all the high energy frames .more specifically , we call these rhythmic frames as the candidate laughter frames .we basically exploit frequency modulation ( fm ) technique to capture isochronous behavior of rhythm . in this casewe use an oscillator to modulate the frequency of a sinusoidal wave . herethe oscillator is the `` carrier '' and the other one is the `` modulator '' .we attempt to use a sawtooth carrier in this case . since laugh signal has a periodic nature it is traceable as a sawtooth ( or triangular ) waveform , therefore we choose the triangular hanning window as the basic oscillator function , which is computed as follows : {i=1\cdots6}\ ] ] here denotes the hanning window length .the properties of the `` modulator '' fm components are defined by the frequency band limit with a set of six harmonics that starts with zero then it reaches the periodicity pitch 200 hz then all the other four ( , , and ) harmonics of that pitch . herewe choose to follow this filterbank implementation method described in scheirer(1998 ) .each harmonics has two band - ranges .therefore , this also initializes twelve band - range values .these frequency band - limits are used to compute the band - ranges . since the beginning ofthe method we were computing data in the time domain . nowthe signal is taken from the time domain to the frequency domain with fourier transform , and we prepare the output using short time windows . finally , we convolve the inverse fast fourier transformed window data with a fourier transformed half - hanning window .we use a set of six band - limits at this moment : beginning at 200 hz , increasing this in multiple of two , as the frequency results in a more and more complex multi phonic .the resulting wave is the summation of many different sinusoidal waves ; the carrier frequency lies in the middle while the other tones lie above and below it at distances determined by the modulation frequency . 
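as a concrete sketch of the first stage (frame-level psd estimation followed by an otsu-style threshold), the lines below use scipy's welch estimator and a histogram-based otsu computation; the 2.5 s frames with 50% overlap echo the short-time analysis stated later in the text, while the histogram binning and the use of the mean welch power per frame are illustrative choices rather than the authors' exact implementation. python sketch:

import numpy as np
from scipy.signal import welch

def frame_psd(x, fs, frame_len_s=2.5, overlap=0.5):
    # average welch psd power for each (overlapping) frame of the signal
    size = int(frame_len_s * fs)
    step = max(1, int(size * (1.0 - overlap)))
    frames = [x[i:i + size] for i in range(0, len(x) - size + 1, step)]
    return np.array([welch(f, fs=fs)[1].mean() for f in frames])

def otsu_threshold(values, nbins=256):
    # otsu's threshold over a 1-d set of values (here, per-frame psd power)
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)                      # class probability
    mu = np.cumsum(p * centers)               # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.nanargmax(sigma_b)]     # maximize between-class variance

frames whose average psd exceeds the threshold are the high-energy candidates passed on to the rhythm stage.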
when the modulation amplitude rises , the amplitudes of the additional frequencies also rise .however , this increase is difficult to formulate mathematically .the advantage of fm over additive one ( the simple addition of sinusoidal waves ) is that we need to use only two oscillators to convolute a rich and complex rhythmic human laugh sound .although currently we use the six modulation frequencies , this number can be changed if needed .the output of this function is basically a six column matrix , each row of the matrix is one frame .we use the median of this output to use it further in the next subsection [ statp ] .we compute the basic statistic functions ( namely mean and standard deviation ) for all the obtained rhythmic candidate laughter frame with negative gradient ( i.e. basically the negative difference between two consecutive points , and this is done to compute all local maxima points ) .next , we derive the -confidence bounds through the student -test of the standard deviation . then , we compute a statistical threshold for rhythmic laughter frame selection as the difference between the upper bound of the confidence interval and the estimated population - standard deviation computed through same student -test .we compute this threshold on the basis of the hypothesis that the power of laugh is significantly higher than that of rhythmic speech / music .we select the frames as the laughter frames whose standard deviation is equal or higher than the threshold , and we finally compute the time intervals from those selected frames .algorithm [ view ] outlines overall view of the steps involved in the proposed laugh detection .in this framework , the method takes a raw speech signal as input and it outputs the time intervals of the laughter segments in the signal .we consistently apply a standard short - time analysis using a frame window of 2.5 sec ( following the study of ) with 50% overlaps .we used part of ami corpus as our test ( 5 meetings ) and development ( 2 meetings ) data .we used augmented multiparty interaction ( ami ) meeting corpus in this work .ami meeting corpus consists of 100 hours of meeting recordings .the recordings use a range of signals synchronized to a common timeline .these include close - talking and far - field microphones , individual and room - view video cameras , and output from a slide projector and an electronic whiteboard . during the meetings ,the participants also have unsynchronized pens available to them that record what is written .the meetings were recorded in english using three different rooms with different acoustic properties , and include mostly non - native speakers .following petridis and pantic ( 2011 ) we used only close - talk headset audio ( 16khz ) recordings .we used the same data , which is used by petridis and pantic ( 2011 ) ( i.e. the seven meeting recordings recordings of eight participants consisting 6 young males and 2 young females of around 210 mins of recordings ) .we split the whole data set into two parts : the development data consists of two meeting recordings ( i.e. ib4010 and ib4011 sets ) ; we present the final result shown in the table [ bsln - rhythm - comp ] using our test data of five meeting recordings ( i.e. 
ib4001 and ib4005 sets ) .the challenge of ami meeting corpus is that the data has a large amount of overlapping speech .we follow the same baseline protocol using the same feature set like .we establish a baseline for the ( general laugh vs non - laugh ) classification of human laugh using interspeech 2013 paralinguistic feature set .this feature set consists of 141 features .we use support vector machine classifier with 5-fold cross - validation .we extract the features using the opensmile tool .we use libsvm classifier for svm training and prediction .this is a supervised binary sequential classification task .we use the data segments of 20 msec window at the rate of 10 msec .the baseline is achieved in a speaker dependent scenario .we select this baseline because it is the best performing supervised method for laugh detection .table [ bsln - rhythm - comp ] compares the results of our proposed approach with the supervised baseline approach .while the proposed rhythm based algorithm can be used for the detection of positive vocalized laughter using rhythm , the baseline has been designed to classify all kinds of laugh without distinction .we evaluate the results in the percentage f1-measures .the performance of our approach on ami meeting database shows better performance than the corresponding baseline .we notice that it performs in a balanced way in terms of precision and recall .figure [ resfig ] presents the roc ( receiver operating curve ) comparison of the psd threshold computation over the development data .we compare the performance of threshold computation using the otsu method in against to that of the threshold computation using the brute - force optimization method .the roc is computed using the true positive and false positive percentages .we see from the figure [ resfig ] that the roc computed with brute - force optimization thresholding is performing marginally better than the roc computed by the method of otsu ( 1975 ) with the development data .therefore we use the optimized threshold with the development data to present the result in the table [ bsln - rhythm - comp ] .in this work we have outlined a novel algorithm for positive vocalized laugh detection using rhythm .this is a real - time , training - free approach in comparison to the existing supervised approaches of the laugh detection .the algorithm is based on the rhythmic transforms in laughter .the rhythm is analysed through frequency modulations using a modulator and a sawtooth carrier .all the six carriers are fixed , beginning at 200 hz and the other four multiples of 200hz .the strength of this technique also resides in that : we do not need to extract pitch or other intricate feature set to analyze rhythm ; since it does not involve any complex process of computation , or intermediate file or memory handling , the time and space complexity of this method is low .we used ami - role based meeting dataset to evaluate the proposed algorithm .the proposed laugh detection approach works well in comparison to the supervised baseline . 
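for reference, the supervised baseline described above can be sketched roughly as follows; this is not the authors' code: scikit-learn's libsvm-backed svc stands in for libsvm, the rbf kernel and its hyperparameters are assumptions, and the feature matrix is assumed to have been extracted beforehand with opensmile (the interspeech 2013 paralinguistic set) over 20 ms windows. python sketch:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score, f1_score

def laugh_baseline(features, labels):
    # features : (n_frames, n_features) opensmile descriptors, precomputed
    # labels   : 1 for laugh frames, 0 otherwise
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")           # libsvm-backed classifier
    pred = cross_val_predict(clf, features, labels, cv=5)   # 5-fold cross-validation
    return {"precision": precision_score(labels, pred),
            "recall": recall_score(labels, pred),
            "f1": f1_score(labels, pred)}

the rhythm-based detector, in contrast, needs no training pass at all and is evaluated with the same frame-level f1 measure.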
in this work we do not detect all kinds of vocalized laugh ; we focus on the detection of rhythmic vocalized human laughter . the method can also be extended incrementally to further recognition tasks , such as distinguishing laugh types or separating laughter from rhythmic speech or music . the algorithm is sensitive to speech signal clipping and may fail to detect laughter recorded in a noisy environment ; in particular , it will fail on human speech data accompanied by a background score of rhythmic music . the matlab code of the algorithm is available as open source at the following address : https://github.com/sghoshidiap/laughdet .
|
human laugh is able to convey various kinds of meanings in human communications . there exist various kinds of human laugh signal , for example vocalized laugh and non - vocalized laugh . following theories of psychology , among all vocalized laugh types , rhythmic staccato vocalization significantly evokes positive responses in interactions . in this paper we exploit this observation to detect human laugh occurrences , i.e. , laughter , in multiparty conversations from the ami meeting corpus . first , we separate the high energy frames from speech , leaving out the low energy frames through power spectral density estimation . then , we borrow a rhythm detection algorithm from the area of music analysis and apply it to the high energy frames . finally , we detect rhythmic laugh frames by analyzing the candidate rhythmic frames statistically . this novel approach for detection of ` positive ' rhythmic human laughter performs better than the standard laughter classification baseline . keywords : laugh signal detection , paralinguistic analysis
|
studying complex systems is typically based on analyzing large , multivariate data . since , in general terms , complexity is primarily connected with coexistence of collectivity and chaos or even noise , it is of crucial importance to find an appropriate low dimensional representation of an underlying high dimensional dynamical system . in many casesthis aims at denoising and compressing dynamic imaging data .such a problem is particularly frequent in the area of the brain research where a complex but relatively sparse connectivity prevails .understanding brain function requires a characterisation and quantification of the correlations in the signals generated at different areas .direct pathways connect the sensory organs with the corresponding primary cortical areas . in the auditory system of interest here , delivery of a stimulus to either the left or the right earis relayed to both primary auditory cortices , with stronger and earlier response on the contralateral side .the first cortical response arrives very early , well within 20 milliseconds , but it is too weak to be mapped non - invasively from outside .successive waves of cortical activation follow with the strongest around 80 - 100 ms . for a simple stimulus andno cognitive task required the response as seen in the average is effectively over within the first 200 - 300 milliseconds .more elaborate analysis shows that the `` echoic memory '' last for a few seconds .furthermore the activity in each area of the cortex , including the auditory cortex and its subdivisions , is determined by a plethora of interactions with other areas and not just the direct pathway from the cochlea .the variability of the evoked response possibly reflects the many ways a given input in the periphery can be modulated before the strong cortical activations emerge .our treatment of the activity from each auditory cortex as an independent signal bypasses this complexity by lumping many effects into information theoretic measures .the advantage of this approach is that it leads to quantitative analysis of stochastic and collective aspects of the complex phenomena in the auditory cortex and the brain at large . in our previous work have established the existence of correlations between activity in the two auditory cortices , using mutual information as a measure of statistical dependence .the analysis showed that collectivity and noise were present in the data .usually , one analyzes a set of simultaneously recorded signals which emerge from the activity of sub - components of the system .consequently , the presence of correlations in such signals is to be interpreted as a certain sort of cooperation among several or all of these sub - components .though closely related , our present approach is somewhat different . 
instead of studying many subsystems at the same time, we deal with two brain areas only and aim at identifying repetitive structures and their time - relations in consecutive independent trials of delivery of the stimulus .we thus construct the correlation matrix ( which is a normalized version of the covariance matrix ) whose entries express correlations among all the trials that are delivered by experiment .the difference relative to a conventional use of the correlation matrix is that now the indices of this matrix are labeling different presentations of the stimulus and not different subsystems .the resulting eigenspectrum is then expected to carry information about deterministic , non - random properties , separated out from the noisy background whose nature can also be quantified .the details of the experiment can be found in our earlier articles . here , for completeness , we sketch briefly only the most important facts . five healthy male volunteers participated in the auditory experiment .we used 2x37-channel , two - dewar meg apparatus ( each dewar covered the temporal area in one hemisphere ) to measure magnetic field generated by the cortical electric activity .the stimuli were 1 khz tones lasting 50 ms each delivered in three runs to the left , right or both ears in 1 second intervals .the single trial of delivery of stimulus was repeated 120 times for each kind of stimulation .the cortical signals were sampled with 1042 hz frequency .pilot runs were used to place each dewar in turn so that both the positive and negative magnetic field extrema were captured by the 37 channel array .with such a coverage it is feasible to construct linear combinations of the signals which act like virtual electrodes `` sensing '' the activity in the auditory cortex .this computation can be done at each timeslice of each single trial independently , thus building the timeseries for each auditory cortex for further analysis .delivery of a sound stimulus or any change in the continuous stimulus causes a characteristic activity in the auditory cortex which is best illustrated by averaging many such events .the ( averaged ) evoked potential , appears in both hemispheres and has a form of several positive and negative deflections of the magnetic field .the most prominent feature of the average is a high amplitude deflection at about 80 - 100 ms after the onset of the stimulus ( so called m100 ) .the details of the average evoked response are hardly visible in each single trial , partly because of strong background activity , which is not related to the stimulus and partly because of the latency jitter introduced by the many feed - forward and feed - back interactions that occur intermittently between the periphery and the cortex .if as signal we consider what is fairly time - locked to the stimulus onset then signal - to - noise ratio is much improved by averaging the signal over all single trials .we will consider two runs , corresponding to stimuli delivered to the left and right ear .each run comprises single trials , thus we have 120 signals for each hemisphere and each kind of stimulation .the signals are represented by the time series of length of time slices each evenly covering 1 second time interval .since all the stimuli were provided in precisely specified equidistant instants of time , all the series can be adjusted so that the onset of each stimulus corresponds to the same time slice .each signal starts 220 ms before and ends 780 ms after the onset .a band pass filter was applied in the 1 
- 100 hz range . for a simple auditory stimulus and no cognitive task associated with it ,the average evoked response lasts for 200 - 300 ms ; this is also reflected in our earlier mutual information study of the signals .since other parts of each series are associated with activity which is not time - locked to the stimulus , the appearence of similar events in both hemispheres and across trials results in correlations that are much stronger in the first few hundred millisecond .the presence of correlations and collectivity can not be excluded _ a priori _ from other periods and it is therefore of considerable interest to compare two such intervals .we have settle on two such intervals , each with 250 timeslices : the first we call the evoked potential ( ep ) interval and it covers the first 250 timeslices after stimulus onset , i.e. 250 time slices ( 2 - 241 ms ) ; this is the period where the average signal is strong .the second interval we consider as baseline or background ( b ) and for this we choose the interval from 501 ms and ending 740 ms after the onset of the stimulus .since the time between stimuli is one second our choice avoids the time just before stimulus onset , when anticipation and expectation is high while being as far as possible from the stimulus onset .for the two time - series and of the same length , one defines the correlation function by the relation where denotes a time average over the period studied . for two sets of time - series each all combinations of the elements can be used as entries of the correlation matrix . by diagonalizing one obtains the eigenvalues and the corresponding eigenvectors . in the limiting case of entirely random correlations the distribution is known analytically and reads : where with , , and where is equal to the variance of the time series ( unity in our case ) . for our present detailed numerical analysiswe select two characteristic subjects ( db and fb ) out of all five subjects who participated in the experiment . the background activity in both subjects does not reveal any dominant rhythm which , if present in two signals , may introduce additional , spontaneous correlations not related to the stimulus . the signals of db reveal a relatively strong eps and a good signal - to - noise ratio .fb is somehow on the other side of the spectrum of subjects , as its eps are small and hardly visible and the signals are dominated by a high - frequency noise which results in a poor snr . the signals forming pairs in eq . ( [ eq : cab ] ) may come either from the same or from the opposite hemispheres . the first possibility we term the _ one - hemisphere _ correlation matrix and the latter one the _ cross - hemisphere _ correlation matrix .the first matrix is , by definition , real symmetric and the second one must be real but , in general , it is not symmetric. an interesting global characteristics of the dynamics encoded in is provided by the distribution of its elements .an example for such a distribution is shown in fig . 1 for the one - hemipshere correlation matrix .as one can see in the background region ( solid lines ) the distributions are gaussian - like centered at zero .this implies that the corresponding signals are statistically independent to a large extent .a significantly different situation is associated with the evoked potential part of the signal .the most obvious effect is that the centre of mass of the distribution is shifted towards the positive values . 
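the trial-indexed correlation matrix used here, and the reference eigenvalue bounds for purely random correlations, can be sketched in a few lines; the bounds below are the standard wishart (marchenko-pastur) expressions for q = t / n and unit variance, stated as an assumption rather than a quotation, and the surrogate data merely illustrate the purely random case. python sketch:

import numpy as np

def trial_correlation_matrix(trials):
    # trials: (N, T) array, one row per single trial (e.g. N=120 trials, T=250 slices)
    x = np.asarray(trials, dtype=float)
    x = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return x @ x.T / x.shape[1]               # N x N correlation matrix

def random_bounds(n_trials, n_samples, sigma2=1.0):
    # eigenvalue bounds expected for entirely random correlations (q = T/N >= 1)
    q = n_samples / n_trials
    lam_min = sigma2 * (1.0 + 1.0 / q - 2.0 * np.sqrt(1.0 / q))
    lam_max = sigma2 * (1.0 + 1.0 / q + 2.0 * np.sqrt(1.0 / q))
    return lam_min, lam_max

rng = np.random.default_rng(0)
c = trial_correlation_matrix(rng.standard_normal((120, 250)))   # uncorrelated surrogate
w = np.linalg.eigvalsh(c)
print(w.min(), w.max(), random_bounds(120, 250))

for uncorrelated surrogate trials the spectrum stays essentially within these bounds; a strongly repelled largest eigenvalue, as found in the ep interval, is the signature of a collective response.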
in this respectthere is also a difference between the subjects : the average value of elements for db ( approx .0.35 ) is considerably higher than for fb ( 0.1 ) .this indicates that the signals in fb are on average less correlated even in the ep region than the signals recorded from db .this may originate from either a smaller amplitude of the collective response of fb s cortex or from a much smaller signal - to - noise ratio . for the cross - hemisphere correlation matrixthe relevant characteristics are similar .the only difference is that the shifts ( in both subjects ) are slightly smaller .more specific properties of the correlation matrix can be analysed after diagonalazing .the one - hemisphere correlation matrix is real and symmetric and consequently all its eigenvalues are real .the structure of their distribution is displayed in fig .the eigenvalues are shown for several characteristic cases : two subjects , the left and right hemispheres and two regions ( ep and b ) .the structure of the eigenvalue spectra depends on the subject but first of all on the region of the signal .there is a clear separation of the largest eigenvalue from the rest of the spectrum in the ep region in db .this effect is much less pronounced for fb and considerably reduced in b. this can be understood if we compare this result with fig . 1 . to a first approximationthe distribution of elements in ep can be described as a shifted gaussian : where denotes a gaussian matrix centered at zero and is a matrix whose entries are all unity . is a real number .of course , the rank of is one and , therefore , the second term alone in eq .( [ eq : gu ] ) develops only one nonzero eigenvalue of magnitude . since the expansion coefficients of this particular state are all equal this assigns a maximum of collectivity to such a state .if is significantly larger than zero the structure of is predetermined by the second term in eq .( [ eq : gu ] ) . as a resultthe spectrum of comprises one collective state with large eigenvalue .since in this case constitutes only a noise correction to all the other states are connected with significantly smaller eigenvalues . in terms of the signals analysed here the first component of ( [ eq : gu ] ) corresponds to uncorrelated background activity and noise and the second one originates from the synchronous response of the cortex to external stimuli .similar characteristics of collectivity on the level of the correlation matrix has recently been identified in correlations among companies on the stock market . in relation to eq .( [ eq : rho ] ) the presence of a strongly separated eigenvalue is one obvious deviation which is consistent with the non - random character of the corresponding eigenstate .further deviations can be identified by comparing the boundaries of our calculated spectrum to of eq .( [ eq : lambda ] ) .for we obtain and .clearly , there are several eigenvalues more which are larger than .this may indicate that the corresponding eigenstates absorb a fraction of the collectivity .however a closer inspection shows that also on the other side of the spectrum there are eigenvalues smaller than and basically no empty strip between and can be seen . 
bythis our empirical distribution seems to indicate that an effective which determines this distribution is significantly smaller than .this , in turn , may signal that the information content in the time - series of length is equivalent to a significantly shorter time - series .this conclusion is supported by the time - dependence of the autocorrelation function calculated from our signals .it drops down relatively slowly and reaches zero only after 20 - 30 time - steps between consecutive recordings .memory effects are present and hence neighboring recordings are not independent ; this of course is not surprising because neural activity in the brain has a finite duration ( and 25 - 30 ms is an important time scale ) and there are plenty of time - delayed processes and interactions which will produce activity in neighbouring times with shared information .one could explicitly test whether this is a reason our calculated deviates from the prediction of eq .( [ eq : rho ] ) by recomputing with appropriately sparser time - series .unfortunately , the number of recordings covering the ep is too small for this .instead we perform the following analysis : we generate the new time - series such that , i.e. , the time - series of differences .these destroy the memory effects and now the autocorrelation function drops down very fast .3 shows the density of eigenvalues of the correlation matrix generated from .now the agreement with eq .( [ eq : rho ] ) improves and becomes relatively good already when every second time - point from is taken , such that the total number of them remains the same .taking more distant points , leaving out intermediate ones , drastically reduces the correlation between the remaining successive points .the above thus illustrates the subtleties connected with the correlation matrix analysis of time - series .replacing our original time - series by improves the agreement with eq .( [ eq : rho ] ) but at the same time the collective state connected with ep dissolves .this is due to disappearance in of the memory effects present in .therefore , in the following we return to our original time - series .another statistical measure of spectral fluctuations is provided by the nearest - neighbor spacing distribution .the corresponding spacings are computed after renormalizing the eigenvalues in such a way that the average distance between the neighbors equals unity .a related procedure is known as unfolding .two characteristic and typical examples of such distributions corresponding to ep and b regions are shown in fig . 4 ( for db ) .while in both cases these distributions agree well with the wigner distribution which corresponds to the gaussian orthogonal ensemble ( goe ) of random matrices , some deviations on the level of larger distances between neighboring states are more visible in the ep than in the b region .this in fact is consistent with the presence of larger eigenvalues in the ep case as shown in fig . 2 .interestingly , the bulk of even here agrees well with goe .in order to further quantify the observed deviations we also fitted the histograms with the so - called brody distribution where ^{1+r}$ ] . depending on a value of the repulsion parameter , this distribution describes the intermediate situations between the poisson ( no repulsion , ) and the standard wigner distribution ( goe ) .the best fit in terms of eq .( [ eq : brody ] ) gives in the ep and in the b case , respectively .thus we clearly see that the measurements share the universal properties of goe . 
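a minimal sketch of the spacing analysis (unfolding, nearest-neighbour spacings, and the wigner and brody reference curves) is given below; the polynomial unfolding is one common recipe and is an assumption here, not necessarily the authors' procedure. python sketch:

import numpy as np
from scipy.special import gamma as gamma_fn

def unfolded_spacings(eigvals, deg=7):
    # fit a smooth polynomial to the spectral staircase N(E), map eigenvalues
    # through it, and return nearest-neighbour spacings rescaled to unit mean
    e = np.sort(np.asarray(eigvals, dtype=float))
    staircase = np.arange(1, len(e) + 1)
    xi = np.polyval(np.polyfit(e, staircase, deg), e)
    s = np.diff(xi)
    return s / s.mean()

def wigner(s):
    # goe wigner surmise
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s ** 2)

def brody(s, r):
    # brody distribution: r = 0 gives poisson, r = 1 reproduces the wigner surmise
    a = gamma_fn((r + 2.0) / (r + 1.0)) ** (r + 1.0)
    return (r + 1.0) * a * s ** r * np.exp(-a * s ** (r + 1.0))

fitting brody(s, r) to the histogram of unfolded spacings yields the repulsion parameter r quoted above.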
a departure betraying some collectivityis nevertheless present in both b and ep intervals , but even in the ep interval the effect of the stimulus does not change this picture significantly : it results in one or at most few remote distinct states in the sea of low eigenvalues of the goe type . in order to further explore this effectwe look at the distribution of the eigenvector components for the same cases as in fig .4 . fig .5 displays such a distribution generated from eigenvectors associated to one hundred lowest eigenvalues ( main panels of the figure ) calculated both for the ep ( upper part ) and b ( lower part ) regions .the result is a perfectly gaussian distribution in both cases .however , in ep a completely different distribution ( upper inset ) corresponds to the state with the largest eigenvalue .the charactersitic peak located at around 0.1 documents that majority of the trials contribute to this eigenvector with similar strength .this eigenvector is thus associated with a typical behavior of many single - trial signals .the component values in the largest eigenvalue in b also deviate from a gaussian distribution ( inset in the lower part of fig .5 ) although in this case their distribution is largely symmetric with respect to zero .this makes the two eigenvectors in b and ep regions approximately orthogonal which indicates a different mechanism generating collectivity in these two regions .a more explicit way to visualise the differences among the eigenvectors is to look at the superposed signals for , 119 and 75 these are shown in fig .6 using the eigenvectors calculated for the ep ( middle panel ) and for b ( lower panel ) regions .the signals corresponding to the largest eigenvalues develop the largest amplitudes in both cases .in the first case ( ep ) it very closely resembles a simple average ( upper panel ) over all the trials . in the second case ( b )long range correlations are clearly present , demonstrating that there is more in the signal than the short latency correlations in ep .the large eigenvalues in b also show a degree of collectivity . when signals weighted by the eigenvectors with the highest eigenvalue in ep and b are compared we see that there is essentially no amplification in the other region ( i.e. 
in the ep interval when the b - weighted signals are used ) .this provides another indication that different mechanisms are responsible for the collectivity at these two different latency ranges .analogous effects of collectivity for are already much weaker and disappear completely as an example of shows .we now turn to the cross - hemisphere correlation function , obtained by forming pairs in eq .( [ eq : cab ] ) from the time - series representing opposite hemispheres ( with ) .introducing in addition a time - lag between such signals , and dropping the rather obvious superscripts for the left and right hemisphere , we define a delayed correlation matrix a similar cross - correlation time - lag function has been employed in the past to investigate across trials correlations , but because of the high computational load of an exhaustive comparison across different delays the analysis was restricted to the computation of the time - lagged cross - correlation between the average and individual single trials .the spectral decomposition of the cross - correlation matrix provides a more elegant approach , requiring the solution of the -dependent eigenvalue problem since can now be asymmetric its eigenvalues can be complex ( but forming pairs of complex conjugate values since remains real ) and in our case they generically are complex indeed .one anticipated exception may occur when similarity of the signals in both hemispheres takes place for a certain value of . in this case is dominated by its symmetric component and the effect , if present , is thus expected to be visible predominantly on the largest eigenvalue .it is more likely to see this effect in the ep region of the time - series .we thus calculate the cross - hemisphere correlation matrix from the -long subintervals of and covering the eps .7 presents the resulting real and imaginary parts of the largest eigenvalue as a function of for two subjects and two kinds of stimulation ( left and right ear ) .as it is clearly seen the large real parts are accompanied by vanishing imaginary parts .based on this figure several other interesting observations are to be made .first of all strongly depends on and reaches its maximum for a significantly nonzero value of .this reflects the already known fact that the contralateral ( opposite to the side the stimulus is delivered ) hemisphere responds first and thus the maximum of synchronization occurs when the signals from the opposite hemispheres are shifted in time relative to each other .( here means that the signal from the right hemisphere is retarded relative to the left hemisphere and the opposite applies to ) .furthermore , the magnitude ) of the time - delay estimated from locations of the maxima agrees with an independent estimate based on the mutual information .even a stronger degree of synchronization for db relative to fb , as can be concluded from a significantly larger value of in the former case , agrees with this previous study .finally , fig .8 shows some examples of the eigenvalue distribution on the complex plane . 
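the delayed, generally asymmetric cross-hemisphere matrix and its complex eigenvalues can be sketched as follows; the trial arrays are assumed to be aligned on stimulus onset, the sign convention for the time-lag is an illustrative choice, and selecting the eigenvalue of largest modulus is likewise a choice made here for the sketch. python sketch:

import numpy as np

def lagged_cross_correlation(left, right, tau):
    # left, right : (N, T) single-trial arrays from the two hemispheres
    # tau         : time-lag in samples; tau > 0 pairs right-hemisphere samples with
    #               left-hemisphere samples tau steps later (which convention matches
    #               the text is an assumption)
    t = left.shape[1]
    if tau >= 0:
        l, r = left[:, tau:], right[:, :t - tau]
    else:
        l, r = left[:, :t + tau], right[:, -tau:]
    l = (l - l.mean(axis=1, keepdims=True)) / l.std(axis=1, keepdims=True)
    r = (r - r.mean(axis=1, keepdims=True)) / r.std(axis=1, keepdims=True)
    return l @ r.T / l.shape[1]               # generally non-symmetric N x N matrix

def leading_eigenvalue_vs_lag(left, right, lags):
    # complex eigenvalue of largest modulus for each lag
    out = []
    for tau in lags:
        w = np.linalg.eigvals(lagged_cross_correlation(left, right, tau))
        out.append(w[np.argmax(np.abs(w))])
    return np.array(out)

a large real part with vanishing imaginary part of this leading eigenvalue then marks the lag of maximal interhemispheric synchronization, as in fig. 7.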
in the ep regionthe specific value of the time - delay , upper panel ) corresponds to maximum synchronization between the two hemispheres for this particular subject .here we see one strongly repelled eigenvalue with a large real part and vanishing imaginary part .an interesting sort of collectivity can be inferred from an example shown in the middle panel ) of fig .the largest eigenvalue is about a factor of 3 repelled more in the imaginary axis direction than in the real direction .this indicates that the antisymmetric part of is dominating it which expresses certain effects of antisynchronization ( synchronization between the signals opposite in phase ) . in the b region , on the other hand , there are basically no such effects of synchronization between the two hemispheres and , consequently , the complex eigenvalues are distributed more or less uniformly around ( 0,0 ) as an example in the lowest panel of fig . 8 showsthe standard application of the correlation matrix formalism is to study correlations among ( nearly ) coincident events in different parts of a given system .a typical principal aim of the related analysis is to extract a low - dimensional , non - random component which carries some system specific information from the whole multi - dimensional background activity .the advantage of the correlation matrix formalism is that it allows to directly relate the results to universal predictions of the theory of random matrices .the present study shows that the correlation matrix provides a useful tool for studying the underlying mechanism which gives rise to collectivity from a collection of events or signals sampled in different regions .the brain auditory experiment considered here is one example where there is a need for such an analysis . in this way we were thus able to quantify the nature of the background brain activity in two distinct periods which turns out to be largely consistent with the gaussian orthogonal ensemble of random matrices , both in absence as well as in presence of the evoked potentials .the analysis also allows to compare the degree of collectivity from the properties of the eigenvectors with the highest eigenvalues .crucially the same analysis allows also a quantification of the degree of collectivity .the beginnings of how the method can be extended to study correlations between the two sources of signals was also outlined . in this casethe correlation matrix is asymmetric and results in complex eigenvalues .an immediate application of such an extension is to look at correlations among signals recorded in our experiment from the opposite hemispheres . introducing in addition the time - lag between the signals one can study the effects of delayed synchronization between the two hemispheres .the quantitative characteristics of such synchronization remain in agreement with those found by other means .l , s.j .williamson and l. kaufman , science * 258 * , 1668(1992 ) .liu , a.a .ioannides and j.g .taylor ( 1998 ) , neuroreport * 9 * , 2679(1998 ) l.c .liu , a.a .ioannides and h.w .mller - grtner ( 1998 ) electroenceph .. neurophysiol . * 106 * , 64(1998 ) j. kwapie , s. drod , l.c .liu and a.a .ioannides , phys . rev . *e58 * , 6359 ( 1998 ) a.m. fraser , and h.l .swinney , phys . rev . *a33 * , 1134 ( 1986 ) s. drod , j. 
kwapie , a.a .ioannides and l.c .liu , in _ collective excitations in fermi and bose systems _ , edited by c.a .bertulani , l.p .canto and m.s .hussein ( world scientific , singapore , 1999 ) , pp .62 - 77 d.s .broomhead and g.p .king , physica * 20d * , 217(1986 ) l.c .liu and a.a .ioannides , brain topogr .* 8b(4 ) * , 385 ( 1996 ) m. hmlinen , r. hari , r.j .ilmoniemi , j. knuutila and o. lounasmaa , rev .* 65 * , 413(1993 ) o.d .creutzfeldt , _ cortex cerebri _ , ( oxford university press , oxford , 1995 ) a. edelman , siam j. matrix anal* 9 * , 543(1988 ) ; + a.m. sengupta and p.p .mitra , phys .* e60 * , 3389(1999 ) s. drod , f. grmmer , f. ruf and j. speth , _ dynamics of competition between collectivity and noise in the stock market _ , lanl preprint , cond - mat/9911168 t.a .brody , j. flores , j.b .french , p.a .mello , a. panday , and s.s.m .wong , rev .53 * , 385 ( 1981 ) m.l .mehta , _ random matrices _ ( academic press , boston,1991 ) s. drod and j. speth , phys . rev. lett . * 67 * , 529(1991 ) * fig . 1 .* distributions of for the one - hemisphere correlation matrix .the upper panel corresponds to db and the lower one to fb .the solid lines display such distributions evaluated in the regions beyond evoked activity ( b ) and the dashed lines in the ep region . +* fig . 2 .* structure of the eigenvalue spectra of the correlation matrices ( one - hemisphere correlations ) for the two discussed regions of the signals ( evoked potential - ep , background activity - b ) for db ( upper part ) and fb ( lower part ) . in each panelthere are two spectra of eigenvalues , corresponding to the right hemisphere ( circles ) and the left one ( triangles ) .the eigenvalues are ordered from the smallest to the largest .+ * fig . 3 * density of eigenvalues of the correlation matrix calculated from the points of the time - series of increments of the original time - series , i.e. , . in the lower panelevery second point of is taken but the number of such points is still 250 .the dashed line corresponds to the distribution prescribed by eq .( [ eq : rho ] ) . + * fig . 4 .* nearest - neighbor spacing distribution ( histogram ) of the eigenvalues of for subject db .the upper panel corresponds to the evoked potential ( ep ) region of the time - series and the lower panel to the background ( b ) activity part .the distributions have been created after unfolding the eigenvalues .the smooth solid curves illustrate the wigner distribution and the dashed curves the best fit in terms of the brody distribution . + * fig . 5 .* distribution of the eigenvector components ( ) for ep ( upper part ) and b ( lower part ) regions ( subject db ) .the main panels correspond to one hundred lowest eigenvalues , while the insets show plots of the same quantity for the eigenvector corresponding to ( ) . for comparison , gaussianbest fits are also presented ( dotted lines ) .( note different scales in the figure . ) + * fig .the comparison of the signal obtained by simple average over all 120 trials ( upper panel ) and the signals obtained from eq .( [ eq : sup ] ) for both regions , ep ( middle part ) and b ( lower part ) for subject db .signals in the middle and lower panels denote superpositions for ( solid line ) , ( dashed line ) and ( dotted line ) . + * fig . 
7 . * calculated from the cross - hemisphere correlation matrix . the upper part corresponds to db and the lower part to fb . both panels illustrate two kinds of stimulation : left ear ( le ) and right ear ( re ) . the solid lines denote the real part of while the dashed and dotted ones its imaginary part . the sign of denotes retardation of a signal from the right hemisphere ( ) or the left one ( ) . + * fig . 8 . * examples of the eigenvalue distribution of the cross - hemisphere correlation matrix for the right ear stimulation for db obtained from the ep region ( upper and middle panels ) and the b region ( lower panel ) . all parts present the distributions on the complex plane . the eigenvalues for , which corresponds to the maximum of in fig . 7 , are shown in the upper panel and the eigenvalues for ( corresponding to strong antisymmetry of * c * ) are presented in the middle one . a typical distribution of the eigenvalues in the b region is illustrated in the lower part . ( note the different scale in the middle panel . )
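as a compact illustration of the kind of analysis described above, the following numpy sketch builds an equal-time correlation matrix over trials, inspects its eigenvalue spectrum, and forms a time-lagged cross-hemisphere matrix whose asymmetry produces complex eigenvalues. the trial counts, signal lengths and white-noise surrogates are illustrative assumptions only; this is not the code or the data used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# surrogate stand-ins for the recorded epochs: n_trials signals of n_samples
# points per hemisphere (shapes and white-noise content are assumptions)
n_trials, n_samples = 120, 250
left = rng.standard_normal((n_trials, n_samples))
right = rng.standard_normal((n_trials, n_samples))

def normalize(x):
    """Remove the mean and set unit variance of each single-trial signal."""
    x = x - x.mean(axis=1, keepdims=True)
    return x / x.std(axis=1, keepdims=True)

# one-hemisphere correlation matrix C_ij = <x_i x_j> over time; it is symmetric,
# so its eigenvalues are real and can be compared with random-matrix predictions
L = normalize(left)
C = L @ L.T / n_samples
print("largest one-hemisphere eigenvalues:", np.linalg.eigvalsh(C)[-3:])

def cross_hemisphere(a, b, lag):
    """Time-lagged correlation matrix between the two hemispheres; it is
    not symmetric, so its eigenvalues are complex in general."""
    a, b = normalize(a), normalize(b)
    if lag >= 0:
        a, b = a[:, lag:], b[:, :b.shape[1] - lag]
    else:
        a, b = a[:, :lag], b[:, -lag:]
    return a @ b.T / a.shape[1]

lam = np.linalg.eigvals(cross_hemisphere(left, right, lag=5))
print("largest |eigenvalue| of the lagged cross matrix:", np.abs(lam).max())
```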
|
we adopt the concept of the correlation matrix to study correlations among sequences of time - extended events occurring repeatedly at consecutive time - intervals . as an application we analyse the magnetoencephalography recordings obtained from the human auditory cortex in epoch mode during delivery of sound stimuli to the left or right ear . we look into the statistical properties and the eigenvalue spectrum of the correlation matrix calculated for signals corresponding to different trials and originating from the same or opposite hemispheres . the spectrum of the correlation matrix largely agrees with the universal properties of the gaussian orthogonal ensemble of random matrices , with deviations characterised by eigenvectors with high eigenvalues . the properties of these eigenvectors and eigenvalues provide an elegant and powerful way of quantifying the degree of the underlying collectivity during well - defined latency intervals with respect to stimulus onset . we also extend this analysis to study the time - lagged interhemispheric correlations , as a computationally less demanding alternative to other methods such as mutual information .
|
roughly speaking , iterated brownian motion ( ibm ) is `` brownian motion run at an independent one - dimensional brownian clock . '' of course , this is not rigorous because the one - dimensional brownian motion can take negative values , whereas brownian motion is defined only for nonnegative times .there are two natural ways to get around this .first , one can use the absolute value of the one - dimensional brownian motion .this process is one of the subjects of the papers allouba and zheng ( 2001 ) and allouba ( 2002 ) , where various connections with the biharmonic operator are presented .those authors call their process `` brownian - time brownian motion '' ( btbm ). the other rigorous definition of ibm is the one we will use and it is due to burdzy ( 1993 ) .he uses a natural extension of brownian motion to negative times , called `` two - sided brownian motion . ''formally , let be independent -dimensional brownian motions started at and suppose is one - dimensional brownian motion started at 0 , independent of .define two - sided brownian motion by then iterated brownian motion is although ibm is not a markov process , it has many properties analogous to those of brownian motion ; we list a few here . 1 . for instance, the process scales .that is , for each , is ibm .the law of the iterated logarithm holds ( burdzy ( 1993 ) ) there is also a chung type lil ( khoshnevisan and lewis ( 1996 ) ) and various kesten type lil s ( csrg , fldes and rvsz ( 1996 ) ) for ibm .other properties for local times are proved in xiao ( 1998 ) .the process has order variation ( burdzy ( 1994 ) ) : ^ 4 = 3(t - s ) \text { in } l^p,\ ] ] where is a partition of p_1\le 2p_1>2\dfrac{p_1}2<1\dfrac{p_1}2 = 1\dfrac{p_1}2>1 ] , } |i'_\alpha(\gamma w)| e^{-w } \dw < \infty,\ ] ] by formula 8.486.2 on page 970 of gradshteyn and ryzhik ( 1980 ) , .\ ] ] hence by , .\ ] ] in particular , } |i'_\alpha(\gamma w)| \le c(\alpha ) e^{bw}[w^{\alpha-1 } + w^{\alpha+1}].\ ] ] then since and , follows .thus \\ & = & \frac12 \left[\frac{\gamma^{\alpha-1}}{\sqrt{1-\gamma^2}\ [ 1+\sqrt{1-\gamma^2}]^{\alpha-1}}\right]\\ & + & \frac12\left[\frac{\gamma^{\alpha+1}}{\sqrt{1-\gamma^2}\ [ 1 + \sqrt{1-\gamma^2}]^{\alpha+1}}\right]\\ & \le & \frac12 ( 1-\gamma^2)^{-1/2 } [ \gamma^{\alpha-1 } + \gamma^{\alpha+1}],\end{aligned}\ ] ] where we have used formula 6.611.4 on page 708 of gradshteyn and ryzhik ( 1980 ) for the third equality . we also see that the derivative is nonnegative . to finish , observe that as claimed .to prove theorem [ thm1.4 ] , we will need the following consequence of and .[ cor2.4 ] for , ^{-\alpha}. \eqno \square\ ] ] by theorem [ thm1.3 ] , and lemma [ lem2.1 ] , for , now for we have by corollaries [ cor2.2 ] and [ cor2.4 ] , by and the integral test .hence we can exchange summation and integration above to get , uniformly for , m_j(\theta)\\ & = \frac12 r^{\frac{n}2 - 2 } \rho^{1-\frac{n}2 } \sum^\infty_{j=1 } \alpha^{-1}_j \gamma^{\alpha_j } [ 1+(1-\gamma^2)]^{-\alpha_j } \cdot \\ & \quad\cdot\left[~{\int\limits}_{\partial d } \sin \varphi(\eta ) \frac\partial{\partial n_\eta } m_j(\eta ) \mu(d\eta)\right ] m_j(\theta),\end{aligned}\ ] ] as claimed . if is large , then is small and so by theorem [ thm1.4 ] , as m_1(\theta),\ ] ] where we have used the fact that since the hopf maximum principle ( protter and weinberger ( 1984 ) , theorem 7 on page 65 ) implies on . 
since as ,we get the desired asymptotic upon integrating and appealing to .let be the first exit times of from and for , define for typographical simplicity we write for .then for any , by independence and symmetry .writing for the density of , by independence of and , hence by theorem [ thm1.3 ] , lemma [ lem2.1 ] and , for , , \sin \varphi(\eta ) \mu(d\eta ) du\ f_z(v)dv.\end{aligned}\ ] ] there is no danger of circular reasoning in using theorem [ thm1.3 ] since its proof is self - contained . using corollary [ cor2.2 ] ,if we can show for , then by monotone convergence and dominated convergence , exchange of summation with integration is allowed and for , \cdot\nonumber\\ \label{eq3.3 } & \quad \cdot \int^\infty_0 \int^\infty_0 \frac{v}{u(u+v ) } e^{-\frac{\rho^2+r^2}{2u } } i_{\alpha_j } \left(\frac{\rho r}u\right ) f_z(v)dudv.\end{aligned}\ ] ] the work to justify has been done in section 2 : the term is bounded above by then follows from lemma [ lem2.3]a , since .as it stands , the behavior of for large is not obvious from .it will turn out that the term dominates . inwhat follows , we write where is as in theorem [ thm1.4 ] . from we have since after an integration by parts , ^{-2 } p_z(\tau^->v )e^{-s } i_\alpha \left(\frac{2\rho rs}{\rho^2+r^2}\right ) \frac{\rho^2+r^2}{2s^2}\ dsdv\nonumber\\ & = \frac2{\rho^2+r^2 } \int^\infty_0 \int^\infty_0 \left[1 + \frac{2sv}{\rho^2+r^2 } \right]^{-2 } p_z(\tau^->v ) e^{-s } i_\alpha \left(\frac{2\rho rs}{\rho^2+r^2}\right ) \ dsdv\nonumber\\ \label{eq3.5b } & = \int^\infty_0 \int^\infty_0 h\ dsdv,\quad \text{say.}\end{aligned}\ ] ] * case 1 : * . by , . by , for fixed , choose independent of so large that then here and in what follows , will be a number whose exact value might change from line to line , but will always be independent of and . as for the asymptotic in partb ) , notice that and ^{-2 } p_z(\tau^->v ) e^{-s } r^\alpha i_\alpha \left(\frac{2\rho rs}{\rho^2+r^2}\right)\\ & = 2p_z(\tau^->v ) e^{-s } \frac{(\rho s)^\alpha}{\gamma(\alpha+1)},\end{aligned}\ ] ] using the asymptotic hence by and the dominated convergence theorem in , as claimed . *case 2 : * .this part is more delicate because this time .let and use the asymptotics and to choose and such that and notice is independent of and is not .^\alpha}\nonumber\\ \intertext{(gradshteyn and ryzhik ( 1980 ) , 6.611.4 on page 708 ) } \label{eq3.13 } & \le k r^{-2-\alpha } ( 2\rho)^\alpha \quad \text{for}\quad r\ge m_3 \quad \text{large,}\end{aligned}\ ] ] as for , by ^{-2}e^{-s } k \left(\frac{\rho rs}{\rho^2+r^2}\right)^\alpha \frac{e^\alpha e^{\frac{2\rho rs}{\rho^2+r^2}}}{\left(\alpha+\frac12\right)^\alpha } \ dsdv\\ & = \frac{k}{\rho^2+r^2 } \left(\frac{\rho r}{\rho^2+r^2}\right)^\alpha \left[\frac2{\rho^2+r^2}\right]^{-2 } \frac{e^\alpha}{\left(\alpha+\frac12\right)^\alpha } \int^\infty_{m_2r } s^{\alpha-2 } e^{-s } e^{\frac{2\rho rs}{\rho^2+r^2 } } \ ds\\ & \le k \left(\frac{\rho r}{\rho^2+r^2}\right)^\alpha ( \rho^2+r^2 ) \frac{e^\alpha}{\left(\alpha+\frac12\right)^\alpha } \int^\infty_{m_2r } s^{\alpha-2 } e^{-3s/4 } e^{-m_2r/4 } e^{\frac{2\rho rs}{\rho^2+r^2 } } \ ds\\ & \le k\rho^\alpha r^{2-\alpha } \frac{e^\alpha}{\left(\alpha+\frac12\right)^\alpha } e^{-m_2r/4 } \int^\infty_{m_2r } s^{\alpha-2 } e^{-s/2 } \ ds,\end{aligned}\ ] ] for large , say , where is independent of .thus now we examine the dominant piece . 
reversing the order of integration , then changing into , write and observe now for we have , hence by , becomes ^\alpha \int^{m_2r}_0 h\left(\frac{2sm_1}{\rho^2+r^2}\right ) s^{\alpha+p_1/2 - 1 } e^{-s/2 } \ ds\end{aligned}\ ] ] for large , independent of .if then by , this yields ^\alpha 2^\alpha \gamma \left(\alpha + \frac{p_1}2\right)\nonumber\\ & = kr^{-p_1 } \left[\frac{2\rho e}{r\left(\alpha+\frac12\right)}\right]^\alpha \left(\alpha + \frac{p_1}2 - 1\right ) \gamma\left(\alpha + \frac{p_1}2 - 1\right)\nonumber\\ & \le kr^{-p_1 } \left[\frac{2\rho e}{r\left(\alpha+\frac12\right)}\right]^\alpha e^{-\alpha } \left(\alpha + \frac{p_1}2 - 1\right)^{\alpha+ \frac{p_1}2 -\frac12 } \qquad \text{(stirling 's formula)}\nonumber\\ \label{eq3.19 } & \le k \alpha^{\frac{p_1 - 1}2 } ( 2\rho)^\alpha r^{-\alpha - p_1}.\end{aligned}\ ] ] when , observe that for we have for large , independent of .hence by and , ^\alpha \int^{m_2r}_0 \left[\ln \frac{\rho^2+r^2}{2sm_1}\right ] s^\alpha e^{-s/2 } \ds\nonumber\\ & \le \frac{k}{r^2 } \left[\frac{\rho e}{r\left(\alpha+\frac12\right ) } \right]^\alpha \int^{m_2r}_0 [ k\ln r - \ln s ] s^\alpha e^{-s/2}\ ds\qquad \text{( large)}\nonumber\\ & \le \frac{k}{r^2 } \left[\frac{\rho e}{r\left(\alpha + \frac12\right ) } \right]^\alpha \left[k(\ln r)2^\alpha\gamma(\alpha+1 ) - \int^1_0 ( \ln s)s^\alpha e^{-s/2 } \ ds\right]\nonumber\\ & \le \frac{k}{r^2 } \left[\frac{\rho e}{r\left(\alpha + \frac12\right ) } \right]^\alpha \left[k(\ln r ) 2^\alpha\gamma(\alpha+1 ) - \int^1_0 ( \ln s ) \ds\right]\nonumber\\ & \le \frac{k}{r^2 } \left[\frac{2\rho e}{r\left(\alpha + \frac12\right ) } \right]^\alpha\gamma(\alpha+1 ) \ln r\nonumber\\ & = \frac{k}{r^2 } \left[\frac{2\rho e}{r\left(\alpha + \frac12\right ) } \right]^\alpha \alpha\gamma(\alpha ) \lnr\nonumber\\ \label{eq3.20 } & \le k\alpha(2\rho)^\alpha r^{-\alpha-2 } \ln r,\qquad \text{by stirling 's formula.}\end{aligned}\ ] ] for part b ) , first assume . for have and for we have .hence by and applied to the and factors in , we get the that integrand in is bounded above by moreover , writing as we see from the asymptotics and that since is integrable on , we can apply the dominated convergence theorem to get combining this with and and using that , we get which is the claimed value in part b ) . finally , assume . consider the integral which is just in with the factors involving and replaced by and , respectively .these are more or less the asymptotics from and . then for , for large ,hence by , for such the integrand of is bounded above by r^{-\alpha-2 } s^{\alpha } e^{-s}\\ \le~&c_\alpha \frac1{\ln r } [ k\ln r - \ln s ] s^{\alpha } e^{-s}\\ \le~&c_\alpha \frac1{\ln r } [ k\ln r + ( -\ln s ) \vee 0 ] s^{\alpha } e^{-s}\\ \le~&c_\alpha [ k + ( -\ln s)\vee 0 ] s^{\alpha } e^{-s},\end{aligned}\ ] ] which is integrable on . moreover ,the limit of the integrand of is 2 \rho^\alpha s^{\alpha } e^{-s}\\ & \quad = 4\rho^\alpha s^{\alpha}e^{-s}.\end{aligned}\ ] ] hence by dominated convergence again , by multiply by , let then let to end up with by and , we get as desired .now we can prove theorem [ thm1.1 ] .write the sum in as . if we can show that then by lemma [ lem3.1 ] b , the conclusion of theorem [ thm1.1 ] will hold . to this end , write .\ ] ] it suffices to show as .there is no danger in dividing by because as in the proof of corollary [ cor1.5 ] , the factor in is positive . 
by lemma [ lem3.1 ]a ) and , for some constants and independent of , for then by lemma [ lem3.1 ] b , by the integral test and , for any , the series converges uniformly on .thus , since for , and follows as desired . the unbounded , nonsmooth nature of leads to technicalities not encountered in the bounded case considered by hsu ( 1986 ) . we now state the following result used to prove theorem [ thm1.3 ] . before giving its proof , we show how it yields theorem [ thm1.3 ] .we follow hsu s idea of finding the laplace transform in of the density . here and in what follows we will write [ thm4.1 ] a ) let , with .then for , b ) for and , .c ) let and .if is nonnegative with compact support in \rho\rho\rho\rho$ small.}\end{cases}\end{aligned}\ ] ] thus for and large , yields \\ & \le k [ m^{-\frac{n}2-\alpha_1 } + m^{-\alpha_1-\frac{n}2}]\\ & \le km^{-\alpha_1-\frac{n}2}\end{aligned}\ ] ] and if is small , \\ & \le k[{\varepsilon}^{-\frac{n}2 + \alpha_1 } + { \varepsilon}^{\alpha_1-\frac{n}2}]\\ & = k{\varepsilon}^{-\frac{n}2 + \alpha_1}.\end{aligned}\ ] ] then if we can differentiate under the integral ( by lemma [ lem4.2 ] ) , as desired . the next order of business is to study in a small neighborhood of . to this end , introduce the function where is the usual gaussian kernel .the relevant properties of are stated in the next lemma . after changing variables , is the modified bessel function .it is known that ( can be found in abramowitz and stegun ( 1972 ) pp .375378 , formulas 9.6.8 , 9.6.9 , 9.7.4 , 9.7.2 , respectively .formula is from watson ( 1922 ) page 79 , formula ( 4 ) in section 3.71 ) .by is bounded on a neighborhood of . since , by lemma [ lem4.8 ]a ) for small , as .thus we just need to show \sigma(dz ) = 2u(x).\ ] ] it is well - known that for the exit time of brownian motion from , .\ ] ] then ,\ ] ] and as a consequence , for .\ ] ] the exchange of and is justified as follows .bound difference quotients via the mean value theorem .then the exchange is justified provided this follows from lemma [ lem4.8 ] , part c. furthermore , since is bounded near , by lemma [ lem4.8 ] c \sigma(dz)\right| \le k\sigma(\partial b_\delta(x ) ) \to 0 \text { as } \delta \to 0.\ ] ] thus to get we need only show by lemma [ lem4.8 ] b , on , as . since is continuous and bounded near , as desired .10 abramowitz , m. and stegun , i.a ._ handbook of mathematical functions _ , dover , new york .allouba , h. ( 2002 ) .brownian - time process : the pde connection ii and the corresponding feynman - kac formula , trans .soc . * 354 * 46274637 .allouba , h. and zheng , w. ( 2001 ) .brownian - time processes : the pde connection and the half - derivative generator , ann .* 29 * 17801795 .bauelos , r. and smits , r.g .( 1997 ) . brownian motion in cones , probab . theory related fields * 108 * 299319 .burdzy , k. ( 1993 ) .some path properties of iterated brownian motion , in _ seminar on stochastic processes _ ( e. cinlar , k.l . chung and m.j .sharpe , eds . ) 6787 birkhuser , boston .burdzy , k. and khoshnevisan , d. ( 1998 ) . brownian motion in a brownian crack , ann .. probab .* 8 * 708748 .burkholder , d.l .exit times of brownian motion , harmonic majorization and hardy spaces , adv .* 26 * 182205 .chavel , i. ( 1984 ) ._ eigenvalues in riemann geometry _ , academic , new york .cski , e. , csrg , m. , fldes , a. and rvsz , p. 
( 1996 ) .the local time of iterated brownian motion , journal of theoretical probability , * 9 * 717743 .deblassie , r.d .exit times from cones in of brownian motion , probab . theory related fields * 74 * 129 .deblassie , r.d . and smits , r. ( 2004 ) .brownian motion in twisted domains , to appear , transactions of the american mathematical society .deblassie , r.d .iterated brownian motion in an open set , to appear ann .probab .gilbarg , d. and trudinger , n. ( 1983 ) ._ elliptic partial differential equations of second order _ , 2nd ed . ,springer , berlin .gradshteyn , i.s . and ryzhik , i.m ._ table of integrals , series and products _ , academic , new york .pinsky , r.g ._ positive harmonic functions and diffusion _ , cambridge university press , cambridge .protter , m. and weinberger , h. ( 1984 ) ._ maximum principles in differential equations _ , springer , new york .watson , g.n .( 1922 ) . _ a treatise on the theory of bessel functions _, 2nd ed . , cambridge university press , cambridge .
|
_ we study the distribution of the exit place of iterated brownian motion in a cone , obtaining information about the chance of the exit place having large magnitude . along the way , we determine the joint distribution of the exit time and exit place of brownian motion in a cone . this yields information on large values of the exit place ( harmonic measure ) for brownian motion . the harmonic measure for cones has been studied by many authors for many years . our results are sharper than any previously obtained . _
|
detecting clusters or communities in real - world graphs such as large social networks , web graphs , and biological networks is a problem of considerable practical interest that has received a great deal of attention .a `` network community '' ( also sometimes referred to as a module or cluster ) is typically thought of as a group of nodes with more and/or better interactions amongst its members than between its members and the remainder of the network . to extract such sets of nodes one typically chooses an objective function that captures the above intuition of a community as a set of nodes with better internal connectivity than external connectivity .then , since the objective is typically np - hard to optimize exactly , one employs heuristics or approximation algorithms to find sets of nodes that approximately optimize the objective function and that can be understood or interpreted as `` real '' communities .alternatively , one might define communities operationally to be the output of a community detection procedure , hoping they bear some relationship to the intuition as to what it means for a set of nodes to be a good community .once extracted , such clusters of nodes are often interpreted as organizational units in social networks , functional units in biochemical networks , ecological niches in food web networks , or scientific disciplines in citation and collaboration networks . in applications , it is important to note that heuristic approaches to and approximation algorithms for community detection often find clusters that are systematically `` biased , '' in the sense that they return sets of nodes with properties that might be substantially different than the set of nodes that achieves the global optimum of the chosen objective .for example , many spectral - based methods tend to find compact clusters at the expense that they are not so well separated from the rest of the network ; while other methods tend to find better - separated clusters that may internally be `` less nice . ''moreover , certain methods tend to perform particularly well or particularly poorly on certain kinds of graphs , _e.g. _ , low - dimensional manifolds or expanders .thus , drawing on this experience , it is of interest to compare these algorithms on large real - world networks that have many complex structural features such as sparsity , heavy - tailed degree distributions , small diameters , etc .moreover , depending on the particular application and the properties of the network being analyzed , one might prefer to identify specific types of clusters .understanding structural properties of clusters identified by various algorithmic methods and various objective functions can guide in selecting the most appropriate graph clustering method in the context of a given network and target application . in this paper, we explore a range of different community detection methods in order to elucidate these issues and to understand better the performance and biases of various network community detection algorithms on different kinds of networks . 
to do so , we consider a set of more than 40 networks ; 12 common objective functions that are used to formalize the concept of community quality ; and 8 different classes of approximation algorithms to find network communities .one should note that we are not primarily interested in finding the `` best '' community detection method or the most `` realistic '' formalization of a network community .instead , we aim to understand the structural properties of clusters identified by various methods , and then depending on the particular application one could choose the most suitable clustering method .we describe several classes of empirical evaluations of methods for network community detection to demonstrate the artifactual properties and systematic biases of various community detection objective functions and approximation algorithms .we also discuss several meta - issues related to community detection algorithms in very large graphs , including whether or not existing algorithms are sufficiently powerful to recover interesting communities and whether or not meaningful communities exist at all . also in contrast to previous attempts to evaluate community detection algorithms and/or objective functions , we consider a size - resolved version of the typical optimization problem .that is , rather than simply fixing an objective and asking for an approximation to the best cluster of any size or some fixed partitioning , we ask for an approximation to the best cluster for every possible size .this provides a _ much _ finer lens with which to examine community detection algorithms , since objective functions and approximation algorithms often have non - obvious size - dependent behavior .the rest of the paper is organized as follows .section [ sec : related ] gives the background and surveys the rich related work in the area of network community detection .then , in section [ sec : pieces ] , we compare structural properties of clusters extracted by two clustering methods based on two completely different computational paradigms a spectral - based graph partitioning method local spectral and a flow - based partitioning algorithm metis+mqi ; and in section [ sec : otheralgs ] , we extend the analyses by considering related heuristic - based clustering algorithms that in practice perform very well .section [ sec : scores ] then focuses on 11 different objective functions that attempt to capture the notion of a community as a set of nodes with better intra- than inter - connectivity . to understand the performance of various community detection algorithms at different size scales we compute theoretical lower bounds on the conductance community - quality score in section [ sec : bounds ] .we conclude in section [ sec : conclusion ] with some general observations .here we survey related work and summarize our previous work , with an emphasis on technical issues that motivate this paper .a great deal of work has been devoted to finding communities in large networks , and much of this has been devoted to formalizing the intuition that a community is a set of nodes that has more and/or better links between its members than with the remainder of the network . 
very relevant to our workis that of kannan , vempala , and vetta , who analyze spectral algorithms and describe a community concept in terms of a bicriterion depending on the conductance of the communities and the relative weight of between - community edges .flake , tarjan , and tsioutsiouliklis introduce a similar bicriterion that is based on network flow ideas , and flake _ et al . _ defined a community as a set of nodes that has more edges pointing inside the community than to the rest of the network .similar edge - counting ideas were used by radicchi _et al . _ to define and apply the notions of a strong community and a weak community . within the `` complex networks '' community , girvan and newman proposed an algorithm that used `` betweenness centrality '' to find community boundaries . following this ,newman and girvan introduced _modularity _ as an _ a posteriori _ measure of the overall quality of a graph partition .modularity measures internal ( and not external ) connectivity , but it does so with reference to a randomized null model .modularity has been very influential in recent community detection literature , and one can use spectral techniques to approximate it .however , guimer , sales - pardo , and amaral and fortunato and barthlemy showed that random graphs have high - modularity subsets and that there exists a size scale below which modularity can not identify communities .finally , we should note several other lines of related work .first , the local spectral algorithm of andersen , chung , and lang was used by andersen and lang to find ( in a scalable manner ) medium - sized communities in very large social graphs .second , other recent work has also focused on developing local and/or near - linear time heuristics for community detection include .third , there also exists work which views communities from a somewhat different perspective . for recent reviews of the large body of work in this area ,see .[ fig : intro ] we model each network by an undirected graph , in which nodes represent entities and edges represent interactions between pairs of entities .we perform the evaluation of community detection algorithms in a large corpus of over social and information networks .the networks we studied range in size from tens of nodes and scores of edges up to millions of nodes and tens of millions of edges ; and they were drawn from a wide range of domains , including large social networks , citation networks , collaboration networks , web graphs , communication networks , citation networks , internet networks , affiliation networks , and product co - purchasing networks . in the present work we focus on a subset of these . in particular , we consider a bipartite authors - to - papersnetwork of dblp ( authtopap - dblp ) , enron email network ( email - enron ) , a co - authorship network of arxiv astro physics papers ( coauth - astro - ph ) , and a social network of epinions.com ( epinions ) .see for further information and properties of these networks . even though we consider various notions of community score we will primarily work with _ conductance _ , which arguably is the simplest notion of cluster quality , as it can be simply thought of as the ratio between the number of edges inside the cluster and the number of edge leaving the cluster .more formally , _ conductance_ of a set of nodes is , where denotes the size of the edge boundary , , and , where is the degree of node . thus , in particular , more community - like sets of nodes have _ lower _ conductance . 
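following the definition above, here is a minimal python sketch of the conductance of a node set, using the standard convention of boundary edges divided by the smaller of the two volumes; it is an illustration only, not the evaluation code used for the ncp plots, and the planted-partition toy graph is just a stand-in for the real networks.

```python
import networkx as nx

def conductance(G, S):
    """phi(S) = c_S / min(Vol(S), Vol(V \\ S)), where c_S counts edges with
    exactly one endpoint in S and Vol(S) is the sum of degrees over S."""
    S = set(S)
    c_S = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    denom = min(vol_S, 2 * G.number_of_edges() - vol_S)
    return c_S / denom if denom > 0 else float("inf")

# toy example with a planted community: the planted group scores well
G = nx.planted_partition_graph(2, 50, p_in=0.3, p_out=0.01, seed=0)
print(conductance(G, range(50)))          # low value, community-like
print(conductance(G, [0, 60, 95]))        # arbitrary set, much higher
```

recent versions of networkx also expose an equivalent helper, nx.conductance, which could be used in place of the hand-written function.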
for example in figure [ fig : intro](left ) , sets and have conductance , so the set of nodes is more community - like than the set .conductance captures a notion of `` surface area - to - volume , '' and thus it is widely - used to capture quantitatively the gestalt notion of a good network community as a set of nodes that has better internal- than external - connectivity .we then generalize the notion of the quality of a single cluster into a size resolved version . using a particular measure of network community quality , _e.g. _ , conductance or one of the other measures described in section [ sec : scores ] , we then define the _ network community profile ( ncp ) _ that characterizes the quality of network communities as a function of their size . for every between and half the number of nodes in the network .] , we define .that is , for every possible community size , measures the score of the most community - like set of nodes of that size , and the ncp measures as a function of .for example , in figure [ fig : intro](middle ) we use conductance as a measure of cluster quality and for , among all sets of -nodes , has best conductance , and thus . similarly , and+ denote the best conductance sets on and nodes , respectively . just as the magnitude of the conductance provides information about how community - like is a set of nodes , the shape of the ncp provides insight into how well expressedare network communities as a function of their size .moreover , the ncp also provides a lens to examine the quality of clusters of various sizes .thus in the majority of our experiments we will examine and compare different clustering algorithms and objective functions through various notions of the ncp plot and other kinds of structural metrics of clusters and how they depend / scale with the size of the cluster .moreover , the shape of the ncp is also interesting for a very different reason .it gives us a powerful way to quantify and summarize the large - scale community structure of networks .we found that the ncp behaves in a characteristic manner for a range of large social and information networks : when plotted on log - log scales , the ncp tends to have a universal `` v '' shape ( figure [ fig : intro](right ) ) . up to a size scale of about nodes , the ncp decreases , which means that the best - possible clusters are getting progressively better with the increasing size .the ncp then reaches the minimum at around and then gradually increases again , which means that at larger size scales network communities become less and less community - like .( this should be contrasted with behavior for mesh - like networks , road networks , common network generation models , and small commonly - studied networks , for which the ncp is either flat or downward - sloping . )the shape of the ncp can be explained by an onion - like `` nested core - periphery '' structure , where the network consists of a large core ( slightly denser and more expander - like than the full graph , but which itself has a core - periphery structure ) and a large number of small very well - connected communities barely connected to the core . 
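the exact ncp is intractable to compute, but its spirit can be illustrated with a single spectral sweep: order the nodes by the fiedler vector of the normalized laplacian and record, for each prefix size, the conductance of the corresponding sweep cut. the sketch below is only a cheap stand-in for the local spectral and flow-based machinery actually used to produce the plots, and it assumes a connected graph.

```python
import networkx as nx

def sweep_ncp(G):
    """Crude NCP proxy: conductance of every prefix of the Fiedler-vector
    ordering, indexed by prefix size k."""
    f = nx.fiedler_vector(G, normalized=True)       # spectral embedding on a line
    order = [n for _, n in sorted(zip(f, G.nodes()))]
    total_vol = 2 * G.number_of_edges()
    S, vol, cut, ncp = set(), 0, 0, {}
    for u in order[:-1]:                            # never sweep past all nodes
        vol += G.degree(u)
        for v in G[u]:
            cut += -1 if v in S else 1              # boundary edges gained or absorbed
        S.add(u)
        ncp[len(S)] = cut / min(vol, total_vol - vol)
    return ncp                                      # plot k vs ncp[k] on log-log axes

ncp = sweep_ncp(nx.karate_club_graph())
k_best = min(ncp, key=ncp.get)
print(k_best, round(ncp[k_best], 3))                # size and score of the best sweep cut
```

for the large networks studied here, one would replace this single global sweep with the locally biased and flow-based procedures compared below, which explore many seed nodes and size scales.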
in this context , it is important to understand the characteristics of various community detection algorithms in order to make sure that the shape of ncp is a property of the network rather than an artifact of the approximation algorithm or the function that formalizes the notion of a network community .we compare different clustering algorithms and heuristics .we focus our analyses on two aspects .first , we are interested in the quality of the clusters that various methods are able to find .basically , we would like to understand how well algorithms perform in terms of optimizing the notion of community quality ( conductance in this case ) .second , we are interested in quantifying the structural properties of the clusters identified by the algorithms .as we will see , there are fundamental tradeoffs in network community detection for a given objective function , approximation algorithms are often biased in a sense that they consistently find clusters with particular internal structure .we break the experiments into two parts .first , we compare two graph partitioning algorithms that are theoretically well understood and are based on two very different approaches : a spectral - based local spectral partitioning algorithm , and the flow - based metis+mqi .then we consider several heuristic approaches to network community detection that work well in practice . in this sectionwe compare the local spectral partitioning algorithm with the flow - based metis+mqi algorithm .the latter is a surprisingly effective heuristic method for finding low - conductance cuts , which consists of first using the fast graph bi - partitioning program metis to split the graph into two equal - sized pieces , and then running mqi , an exact flow - based technique for finding the lowest conductance cut whose small side in contained in one of the two half - graphs chosen by metis .each of those two methods ( local spectral and metis+mqi ) was run repeatedly with randomization on each of our graphs , to produce a large collection of candidate clusters of various sizes , plus a lower - envelope curve .the lower - envelope curves for the two algorithms were the basis for the plotted ncp s in the earlier paper . in the current paperthe lower - envelope curves for local spectral and metis+mqi are plotted respectively as a red line and a green line in figure [ fig : intro](right ) , and as pairs of black lines in figure [ compactness - vs - cuts - fig](top ) and figures [ other - algos - fig ] and [ other - algos - fig - lbfig ] .note that the metis+mqi curves are generally lower , indicating that this method is generally better than local spectral at the nominal task of finding cuts with low conductance .however , as we will demonstrate using the scatter plots of figure [ compactness - vs - cuts - fig ] , the clusters found by the local spectral method often have other virtues that compensate for their worse conductance scores . as an extreme example , many of the raw metis+mqi clusters are internally disconnected , which seems like a very bad property for an alleged community . by contrast , the local spectral method always returns connected clusters . 
acknowledging that this is a big advantage for local spectral, we then modified the collections of raw metis+mqi clusters by splitting every internally disconnected cluster into its various connected components .then , in all scatter plots of figure [ compactness - vs - cuts - fig ] , blue dots represent raw local spectral clusters , which are internally connected , while red dots represent broken - up metis+mqi clusters , which are also internally connected .+ conductance of connected clusters found by local spectral ( blue ) and metis+mqi ( red ) + + cluster compactness : average shortest path length + + cluster compactness : external vs. internal conductance [ compactness - vs - cuts - fig ] let us now consider the top row of scatter plots of figure [ compactness - vs - cuts - fig ] which compares the conductance scores ( as a function of cluster size ) of the collections of clusters produced by the two algorithms .the cloud of blue points ( local spectral clusters ) lies generally above the cloud of red points ( metis+mqi clusters ) , again illustrating that local spectral tends to be a weaker method for minimizing conductance score . in more detail, we find that local spectral and metis+mqi tend to identify similar pieces at very small scales , but at slightly larger scales a gap opens up between the red cloud and the blue cloud . at those intermediate size scales ,metis+mqi is finding lower conductance cuts than local spectral .however , the local spectral algorithm returns pieces that are internally more _compact_. this is shown in the middle row of figure [ compactness - vs - cuts - fig ] where for each of the ( connected ) pieces for which we plotted a conductance in the top row , we are now plotting the average shortest path length between random node pairs in that piece . in these plots , we see that in the same size range where metis+mqi is generating clearly lower conductance connected sets , local spectral is generating pieces with clearly shorter internal paths , _i.e. _ , smaller diameter sets .in other words , the local spectral pieces are more `` compact . ''this effect is especially pronounced in the dblp affiliation network , while it also shows up in the enron email network and the astrophysics collaboration network .moreover , we made similar observations also for many other datasets ( plots not shown ) .finally , in the bottom row of figure [ compactness - vs - cuts - fig ] we introduce the topic of internal vs. external cuts , which is something that none of the existing algorithms is _ explicitly _ optimizing . these are again scatter plots showing the same set of local spectral and metis+mqi pieces as before , but now the -axis is external conductance divided by internal conductance .external conductance is the quantity that we usually plot , namely the conductance of the cut which separates the cluster from the graph .internal conductance is the score of a low conductance cut _ inside _ the cluster .that is , we take the induced subgraph on the cluster s nodes and then find best conductance cut inside the cluster .we then compare the ratios of the conductance of the bounding cut and the internal conductance . intuitively , good and compact communities should have small ratios , ideally below 1.0 , which would mean that those clusters are well separated from the rest of the network and that they are also internally well - connected and hard to cut again . 
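the external-to-internal ratio just described can be estimated directly: the numerator is the conductance of the bounding cut, and the denominator is approximated by the best sweep cut inside the induced subgraph, since the exact internal minimum is again intractable. the sketch below uses networkx built-ins, falls back to the largest connected component when a piece is internally disconnected (mirroring the preprocessing above), and is an illustration rather than the code behind the figure.

```python
import networkx as nx

def internal_conductance(G, S):
    """Approximate internal conductance of S: best conductance over sweep
    cuts of the Fiedler ordering of the induced subgraph G[S]."""
    H = G.subgraph(S)
    H = H.subgraph(max(nx.connected_components(H), key=len)).copy()
    if H.number_of_nodes() < 3 or H.number_of_edges() == 0:
        return float("inf")
    f = nx.fiedler_vector(H, normalized=True)
    order = [n for _, n in sorted(zip(f, H.nodes()))]
    return min(nx.conductance(H, order[:k]) for k in range(1, len(order)))

def ext_int_ratio(G, S):
    """Conductance of the bounding cut divided by the (approximate) internal
    conductance; below 1 means well separated and internally hard to cut."""
    return nx.conductance(G, S) / internal_conductance(G, S)

G = nx.karate_club_graph()
piece = [n for n, d in G.nodes(data=True) if d["club"] == "Mr. Hi"]
print(round(ext_int_ratio(G, piece), 3))
```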
however , the three bottom - row plots of figure [ compactness - vs - cuts - fig ] show the ratios .points above the horizontal line are clusters which are easier to cut internally than they were to be cut from the rest of the network ; while points below the line are clusters that were relatively easy to cut from the network and are internally well - connected .notice that here the distinction between the two methods is less clear . on the one hand, local spectral finds clusters that have worse ( higher ) bounding cut conductance , while such clusters are also internally more compact ( have internal cuts of higher conductance ) . on the other hand, metis+mqi finds clusters that have better ( lower ) bounding cut conductance but are also internally easy to cut ( have internal cut of lower conductance ) .thus when one takes the ratio of the two quantities we observe qualitatively similar behaviors .however , notice that local spectral seem to return clusters with higher variance in the ratio of external - to - internal conductance . at small size scalesmetis+mqi tends to give clusters of slightly better ( lower ) ratio , while at larger clusters the advantage goes to local spectral .this has interesting consequence for the applications of graph partitioning since ( depending on the particular application domain and the sizes and properties of clusters one aims to extract ) either local spectral or metis+mqi may be the method of choice .also , notice that there are mostly no ratios well below , except for very small sizes .this is important , as it seems to hint that large clusters are relatively hard to cut from the network , but are then internally easy to split into multiple sub - clusters .this shows another aspect of our findings : small communities below nodes are internally compact and well separated from the remainder of the network , whereas larger clusters are so hard to separate that cutting them from the network is more expensive than cutting them internally .community - like sets of nodes that are better connected internally than externally do nt seem to exist in large real - world networks , except at very small size scales .last , in figure [ sprawl - plots ] , we further illustrate the differences between spectral and flow - based clusters by drawing some example subgraphs .the two subgraphs shown on the left of figure [ sprawl - plots ] were found by local spectral , while the two subgraphs shown on the right of figure [ sprawl - plots ] were found by metis+mqi .these two pairs of subgraphs have a qualitatively different appearance : metis+mqi pieces look longer and stringier than the local spectral pieces .all of these subgraphs contain roughly 500 nodes , which is about the size scale where the differences between the algorithms start to show up . in these cases ,local spectral has grown a cluster out a bit past its natural boundaries ( thus the spokes ) , while metis+mqi has strung together a couple of different sparsely connected clusters .( we remark that the tendency of local spectral to trade off cut quality in favor of piece compactness is nt just an empirical observation , it is a well understood consequence of the theoretical analysis of spectral partitioning methods . ) [ sprawl - plots ] next we consider various other , mostly heuristic , algorithms and compare their performance in extracting clusters of various sizes . 
as a point of referencewe use results obtained by the local spectral and metis+mqi algorithms .we have extensively experimented with several variants of the global spectral method , both the usual eigenvector - based embedding on a line , and an sdp - based embedding on a hypersphere , both with the usual hyperplane - sweep rounding method and a flow - based rounding method which includes mqi as the last step .in addition , special post - processing can be done to obtain either connected or disconnected sets .we also experimented with a practical version of the leighton - rao algorithm , similar to the implementation described in .these results are especially interesting because the leighton - rao algorithm , which is based on multi - commodity flow , provides a completely independent check on metis , and on spectral methods generally .the leighton - rao algorithm has two phases . in the first phase ,edge congestions are produced by routing a large number of commodities through the network .we adapted our program to optimize conductance ( rather than ordinary ratio cut score ) by letting the expected demand between a pair of nodes be proportional to the product of their degrees . in the second phase ,a rounding algorithm is used to convert edge congestions into actual cuts .our method was to sweep over node orderings produced by running prim s minimum spanning tree algorithm on the congestion graph , starting from a large number of different initial nodes , using a range of different scales to avoid quadratic run time .we used two variations of the method , one that produces connected sets , and another one that can also produce disconnected sets . in top row of figure [ other - algos - fig ] ,we show leighton - rao curves for three example graphs .local spectral and metis+mqi curves are drawn in black , while the leighton - rao curves for connected and possibly disconnected sets are drawn in green and magenta respectively . for small to medium scales ,the leighton - rao curves for connected sets resemble the local spectral curves , while the leighton - rao curves for possibly disconnected sets resemble metis+mqi curves .this further confirms the structure of clusters produced by local spectral and metis+mqi , as discussed in section [ sec : pieces ] .+ leighton - rao : connected clusters ( green ) , disconnected clusters ( magenta ) .+ + ncp plots obtained by graclus and newman s dendrogram algorithm .+ [ other - algos - fig ] at large scales , the leighton - rao curves shoot up and become much worse than local spectral or metis+mqi . that leighton - rao has troubles finding good big clusters is not surprising because expander graphs are known to be the worst case input for the leighton - rao approximation guarantee .large real networks contain an expander - like core which is necessarily encountered at large scales .we remark that leighton - rao does not work poorly at large scales on every kind of graph .( in fact , for large low - dimensional mesh - like graphs , leighton - rao is a very cheap and effective method for finding cuts at all scales , while our local spectral method becomes impractically slow at medium to large scales . )this means that based on the structure of the network and sizes of clusters one is interested in different graph partitioning methods should be used . while leighton - rao is an appropriate method for mesh - like graphs , it has troubles in the intermingled expander - like core of large networks . 
finally , in addition to the above approximation algorithms - based methods for finding low - conductance cuts , we also experimented with a number of more heuristic approaches that tend to work well in practice .in particular , we compare graclus and newman s modularity optimizing program ( we refer to it as dendrogram ) .graclus attempts to partition a graph into pieces bounded by low - conductance cuts using a kernel -means algorithm .we ran graclus repeatedly , asking for pieces .then we measured the size and conductance of all of the resulting pieces .newman s dendrogram algorithm constructs a recursive partitioning of a graph ( that is , a dendrogram ) from the bottom up by repeatedly deleting the surviving edge with the highest betweenness centrality . a flat partitioningcould then be obtained by cutting at the level which gives the highest modularity score , but instead of doing that , we measured the size of conductance of every piece defined by a subtree in the dendrogram .the bottom row of figure [ other - algos - fig ] presents these results .again our two standard curves are drawn in black .the lower - envelopes of the graclus or dendrogram points are roughly similar to those produced by local spectral , which means both methods tend to produce rather compact clusters at all size scales .generally , graclus tends to produce a variety of clusters of better conductance than newman s algorithm .moreover , notice that in case of epinions social network and the astrophysics coauthorship network graclus tends to prefer larger clusters than the newman s algorithm .also , graclus seems to find clusters of ten or more nodes , while newmans s algorithm also extracts very small pieces . in general , clusters produced by either graclus or dendrogram are qualitatively similar to those produced by local spectral .this means that even though local spectral is computationally cheaper and easily scales to very large networks , the quality of identified clusters is comparable to that returned by techniques such as graclus and dendrogram that are significantly more expensive on large networks such as those we considered .in the previous sections , we used conductance since it corresponds most closely to the intuition that a community is a set of nodes that is more and/or better connected internally than externally . in this section , we look at other objective functions that capture this intuition and/or are popular in the community detection literature .in general there are two criteria of interest when thinking about how good of a cluster is a set of nodes .the first is the number of edges between the members of the cluster , and the second is the number of edges between the members of the cluster and the remainder of the network .we group objective functions into two groups .the first group , that we refer to as multi - criterion scores , combines both criteria ( number of edges inside and the number of edges crossing ) into a single objective function ; while the second group of objective functions employs only a single of the two criteria ( _ e.g. 
_ , volume of the cluster or the number of edges cut ) . let $g(v,e)$ be an undirected graph with $n = |v|$ nodes and $m = |e|$ edges . let $s$ be the set of nodes in the cluster , where $n_s = |s|$ is the number of nodes in $s$ ; $m_s = |\{(u,v) \in e : u \in s , v \in s\}|$ is the number of edges in $s$ ; $c_s = |\{(u,v) \in e : u \in s , v \notin s\}|$ is the number of edges on the boundary of $s$ ; and $d(u)$ is the degree of node $u$ . we consider the following metrics that capture the notion of the quality of the cluster . a lower value of the score ( when $n_s$ is kept constant ) signifies a more community - like set of nodes . * * conductance : * $f(s) = c_s / ( 2 m_s + c_s )$ measures the fraction of total edge volume that points outside the cluster . * * expansion : * $f(s) = c_s / n_s$ measures the number of edges per node that point outside the cluster . * * internal density : * $f(s) = 1 - m_s / ( n_s ( n_s - 1 ) / 2 )$ is the internal edge density of the cluster . * * cut ratio : * $f(s) = c_s / ( n_s ( n - n_s ) )$ is the fraction of all possible edges leaving the cluster . * * normalized cut : * $f(s) = c_s / ( 2 m_s + c_s ) + c_s / ( 2 ( m - m_s ) + c_s )$ . * * maximum - odf ( out degree fraction ) : * $f(s) = \max_{u \in s} |\{(u,v) \in e : v \notin s\}| / d(u)$ is the maximum fraction of edges of a node pointing outside the cluster . * * average - odf : * $f(s) = \frac{1}{n_s} \sum_{u \in s} |\{(u,v) \in e : v \notin s\}| / d(u)$ is the average fraction of a node 's edges pointing outside the cluster . * * flake - odf : * $f(s) = |\{ u \in s : |\{(u,v) \in e : v \in s\}| < d(u)/2 \}| / n_s$ is the fraction of nodes in $s$ that have fewer edges pointing inside than to the outside of the cluster . we then generalize the ncp plot : for every cluster size $k$ we find a set of nodes $s$ ( $|s| = k$ ) that optimizes the chosen community score . we then plot the community score as a function of $k$ . it is not clear how to design an optimization procedure that would , given a cluster size $k$ and the community score function , find the set $s$ that minimizes the function , _ i.e. _ , is the best community . operationally , we perform the optimization the following way : we use the local spectral method , which starts from a seed node and then explores the cluster structure around the seed node ; running local spectral from each node , we obtain millions of sets of nodes of various sizes , many of which are overlapping ; and then for each such set of nodes , we compute the community score and find the best cluster of each size . figure [ fig : cmtymeasures1 ] considers the above eight community scores . notice that even though the scores span different ranges , they all exhibit qualitatively similar behavior : clusters up to a size of ca . 100 nodes have progressively better scores , while clusters above ca . 100 nodes become less community - like as their size increases . this may seem surprising at first sight , but it should be somewhat expected , as all these objective functions try to capture the same basic intuition : they reward sets of nodes that have many edges internally and few pointing out of the cluster . there are , however , subtle differences between the various scores . for example , even though flake - odf follows the same general trend as conductance , it reaches the minimum about an order of magnitude later than conductance , normalized cut , the cut ratio score , or the average - odf . on the other hand , maximum - odf exhibits the opposite behavior , as it clearly prefers smaller clusters and is basically flat for clusters larger than about several hundred nodes . this is interesting , as it shows the following trend : if one scores the community by the `` worst - case '' node using the out degree fraction ( _ i.e. _ , maximum - odf ) , then only small clusters have no outliers and thus give good scores .
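before continuing with the comparison, note that the scores just defined are straightforward to evaluate for a given set; the sketch below implements them as written above (lower is better for all of them) and assumes a set with more than one node, fewer than all nodes, and no isolated members. it is illustrative only, not the evaluation code behind figure [ fig : cmtymeasures1 ].

```python
import networkx as nx

def community_scores(G, S):
    """The eight multi-criterion community scores for node set S."""
    S = set(S)
    n, m = G.number_of_nodes(), G.number_of_edges()
    n_S = len(S)
    m_S = sum(1 for u, v in G.edges(S) if u in S and v in S)       # internal edges
    c_S = sum(1 for u, v in G.edges(S) if (u in S) != (v in S))    # boundary edges
    odf = [sum(1 for v in G[u] if v not in S) / G.degree(u) for u in S]
    return {
        "conductance":      c_S / (2 * m_S + c_S),
        "expansion":        c_S / n_S,
        "internal density": 1 - m_S / (n_S * (n_S - 1) / 2),
        "cut ratio":        c_S / (n_S * (n - n_S)),
        "normalized cut":   c_S / (2 * m_S + c_S) + c_S / (2 * (m - m_S) + c_S),
        "maximum-ODF":      max(odf),
        "average-ODF":      sum(odf) / n_S,
        "flake-ODF":        sum(1 for x in odf if x > 0.5) / n_S,
    }

G = nx.karate_club_graph()
S = [n for n, d in G.nodes(data=True) if d["club"] == "Officer"]
for name, value in community_scores(G, S).items():
    print(f"{name:17s}{value:8.3f}")
```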
when one considers the average fraction of node s edges pointing outside the cluster ( average - odf ) the objective function closely follows the trend of conductance .on the other hand , if one considers the fraction of nodes in the cluster with more of their edges pointing inside than outside the cluster ( flake - odf ) , then large clusters are preferred .next , focusing on the cut ratio score we notice that it is not very smooth , in the sense that even for large clusters its values seem to fluctuate quite a lot .this indicates that clusters of similar sizes can have very different numbers of edges pointing to the rest of the network . in terms of their internal density ,the variations are very small the internal density reaches the maximum for clusters of sizes around 10 nodes and then quickly raises to 1 , which means larger clusters get progressively sparser . for large clustersthis is not particularly surprising as the normalization factor increases quadratically with the cluster size .this can be contrasted with the expansion score that measures the number of edges pointing outside the cluster but normalizes by the number of nodes ( not the number of all possible edges ) .these experiments suggest that internal density and maximum - odf are not particularly good measures of community score and the cut ratio score may not be preferred due to high variance .flake - odf seems to prefer larger clusters , while conductance , expansion , normalized cut , and average - odf all exhibit qualitatively similar behaviors and give best scores to similar clusters .in addition , we performed an experiment where we extracted clusters based on their conductance score but then also computed the values of other community scores ( for these same clusters ) .this way we did not optimize each community score separately , but rather we optimized conductance and then computed values of other objective functions on these best - conductance pieces .the shape of the plots remained basically unchanged , which suggests that same sets of nodes achieve relatively similar scores regardless of which particular notion of community score is used ( conductance , expansion , normalized cut , or average - odf ) .this shows that these four community scores are highly correlated and in practice prefer practically the same clusters .+ + + [ fig : cmtymeasures1 ] next we also consider community scores that consider a single criteria .one such example is modularity , which is one of the most widely used methods to evaluate the quality of a division of a network into modules or communities . for a given partition of a network into clusters, modularity measures the number of within - community edges , relative to a null model of a random graph with the same degree distribution .here we consider the following four notions of a quality of the community that are based on using one or the other of the two criteria of the previous subsection : * * modularity : * , where is the expected number of edges between the nodes in set in a random graph with the same node degree sequence . ** modularity ratio : * is alternative definition of the modularity , where we take the ratio of the number of edges between the nodes of and the expected number of such edges under the null - model . * * volume : * is sum of degrees of nodes in . 
** edges cut : * is number of edges needed to be removed to disconnect nodes in from the rest of the network .figure [ fig : cmtymeasures2 ] shows the analog of the ncp plot where now instead of conductance we use these four measures .a general observation is that modularity tends to increase roughly monotonically towards the bisection of the network .this should not be surprising since modularity measures the `` volume '' of communities , with ( empirically , for large real - world networks ) a small additive correction , and the volume clearly increases with community size . on the other hand , the modularity ratio tends to decrease towards the bisection of the network .this too should not be surprising , since it involves dividing the volume by a relatively small number .results in figure [ fig : cmtymeasures2 ] demonstrate that , with respect to the modularity , the `` best '' community in any of these networks has about half of all nodes ; while , with respect to the modularity ratio , the `` best '' community in any of these networks has two or three nodes . leaving aside debates about community - quality objective functions ,note that , whereas the conductance and related measures are _ discriminative _ , in that they prefer different kinds of clusters , depending on the type of network being considered , modularity tends to follow the same general pattern for all of these classes of networks .that is , even aside from community - related interpretations , conductance ( as well as several of the other bi - criterion objectives considered in section [ sxn : multi - crit - scores ] ) has qualitatively different types of behaviors for very different types of graphs ( _ e.g. _ , low - dimensional graphs , expanders , large real - world social and information networks ) , whereas modularity and other single - criterion objectives behave in qualitatively similar ways for all these diverse classes of graphs .+ [ fig : cmtymeasures2 ]so far we have examined various heuristics and approximation algorithms for community detection and graph partitioning .common to these approaches is that they all only approximately find good cuts , _i.e. _ , they only approximately optimize the value of the objective function .thus the clusters they identify provide only an _ upper bound _ on the true minimum best clusters . to get a better idea ofhow good those upper bounds are , we compute theoretical _ lower bounds_. here we discuss the spectral lower bound on the conductance of cuts of arbitrary balance , and a related sdp - based lower bound on the conductance of any cut that divides the graph into two pieces of equal volume .lower bounds are usually not computed for practical reasons , but instead are used to gain insights into partitioning algorithms and properties of graphs where algorithms perform well or poorly .also , note that the lower bounds are `` loose , '' in the sense that they do not guarantee that a cluster of a particular score exists ; rather they are just saying that there exists no cluster of better score .first , we introduce the notation : is a column vector of the graph s node degrees ; is a square matrix whose only nonzero entries are the graph s node degrees on the diagonal ; is the adjacency matrix of ; is then the non - normalized laplacian matrix of ; * 1 * is vector of 1 s ; and is the matrix dot - product operator . 
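for a single set these single-criterion scores can be written down directly; the only modeling ingredient is the configuration-model expectation of the internal edge count, which is approximately vol(s)^2 / 4m. normalization conventions for modularity vary across the literature, so the constants in the sketch below should be read as one common choice rather than as the exact definition used for the figure.

```python
import networkx as nx

def single_criterion_scores(G, S):
    """Modularity-style scores of a single set S under the configuration
    model null: E[m_S] ~ Vol(S)^2 / (4m)."""
    S = set(S)
    m = G.number_of_edges()
    m_S = G.subgraph(S).number_of_edges()            # edges inside S
    vol_S = sum(d for _, d in G.degree(S))           # sum of degrees over S
    expected = vol_S ** 2 / (4 * m)                  # null-model expectation of m_S
    return {
        "modularity":       (m_S - expected) / m,    # difference to the null, per edge
        "modularity ratio": m_S / expected,          # ratio to the null
        "volume":           vol_S,
        "edges cut":        nx.cut_size(G, S),
    }

G = nx.karate_club_graph()
half = list(G.nodes())[: G.number_of_nodes() // 2]
print(single_criterion_scores(G, half))
```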
now , consider the following optimization problem ( which is well known to be equivalent to an eigenproblem ) : let be a vector achieving the minimum value .then is the spectral lower bound on the conductance of any cut in the graph , regardless of balance , while defines a spectral embedding of the graph on a line , to which rounding algorithms can be applied to obtain actual cuts that can serve as upper bounds at various sizes .next , we discuss an sdp - based lower bound on cuts which partition the graph into two sets of exactly equal volume .consider : and let be a matrix achieving the minimum value .then is a lower bound on the weight of any cut with perfect volume balance , and is a lower bound on the conductance of any cut with perfect volume balance .we briefly mention that since , we can view as a gram matrix that can be factored as .then the rows of are the coordinates of an embedding of the graph on a hypersphere . again , rounding algorithms can be applied to the embedding to obtain actual cuts that can serve as upper bounds .the spectral and sdp embeddings defined here were the basis for the extensive experiments with global spectral partitioning methods that were alluded to in section [ sec : algs ] . in this section , it is the lower bounds that concern us . figure [ other - algos - fig - lbfig ] shows the spectral and sdp lower bounds for three example graphs .the spectral lower bound , which applies to cuts of any balance , is drawn as a horizontal red line which appears near the bottom of each plot .the sdp lower bound , which only applies to cuts separating a specific volume , namely , appears as an red triangle near the right side of the each plot .( note that plotting this point required us to use volume rather than number of nodes for the x - axis of these plots . ) clearly , for these graphs , the lower bound at , is higher than the spectral lower bound which applies at smaller scales .more importantly , the lower bound at , is higher than our _ upper _ bounds at many smaller scales .this demonstrates two important points : ( 1 ) it shows that best conductance clusters are orders of magnitude better than best clusters consisting of half the edges ; and ( 2 ) it demonstrates that graph partitioning algorithms perform well at various size scales . for all graph partitioning algorithms ,the minimum of their ncp plot is close to the spectral lower bound , and the clusters at half the volume are again close to theoretically best possible clusters .this suggests that graph partitioning algorithms we considered here do a good job both at finding best possible clusters and at bisecting the network .[ other - algos - fig - lbfig ] take , for example , the first plot of figure [ other - algos - fig - lbfig ] , where in black we plot the conductance curves obtained by our ( local spectral and metis+mqi ) algorithms . with a red dashed line we plot the lower bound on the best possible cut in the network , and with red triangle we plot the lower bound for the cut that separates the graph in two equal volume parts .thus , the true conductance curve ( which is intractable to compute ) lies below black but above red line and red triangle . 
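a hedged sketch of how such a spectral lower bound , together with sweep - cut upper bounds obtained from the line embedding , can be computed is given below . the bound of the form ( second - smallest generalized eigenvalue divided by two ) is the usual cheeger - type constant and is assumed here rather than copied from the paper ; the example graph is again only a stand - in .

```python
import networkx as nx
import numpy as np
from scipy.linalg import eigh

G = nx.karate_club_graph()                  # stand-in example graph
A = nx.to_numpy_array(G)
D = np.diag(A.sum(axis=1))
L = D - A

# generalized eigenproblem L x = lambda D x; eigenvalues come back in ascending order
vals, vecs = eigh(L, D)
spectral_lower_bound = vals[1] / 2.0        # assumed cheeger-type form of the bound

def conductance(A, members):
    mask = np.zeros(A.shape[0], dtype=bool)
    mask[np.asarray(members)] = True
    cut = A[mask][:, ~mask].sum()           # edges leaving the cluster
    return cut / min(A[mask].sum(), A[~mask].sum())

# the second eigenvector embeds the nodes on a line; sweeping over this order
# yields actual cuts that serve as upper bounds at various sizes
order = np.argsort(vecs[:, 1])
best_sweep = min(conductance(A, order[:k]) for k in range(1, len(order)))
print(spectral_lower_bound, best_sweep)
```

comparing the printed lower bound with the best sweep - cut conductance is a small - scale version of the comparison between lower and upper bounds made in the plots discussed here .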
from practical perspectivethis demonstrates that the graph partitioning algorithms ( local spectral and metis+mqi in particular ) do a good job of extracting clusters at all size scales .the lower bounds tell us that the conductance curve which starts at upper left corner first has to go down and reach the minimum close to the horizontal dashed line ( spectral lower bound ) and then sharply rise and ends up above the red triangle ( sdp lower bound ) .this verifies several things : ( 1 ) graph partitioning algorithms perform well at all size scales , as the extracted clusters have scores close to the theoretical optimum ; ( 2 ) the qualitative shape of the ncp is not an artifact of graph partitioning algorithms or particular objective functions , but rather it is an intrinsic property of these large networks ; and ( 3 ) the lower bounds at half the size of the graph indicate that our inability to find large good - conductance communities is not a failings of our algorithms .instead such large good - conductance `` communities '' simply do not exist in these networks .finally , in table [ lower - bound - table ] we list for about 40 graphs the spectral and sdp lower bounds on overall conductance and on volume - bisecting conductance , and also the ratio between the two .it is interesting to see that for these graphs this ratio of lower bounds does a fairly good job of discriminating between declining - ncp - plot graphs , which have a small ratio , and v - shape - ncp - plot graphs , which have a large ratio .small networks ( like collegefootball , zacharykarate and monksnetwork ) have downward ncp plot and a small ratio of the sdp and spectral lower bounds . on the other hand large networks ( _ e.g. _ , epinions or answers-3 ) have downward and then upward ncp plot ( as in figure [ fig : intro](right ) ) have large ratio of the two lower bounds .this hints that in small networks `` large '' clusters ( i.e. , clusters of around half the network ) tend to have best conductances . on the contrary , in large networkssmall clusters have good conductances , while large clusters ( of the half the network size ) tend to have much worse conductances , and thus high ratios of lower bounds as shown in the left table of table [ lower - bound - table ] .[ lower - bound - table ]in this paper we examined in a systematic way a wide range of network community detection methods originating from theoretical computer science , scientific computing , and statistical physics .our empirical results demonstrate that determining the clustering structure of large networks is surprisingly intricate . in general, algorithms nicely optimize the community score function over a range of size scales , and the scores of obtained clusters are relatively close to theoretical lower bounds . however , there are classes of networks where certain algorithms perform sub - optimally .in addition , although many common community quality objectives tend to exhibit similar qualitative behavior , with very small clusters achieving the best scores , several community quality metrics such as the commonly - used modularity behave in qualitatively different ways .interestingly , intuitive notions of cluster quality tend to fail as one aggressively optimizes the community score . 
for instance , by aggressively optimizing conductance , one obtains disconnected or barely - connected clusters that do not correspond to intuitive communities .this suggests the rather interesting point ( that we described in section [ sec : pieces ] ) that _ approximate _ optimization of the community score introduces a systematic bias into the extracted clusters , relative to the combinatorial optimum .many times , as in case of local spectral , such bias is in fact preferred since the resulting clusters are more compact and thus correspond to more intuitive communities .this connects very nicely to regularization concepts in machine learning and data analysis , where separate penalty terms are introduced in order to trade - off the fit of the function to the data and its smoothness . in our case here , one is trading off the conductance of the bounding cut of the cluster and the internal cluster compactness .effects of regularization by approximate computation are pronounced due to the extreme sparsity of real networks . how to formalize a notion of regularization by approximate computation moregenerally is an intriguing question raised by our findings .r. andersen , f. chung , and k. lang .local graph partitioning using pagerank vectors . in _focs 06 : proceedings of the 47th annual ieee symposium on foundations of computer science _ , pages 475486 , 2006 .g. flake , s. lawrence , and c. giles .efficient identification of web communities . in _kdd 00 : proceedings of the 6th acm sigkdd international conference on knowledge discovery and data mining _ , pages 150160 , 2000 .k. lang and s. rao . a flow - based method for improving the expansion or conductance of graph cuts . in _ipco 04 : proceedings of the 10th international ipco conference on integer programming and combinatorial optimization _ , pages 325337 , 2004 .t. leighton and s. rao .an approximate max - flow min - cut theorem for uniform multicommodity flow problems with applications to approximation algorithms . in _focs 88 : proceedings of the 28th annual symposium on foundations of computer science _ , pages 422431 , 1988 .j. leskovec , k. lang , a. dasgupta , and m. mahoney .statistical properties of community structure in large social and information networks . in _www 08 : proceedings of the 17th international conference on world wide web _ , pages 695704 , 2008 .d. spielman and s .- h . teng .spectral partitioning works : planar graphs and finite element meshes . in _ focs 96 : proceedings of the 37th annual ieee symposium on foundations of computer science _ , pages 96107 , 1996 .
|
detecting clusters or communities in large real - world graphs such as large social or information networks is a problem of considerable interest . in practice , one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity , and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that `` look like '' good communities for the application of interest . in this paper , we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify . we evaluate several common objective functions that are used to formalize the notion of a network community , and we examine several different classes of approximation algorithms that aim to optimize such objective functions . in addition , rather than simply fixing an objective and asking for an approximation to the best cluster of any size , we consider a size - resolved version of the optimization problem . considering community quality as a function of its size provides a much finer lens with which to examine community detection algorithms , since objective functions and approximation algorithms often have non - obvious size - dependent behavior . * categories and subject descriptors : * h.2.8 database management : database applications data mining * general terms : * measurement ; experimentation . * keywords : * community structure ; graph partitioning ; conductance ; spectral methods ; flow - based methods .
|
the key objective of digital communications is to transmit information reliably from one point to another . with the introduction of iterative error correction codes ( such as turbo , low - density parity - check and repeat - accumulate codes ) ,error correction technology has become a vital means of achieving this aim in most current communication systems .a key performance measure of a coding scheme is its decoding threshold , which is the maximum noise level at which it can correct errors . in this paper , we design an efficient optimization technique to maximize the threshold of low - density parity - check ( ldpc ) codes . in a numerical technique , called density evolution ( de ) was formulated to find the threshold of the belief propagation ( bp ) decoding algorithm for a given ldpc ensemble .an ldpc ensemble is the set of all ldpc codes with a particular property set , usually the degree distribution of their graphical ( tanner graph ) representation .de determines expected iterative decoding performance of a particular code ensemble by tracking the probability density function of tanner graph edge messages through the iterative decoding process .this problem for the code designer is then to search for the ensemble with the best threshold from which a specific code may then be chosen .multi - edge type ldpc ( met - ldpc ) codes are a generalization of ldpc codes . unlike standard ldpc ensembles which contain a single statical equivalence class of tanner graph edges , in the multi - edge setting several edge classes can be defined and every node is characterized by the number of connections to edges of each class . the advantage of the met generalization is greater flexibility in code structure and improved decoding performances .the code optimization of ldpc and met - ldpc codes is a non - linear cost function maximization problem , where the de threshold is the cost function and the tanner graph structure and edge distribution gives the variables to be optimized . in the majority of previous research in code optimization found in the literature ,the optimization algorithm called differential evolution ( dif.e ) has been applied to finding good degree distributions for ldpc codes .this technique has been successfully applied to the design of good irregular ldpc codes for a range of channels .shokrollahi and sorn used an improved version of dif.e by proposing a new step called discrete recombination in order to increase the diversity of the new parameters in the search .richardson and urbanke suggested using hill - climbing method to optimize met - ldpc codes . in our work , we develop a new code optimization technique to optimize codes more efficiently .this technique can be thought of as minimizing the randomness in dif.e or limiting the search space in ordinary exhaustive search and hill - climbing .this technique is then successfully applied to design good irregular ldpc codes and met - ldpc codes . in previous research of code optimization ,the structure of the ldpc and met - ldpc tanner graph is determined via trail and error or exhaustive search , while only the edge distributions within a given structure are optimized . 
in this research, we propose a new nested method to optimize both the structure and edge distribution for ldpc and met - ldpc codes .this is particularly important for met - ldpc codes where , to date , it is not clear a priori which structures will be good .this paper is organized as follows .section ii briefly reviews the basic concepts of standard ldpc codes and met - ldpc codes . in sectioniii we review the code optimization problem for standard and met - ldpc codes and discus our proposed code optimization technique . in sectioniv we discuss the code optimization result obtained for several examples .section v concludes the paper .as the name suggests , an ldpc code is a linear block code described by a sparse parity - check matrix .an ldpc parity - check matrix can be represented in graphical form by a tanner graph .suppose the ldpc parity - check matrix , has columns and rows ; the corresponding tanner graph consists of variable nodes , check nodes , and an edge for every non - zero entry in .each variable node represents a bit of the codeword while each check node represents a parity - check constraint of the code .assuming is full rank , the code rate , r is given by .an ldpc code ensemble is typically specified by an edge degree distribution ( ) from the perspective of tanner graph edges : where ( resp . , )is the fraction of edges that are connected to degree variable nodes ( resp . , check nodes ) and ( resp . , ) is the maximum variable node degree ( resp . , check node degree ) .we let ( resp . , ) be the set of s for non zero ( resp . , ) .the tanner graph for a rate - half irregular ldpc code is shown in fig .[ fig.st_tanner ] , where = [ 2 , 3 , 6 , 20 ] , = [ 7 , 8 ] and the degree distribution is given by and .( resp . , ) represents the variable nodes ( resp . , check nodes ) .number of nodes for different edge types are shown as fractions of the code length , width=172 ] met - ldpc code ensembles are generally described based on a node - perspective , as opposed to the edge - perspective that is normally used for standard ldpc code ensembles .the met - ldpc code ensemble can be specified through two multinationals associated to the variable and check nodes . where and are vectors defined as follows .let denote the number of edge types used in the graph ensemble and denote the number of different channels over which a bit may be transmitted .let the vector ] be a received degree where is associated with punctured variables ( variables not transmitted to the receiver ) .the vector of variables corresponding to the edge distributions is denoted by ] and . are non - negative reals corresponding to the fraction of variable nodes with type ( ) and the fraction of check nodes with type ( ) in the graph respectively . in this research ,all the received variables are transmitted through a single link ( i.e = 1 ) .hence for un - punctured variables in the codeword ( i.e = 0 , = 1 ) = [ 0 , 1 ] and for punctured variables(i.e = 1 , = 0 ) = [ 1 , 0 ] .we can determine the decoding threshold for a given ldpc code ensemble defined by its degree distribution pair ( ) via de .our task is to find the degree distribution pair which yields the largest possible threshold .this a is non - linear cost function maximization problem . on the binary erasure channel ( bec )the optimization problem is as follows . 
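in words , the problem referred to here maximizes the bec decoding threshold over the degree distribution pair , subject to the rate and normalization constraints given below . the threshold itself comes from the standard density - evolution recursion for the bec , which can be sketched as follows ( an illustrative textbook version with a bisection search , not the paper s formulation or code ) :

```python
# density evolution on the bec for an irregular ldpc ensemble; lam and rho map a
# degree d to the fraction of edges attached to degree-d variable / check nodes.
def poly(dist, x):
    return sum(frac * x ** (d - 1) for d, frac in dist.items())

def de_converges(lam, rho, eps, iters=2000, tol=1e-10):
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))   # standard bec recursion
        if x < tol:
            return True
    return False

def bec_threshold(lam, rho, steps=40):
    lo, hi = 0.0, 1.0
    for _ in range(steps):                              # bisection on the erasure rate
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(lam, rho, mid) else (lo, mid)
    return lo

lam, rho = {3: 1.0}, {6: 1.0}                           # regular (3,6) ensemble as example
rate = 1.0 - sum(f / d for d, f in rho.items()) / sum(f / d for d, f in lam.items())
print(rate, bec_threshold(lam, rho))                    # roughly 0.5 and 0.429
```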
on other channelsan appropriate de function , or a suitable approximation , is used for ( 5 ) .+ for a fixed code rate , r and maximum number of decoder iterations , , where r ] where is the number of edge classes and for all .+ for a fixed code rate , r and maximum number of decoder iterations , , where r ] ) where and is given by ( 20),width=288 ] , \lambda_2 = [ 0 , 0 , 3 , 0 ] , \lambda_3 = [ 0 , 0 , 3 , 0 ] , \lambda_4 = [ 0 , 0 , 0 , 1 ] $ ] , width=288 ] .optimization of 4-edge class rate - half met - ldpc codes on the bi - awgn channel [ cols="^,^,^,^,^,^,^,^,^ " , ]in this paper , we introduced a novel code optimization technique which can be successfully applied to optimize degree distributions for both standard ldpc codes and met - ldpc codes .we then proposed a joint optimization technique for met - ldpc codes which allows the optimization of the met structure and degree distribution given just the number of edge classes and maximum node degrees .we found that our proposed ar method works best for optimizing the edge degree distribution for a given set of allowed degrees while dif.e works best for optimizing the set of allowed degrees .t. j. richardson , m. a. shokrollahi , and r. l. urbanke , `` design of capacity - approaching irregular low - density parity - check codes , '' _ ieee trans .47 , no . 2 ,619637 , feb .
|
a low - density parity - check ( ldpc ) code is a linear block code described by a sparse parity - check matrix , which can be efficiently represented by a bipartite tanner graph . the standard iterative decoding algorithm , known as belief propagation , passes messages along the edges of this tanner graph . density evolution is an efficient method to analyze the performance of the belief propagation decoding algorithm for a particular ldpc code ensemble , enabling the determination of a decoding threshold . the basic problem addressed in this work is how to optimize the tanner graph so that the decoding threshold is as large as possible . we introduce a new code optimization technique that restricts the search space ; it can be thought of as minimizing the randomness in differential evolution or limiting the search range in exhaustive search . this technique is applied to the design of good irregular ldpc codes and multi - edge type ldpc codes .
|
individuals of ecological communities permanently face the choice of either cooperating with each other , or of cheating .while cooperation is beneficial for the whole population and essential for its functioning , it often requires an investment by each agent .cheating is then tempting , yielding social dilemmas where defection is the rational choice that would yet undermine the community and could even lead to ultimate self - destruction .however , bacteria or animals do not act rationally ; instead , the fate of their populations is governed by an evolutionary process , through reproduction and death .the conditions under which cooperation can thereby evolve are subject of much contemporary , interdisciplinary research .evolutionary processes possess two main competing aspects .the first one is selection by individuals different fitness , which underlies adaptation and is , by neo - darwinists , viewed as the primary driving force of evolutionary change . in social dilemmas , defectors exploit cooperators rewarding them a higher fitness ; selection therefore leads to fast extinction of cooperation , such that the fate of the community mimics the rational one . a large body of work is currently devoted to the identification of mechanisms that can reinforce cooperative behavior , _e.g. _ kin selection , reciprocal altruism , or punishment .however , the evolution of cooperation in darwinian settings still poses major challenges .the second important aspect of evolution are random fluctuations that occur from the unavoidable stochasticity of birth and death events and the finiteness of populations .neutral theories emphasize their influence which can , ignoring selection , explain many empirical signatures of ecological systems such as species - abundance relations as well as species - area relationships .the importance of neutral evolution for the maintenance of cooperation has so far found surprisingly little attention . in this article, we introduce a general concept capable to investigate the effects of selection versus fluctuations by analyzing extinction events .we focus on social dilemmas , i.e. , we study the effects of darwinian versus neutral evolution on cooperation . for this purpose ,we consider a population that initially displays coexistence of cooperators and defectors , i.e. , cooperating and non - cooperating individuals .after some transient time , one of both ` species ' will disappear , simply due to directed and stochastic effects in evolution and because extinction is irreversible : an extinct species can not reappear again .the fundamental questions regarding cooperation are therefore : will cooperators eventually take over the whole population , and if not , for how long can a certain level of cooperation be maintained ?we show that the answers to these questions depend on the influence of stochasticity . for large fluctuations , evolution is effectively neutral , and cooperation maintained on a long time - scale , if not ultimately prevailing .in contrast , small stochastic effects render selection important , and cooperators die out quickly if disfavored .we demonstrate the emergence of an ` edge of neutral evolution ' delineating both regimes .* different types of social dilemmas*. we consider a population of cooperators and defectors , and describe their interactions in terms of four parameters and , see text . depending on the payoff - differences and , four qualitatively different scenarios arise . 
[ tab_dilemmas ] consider a population of individuals which are either cooperators or defectors . we assume that individuals randomly engage in pairwise interactions , whereby cooperators and defectors behave distinctly differently and thereby gain different fitness . the population then evolves by fitness - dependent reproduction and random death , i.e. , a generalized moran process , which we describe in detail in the next subsection . here we present the different possible fitness gains of cooperators and defectors . in the _ prisoner s dilemma _ a cooperator provides a benefit to another individual , at a cost to itself ( with the cost falling short of the benefit ) . in contrast , a defector refuses to provide any benefit and hence does not pay any costs . for the selfish individual , irrespective of whether the partner cooperates or defects , defection is favorable , as it avoids the cost of cooperation , exploits cooperators , and ensures not to become exploited . however , if all individuals act rationally and defect , everybody is , with a gain of , worse off compared to universal cooperation , where a net gain of would be achieved . the prisoner s dilemma therefore describes , in its most basic form , the fundamental problem of establishing cooperation . we can generalize the above scheme to include other basic types of social dilemmas . namely , two cooperators that meet are both rewarded a payoff , while two defectors obtain a punishment . when a defector encounters a cooperator , the first exploits the second , gaining the temptation , while the cooperator only gets the sucker s payoff . social dilemmas occur when , such that cooperation is favorable in principle , while temptation to defect is large : . these interactions may be summarized by the payoff matrix . hereby , the entries in the upper row describe the payoff that a cooperator obtains when encountering a cooperator or a defector , and the entries in the lower row contain the payoffs for a defector . variation of the parameters and yields four principally different types of games , see tab . [ tab_dilemmas ] and fig . [ fig : c ] . the _ prisoner s dilemma _ as introduced above arises if the temptation to defect is larger than the reward , and if the punishment is larger than the sucker s payoff , e.g. , , , and . as we have already seen above , in this case , defection is the best strategy for the selfish player .
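the payoff matrix and the payoff rankings that define the four games can be encoded directly ; the sketch below uses placeholder numerical values ( the paper s example values are not reproduced ) and ignores ties .

```python
# payoff matrix [[R, S], [T, P]]: rows = focal player cooperates / defects,
# columns = partner cooperates / defects.
def classify_game(R, S, T, P):
    if T > R and P > S:
        return "prisoner's dilemma"
    if T > R and S > P:
        return "snowdrift game"
    if R > T and P > S:
        return "coordination game"
    return "by-product mutualism"        # R > T and S > P

R, S, T, P = 3.0, 1.0, 5.0, 2.0          # placeholder payoffs, not the paper's values
payoff_matrix = [[R, S],
                 [T, P]]
print(classify_game(R, S, T, P))         # -> prisoner's dilemma
```

the first branch corresponds to the ranking just described for the prisoner s dilemma ; the remaining branches correspond to the three games discussed next .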
within the three other types of games ,defectors are not always better off .for the _ snowdrift game _ the temptation is still higher than the reward but the sucker spayoff is larger than the punishment .therefore , cooperation is favorable when meeting a defector , but defection pays off when encountering a cooperator , and a rational strategy consists of a mixture of cooperation and defection .another scenario is the _ coordination game _ , where mutual agreement is preferred : either all individuals cooperate or defect as the reward is higher than the temptation and the punishment is higher than sucker s payoff .last , the scenario of _ by - product mutualism _ yields cooperators fully dominating defectors since the reward is higher than the temptation and the sucker spayoff is higher than the punishment .all four situations and the corresponding ranking of the payoff values are depicted in tab .[ tab_dilemmas ] and fig .[ fig : c ] .we describe the evolution by a generalized moran process , where the population size remains constant and reproduction is fitness - dependent , followed by a random death event .let us denote the number of cooperators by ; the number of defectors then reads .the individuals fitness are given by a constant background fitness , set to , plus the payoffs obtained from social interactions .the fitness of cooperators and defectors thus read and , resp .. in the following , we assume weak selection , i.e. , the payoff coefficients are small compared to the background fitness .note that within this limit , the self interactions of individuals are only of minor relevance . more important , in the case of weak selection , the evolutionary dynamics of the game depends only on the payoff differences and .the different types of social dilemmas arising from theses two parameters are listed in table 1 . + of cooperation and therefore have a fitness advantage of compared to cooperators . (a ) , exemplary evolutionary trajectories . a high selection strength, i.e. , a high fitness difference ( purple ) , leads to darwinian evolution and fast extinction of cooperators , while a small one , ( green ) , allows for dominant effects of fluctuations and maintenance of cooperation on long time - scales .we have used in both cases .( b ) , the dependence of the corresponding mean extinction time on the system size .we show data from stochastic simulations as well as analytical results ( solid lines ) for , starting from equal abundances of both species , for different values of ( see text ) : ( ) , ( ) , ( ) , and ( ) .the transition from the neutral to the darwinian regime occurs at population sizes , and .they scale as : , as is confirmed by the rescaled plot where the data collapse onto the universal scaling function , shown in the inset.,title="fig:",height=207 ] of cooperation and therefore have a fitness advantage of compared to cooperators .( a ) , exemplary evolutionary trajectories . a high selection strength , i.e. 
, a high fitness difference ( purple ) , leads to darwinian evolution and fast extinction of cooperators , while a small one , ( green ) , allows for dominant effects of fluctuations and maintenance of cooperation on long time - scales .we have used in both cases .( b ) , the dependence of the corresponding mean extinction time on the system size .we show data from stochastic simulations as well as analytical results ( solid lines ) for , starting from equal abundances of both species , for different values of ( see text ) : ( ) , ( ) , ( ) , and ( ) .the transition from the neutral to the darwinian regime occurs at population sizes , and .they scale as : , as is confirmed by the rescaled plot where the data collapse onto the universal scaling function , shown in the inset.,title="fig:",height=207 ] in the moran process , reproduction of individuals occurs proportional to their fitness , and each reproduction event is accompanied by death of a randomly chosen individual . as an example , the rate for reproduction of a defector and corresponding death of a cooperator reads whereby denotes the average fitness .the time scale is such that an average number of reproduction and death events occur in one time step .the evolutionary dynamics is intrinsically stochastic .although defectors may have a fitness advantage compared to cooperators , the latter also have a certain probability to increase .this situation is illustrated in fig .[ fig : phasespace ] for a population of individuals and the dynamics of the prisoner s dilemma . darwinian evolution , through selection by individuals fitness , points to the ` rational ' state of only defectors , while fluctuations oppose this dynamics and can lead to a state of only cooperators . in any case , upon reaching overall defection or cooperation , the temporal development comes to an end .one species therefore eventually dies out .the mean extinction time , i.e. , the mean time it takes a population where different species coexist to eventually become uniform , allows to distinguish darwinian from neutral evolution .consider the dependence of the mean extinction time on the system size .selection , as a result of some interactions within a finite population , can either stabilize or destabilize a species coexistence with others as compared to neutral interactions , thereby altering the mean time until extinction occurs .instability leads to steady decay of a species , and therefore to fast extinction : the mean extinction time increases only logarithmically in the population size , , and a larger system size does not ensure much longer coexistence .this behavior can be understood by noting that a species disfavored by selection decreases by a constant rate .consequently , its population size decays exponentially in time , leading to a logarithmic dependence of the extinction time on the initial population size .in contrast , stable existence of a species induces , such that extinction takes an astronomically long time for large populations . in this regime ,extinction only stems from large fluctuations that are able to cause sufficient deviation from the ( deterministically ) stable coexistence .these large deviations are exponentially suppressed and hence the time until a rare extinction event occurs scales exponentially in the system size .an intermediate situation , i.e. , when has a power - law dependence on , , signals dominant influences of stochastic effects and corresponds to neutral evolution . 
herethe extinction time grows considerably , though not exponentially , in increasing population size .large therefore clearly prolongs coexistence of species but can still allow for extinction within biologically reasonable time - scales .a typical neutral regime is characterized by , such that scales linearly in the system size .this corresponds to the case where the dynamics yields an essentially unbiased random walk in state space .the mean - square displacement grows linearly in time , with a diffusion constant proportional to .the absorbing boundary is thus reached after a time proportional to the system size .other values of can occur as well .for example , and as shown later , can occur in social dilemmas ( regimes ( 2 ) in fig .[ fig : c ] ) . to summarize , the mean extinction time can be used to classify evolutionary dynamics into a few fundamental regimes .darwinian evolution can yield stable and unstable coexistence , characterized by and , resp .. power law dependences , , indicate neutral evolution .transitions between these regimes can occur and manifest as crossovers in the functional relation .an approximate analytical description , valid for a large number of interacting individuals , is possible .the quantity of interest is thereby the probability of having cooperators at time .its time evolution is described by a master equation specified by transition rates such as ( [ eq : transprob ] ) . for large population sizes the master equation can be approximately described within a generalized diffusion approach , where the fraction of cooperators is considered as a continuous variable .the temporal development of is then described by a fokker - planck equation , +\frac{1}{2}\frac{\partial^2\;}{\partial x^2}\left[\beta(x)p(x , t)\right]\,.\end{aligned}\ ] ] hereby , describes the darwinian of the evolution , due to selection by fitness differences , and corresponds to the deterministic dynamics .the second part , which involves the diffusion term , accounts for fluctuations ( to leading order ) and thereby describes undirected random drift . decreases like with increasing population size . for the social dilemmas which we study in this article and given by , \nonumber , \\\beta(x)&=&\frac{1}{n}x(1-x)\left[2+({\mathcal{s}}-{\mathcal{p}})(1-x)+({\mathcal{t}}-{\mathcal{r } } ) x\right]\nonumber\\ & \approx & \frac{2}{n}x(1-x)\,.\end{aligned}\ ] ] here , the approximation of given in the last line is valid since weak selection is assumed .the prisoner s dilemma , specified by describes the situation where defectors have a frequency independent fitness advantage as compared to cooperators .this scenario is frequently studied in population genetics ; we briefly discuss it in the following .the directed part and diffusion coefficients are given by , \approx\frac{2}{n}x(1-x)\,.\end{aligned}\ ] ] with these one can calculate the fixation probability to end up with only cooperators if starting with an equal fraction of cooperators and defectors .it has already been calculated in previous work and reads , the probability for fixation of defectors follows as . 
within the darwinian regime ( )defectors fixate ( , whereas for the neutral regime ( ) both strategies have the same chance of prevailing ( ) .the fixation probability gives no information about the typical time needed for extinction of one of the two species .however , this time is important to determine whether extinction happens within the time scale of observation .we turn to this question in the following .the above analytical description , in form of the fokker - planck equation ( [ eq : fpe ] ) , can be employed for computing the mean extinction time .the latter refers to the mean time required for a population initially consisting of a fraction of cooperators to reach a uniform state ( only either cooperators or defectors ) .it is given as solution to the corresponding backward kolmogorov equation , (x)=-1\,,\ ] ] with appropriate boundary conditions .this equation can be solved by iterative integration . in detail , the mean extinction time , , if starting with an equal fraction of cooperators is given by \nonumber\\ & & \times\left[\int_0 ^ 1 du /\psi(u ) \right]^{-1},\end{aligned}\ ] ] where is given by .we have performed these integrals for the general moran process and show the results in the following .for the special case of the prisoner s dilemma , specified by , ( frequency independent fitness advantage ) , eq .( [ eq : integratingdetails ] ) can be solved exactly .the solution reads , \right\ } \big.\nonumber\\ & & + \big.{{\sf p}}_{{\sf fix , d}}\left\{\ln{(cn)}+\gamma-{{\sf ei}}\left(-cn/2\right)+e^{-cn}\left[{{\sf ei}}\left(cn/2\right)-{{\sf ei}}\left(cn \right ) \right]\right\}\big]\ , , \label{eq : t_constant_fitness}\end{aligned}\ ] ] where denotes the exponential integral and is the euler mascheroni constant . and denote the fixation probabilities of cooperators and defectors , given by eq .( [ eq : fixprob ] ) .the analytical solution of the mean extinction time as a function of is shown and compared to stochastic simulations in fig .[ fig : a ] . for a further discussion of ( eq . [ eq : t_constant_fitness ] ) and its impact on evolutionary dynamics we defer the reader to section [ sec : results ] . here , just note that the asymptotic behavior , of is given by for , and for . with this , the well known asymptotic solutions for high and low population size , and are obtained . for general social dilemmas with arbitrary payoff values , we need to rely on some approximations . using the drift and diffusion coefficient given by eq .( [ eq : fpecoeff ] ) we now linearize the fraction , i.e. we write .hereby denotes the fixed point of the deterministic dynamics , where , and . as an example , in the situation , , and , we obtain the mean extinction time , +{{\sf erfi}}\left(\sqrt{g}x^*\right)\rbrace^{-1}\times\nonumber\\ \times&&\big\{{{\sf erfi}}\left(-\sqrt{g}x^*\right)\left[-\mathcal{f}\left(g\left(1-x^*\right)\right)+\mathcal{f}\left(g\left(1/2-x^*\right)\right ) \right]\big.\nonumber \\ & & - { { \sf erfi}}\left(\sqrt{g}\left(1-x^*\right)\right)\left[\mathcal{f}\left(g\left(1/2-x^*\right)\right)-\mathcal{f}\left(g\left(x^*\right)\right ) \right]\nonumber \\ & & + \big . { { \sf erfi}}\left(\sqrt{g}\left(1/2-x^*\right)\right)\left[\mathcal{f}\left(g\left(1-x^*\right)\right)-\mathcal{f}\left(g\left(x^*\right)\right)\right]\big\ } \label{eq : t_gen}\end{aligned}\ ] ] hereby , denotes the complex error function , and involves a generalized hypergeometric function . for graphical representation of eq.(8 ) see fig .[ fig : d]a ( upper branch ) . 
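as a cross - check of such closed - form results , the mean extinction time can also be obtained numerically from the backward kolmogorov equation , alpha(x) t'(x) + beta(x) t''(x)/2 = -1 with absorbing boundaries . in the sketch below , beta(x) = 2 x (1 - x) / n follows the weak - selection approximation quoted above , while the drift alpha(x) = - c x (1 - x) is an assumed stand - in for a frequency - independent fitness advantage c of defectors and is not copied from the paper s coefficients .

```python
import numpy as np

N, c = 100, 0.05
M = 400
x = np.linspace(1.0 / N, 1.0 - 1.0 / N, M)   # stay one individual away from the boundaries
h = x[1] - x[0]
alpha = -c * x * (1.0 - x)                   # assumed drift (defectors favored)
beta = 2.0 * x * (1.0 - x) / N               # diffusion in the weak-selection form

# central finite differences for alpha*T' + (beta/2)*T'' = -1, with T = 0 at the
# (approximately) absorbing end points of the grid
A = np.zeros((M, M))
b = -np.ones(M)
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0
for i in range(1, M - 1):
    A[i, i - 1] = beta[i] / (2 * h * h) - alpha[i] / (2 * h)
    A[i, i] = -beta[i] / (h * h)
    A[i, i + 1] = beta[i] / (2 * h * h) + alpha[i] / (2 * h)

T = np.linalg.solve(A, b)
print(np.interp(0.5, x, T))                  # mean extinction time starting from x = 1/2
```

for the frequency - independent case the printed value can be checked against the closed - form expression labelled [ eq : t_constant_fitness ] above .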
as before ,the correct asymptotic behavior can also be calculated for this case .note that the asymptotic behavior of is given by for and for . for small population size ,the mean extinction time scales again like . forasymptotically large system sizes the scaling depends on the value of the fixed point . for an internal fixed point ,as arises in the snowdrift game , scales as expected like . in the results section, we analyze the properties of the analytical form of the mean extinction time , eqs .( [ eq : t_constant_fitness ] ) and ( [ eq : t_gen ] ) , together with numerical simulations , and demonstrate how it defines an emerging edge of neutral evolution . in the results section ,we show that the mean extinction time , eq .( [ eq : t_gen ] ) , exhibits different regimes of neutral and darwinian dynamics . here , we provide further information on how the boundaries between these regimes can be obtained analytically . and , a prisoner s dilemma , snowdrift game , by - product mutualism or coordination game arises .two regimes of neutral evolution , ( 1 ) and ( 2 ) shown in grey , intervene two darwinian regimes , ( 3 ) and ( 4 ) , depicted in white .coexistence of cooperators and defectors is lost after a mean time which discriminates the distinct regimes : in ( 1 ) , we encounter , while emerges in ( 2 ) , in ( 3 ) , and in ( 4 ) . in the prisoner s dilemma and the coordination game , neutral evolutioncan thus maintain cooperation at a much longer time than darwinian evolution .the edges of neutral evolution , red and blue curves , scale as ( see text ) . we therefore show them depending on and , where they adopt universal shapes.,width=377 ] for this purpose , we further approximate the dynamics .let us , firstly , focus on the edge of the regime where emerges .we note that , before unavoidable extinction of one species occurs , a quasi - stationary distribution may form around the fixed point . following the generic behavior of an ornstein - uhlenbeck process , its shape is approximately gaussian .its width is given by . and are specified in the preceding section .now , for small width , , the darwinian evolution dominates the behavior , meaning or .in contrast , if the dynamics is essentially a random walk , and emerges .the edge of neutral evolution therefore arises at . remembering that is given by , it follows that the edge between both regimes for is described by .numerical simulations yield a good agreement with this prediction . as discussed later ( see fig .2 ) , they reveal that the crossover between the two regimes is remarkably sharp . the constant which specifies the exact position of the crossovercan therefore be estimated as .it follows that the regime of therefore corresponds to the square circumscribed by straight lines connecting the points as shown in fig .[ fig : c ] .a similar argument allows to determine the crossover from the other neutral regime , with , to the darwinian regimes .the neutral regime emerges if the fixed point is close to the boundaries , such that or denotes the crossover to the darwinian regimes . from these relations , if follows that the shapes of this second neutral regime are described by and .the proportionality constant has again been estimated from numerical simulations . 
from the latter, we have also found that the parabolic curves constitute a valid approximation to this second edge of neutral evolution .we employ the analytical expression , eqs .( [ eq : t_constant_fitness ] ) and ( [ eq : t_gen ] ) , for the mean extinction time , as well as computer simulations , to show how regimes of darwinian and neutral evolution can be distinguished .we demonstrate that neutral evolution can maintain cooperation on much longer time - scales than darwinian , even if cooperation has a fitness disadvantage .we start with the special case of the prisoner s dilemma where defectors have a frequency independent fitness advantage compared to cooperators . the fixation probabilities , eq . ( [ eq : fixprob ] ) , provides first insight into the dynamics .when the population size is large and selection by fitness differences dominates the dynamics , i.e. , when , the probability that defectors ultimately take over the whole population tends to .cooperators are guaranteed to eventually die out .this is the regime of _ darwinian _ evolution ; the resulting outcome equals the one of rational agents .however , in the situation of small populations and small fitness difference , i.e. , , both cooperators and defectors have an equal chance of of fixating . in this regime, fluctuations have an important influence and dominate the evolutionary dynamics , leaving fitness advantages without effect , evolution is _ neutral_. further quantification of the regimes of darwinian and neutral evolution is feasible by considering the mean extinction time , given by eq .( [ eq : t_constant_fitness ] ) .it is compared to stochastic simulations in fig .[ fig : a ] b for different costs ( fitness advantages ) .the excellent agreement confirms the validity of our analytic approach . regarding the dependence of on the population size and the fitness difference , the mean extinction time can be cast into the form , with a scaling function . and are characteristic time scales and population sizes depending only on the selection strength . analyzing its properties , it turns out that increases linearly in for small argument , such that , c.f .[ fig : a ] b. this is in line with our classification scheme and the expected behavior .it indicates that for small system sizes , , evolution is _neutral_. fluctuations dominate the evolutionary dynamics while the fitness advantage of defectors does not give them an edge , c.f .[ fig : a ] a. indeed , in this regime , cooperators and defectors have an equal chance of surviving , see eq .( [ eq : fixprob ] ) .the behavior shows that the extinction time considerably grows with increasing population size ; a larger system size proportionally extends the time cooperators and defectors coexist .as expected , a very different behavior emerges for large system sizes , , where increases only logarithmically in , and therefore , again in correspondence with our classification scheme of the mean extinction time .the extinction time remains small even for large system sizes , and coexistence of cooperators and defectors is unstable .indeed , in this regime , selection dominates over fluctuations in the stochastic time evolution and quickly drives the system to a state where only defectors remain , c.f .[ fig : a ] a. the evolution is _darwinian_. as described above , the regimes of neutral and darwinian evolution emerge for and , respectively .the cross - over population size delineates both scenarios . 
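data from stochastic simulations of this kind can be generated by simulating the moran process directly and averaging the extinction times over many runs . the sketch below is a minimal version with a frequency - independent fitness advantage c for defectors ( cooperators have fitness one , defectors one plus c ) ; it is an illustration , not the paper s code .

```python
import random

def mean_extinction_time(N, c, runs=200):
    """average time until cooperators or defectors die out, starting from equal
    abundances; one time unit corresponds to N elementary birth-death events."""
    total = 0.0
    for _ in range(runs):
        i, t = N // 2, 0.0                               # i = number of cooperators
        while 0 < i < N:
            f_bar = (i + (N - i) * (1.0 + c)) / N        # average fitness
            born_c = random.random() < i / (N * f_bar)   # fitness-proportional birth
            dies_c = random.random() < i / N             # uniformly random death
            i += int(born_c) - int(dies_c)
            t += 1.0 / N
        total += t
    return total / runs

# small N*c (effectively neutral) versus large N*c (darwinian)
print(mean_extinction_time(N=100, c=0.001), mean_extinction_time(N=100, c=0.2))
```

sweeping the population size at fixed cost in such a simulation reproduces the crossover between the linear and the logarithmic growth of the extinction time discussed above .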
further analyzing the universal scaling function , as well as comparison with data from stochastic simulations , see fig .[ fig : a ] b , reveals that the transition at is notably sharp .we therefore refer to it as the _ edge of neutral evolution_. the crossover time and the crossover population size which define the edge of neutral evolution decrease as in increasing cost .this can be understood by recalling that the cost corresponds to the fitness advantage of defectors and can thus be viewed as the selection strength .the latter drives the darwinian dynamics which therefore intensifies when grows , and the regime of neutral evolution diminishes . on the other hand , when the cost of cooperation vanishes , evolution becomes neutral also for large populations . indeed ,in this case , defectors do not have a fitness advantage compared to cooperators ; both do equally well .our approach now yields information about how large the cost may be until evolution changes from neutral to darwinian . from numerical inspection of find that neutral evolution is present for , and darwinian evolution takes over for .this resembles a condition previously derived by kimura , ohta , and others for frequency independent fitness advantages .the edge of neutral evolution arises at and . as a consequence we note that, though selection pressure clearly disfavors cooperation , our results reveal that the ubiquitous presence of randomness ( stochasticity ) in any population dynamics opens a window of opportunity where cooperation is facilitated . in the regime of neutral evolution , for , cooperators have a significant chance of taking over the whole population when initially present . even if not , they remain on time - scales proportional to the system size , , and therefore considerably longer than in the regime of darwinian evolution , where they extinct after a short transient time , . + , depending on , for different transitions emerging in social dilemmas ( c.f . fig[ fig : c ] ) .( a ) , transition from the neutral regime ( 1 ) , where emerges , to the darwinian regimes ( 3 ) ( ) as well as ( 4 ) ( ) .( b ) , from neutral dynamics in regime ( 2 ) ( ) to the darwinian regimes ( 3 ) ( ) and ( 4 ) ( ) .( c ) , transition between the two neutral regimes ( 1 ) ( ) and ( 2 ) ( ) .analytical calculations are shown as black lines , and symbols have been obtained from stochastic simulations for large ( ) , medium ( ) , and small ( ) values of and/or .the data collapse onto universal curves reveals the accuracy of the scaling laws . in ( a ) ,we have used , while , in ( b ) , and , in ( c).,title="fig:",height=128 ] , depending on , for different transitions emerging in social dilemmas ( c.f .[ fig : c ] ) .( a ) , transition from the neutral regime ( 1 ) , where emerges , to the darwinian regimes ( 3 ) ( ) as well as ( 4 ) ( ) .( b ) , from neutral dynamics in regime ( 2 ) ( ) to the darwinian regimes ( 3 ) ( ) and ( 4 ) ( ) .( c ) , transition between the two neutral regimes ( 1 ) ( ) and ( 2 ) ( ) .analytical calculations are shown as black lines , and symbols have been obtained from stochastic simulations for large ( ) , medium ( ) , and small ( ) values of and/or .the data collapse onto universal curves reveals the accuracy of the scaling laws . 
in ( a ) , we have used , while , in ( b ) , and , in ( c).,title="fig:",height=128 ] , depending on , for different transitions emerging in social dilemmas ( c.f .[ fig : c ] ) .( a ) , transition from the neutral regime ( 1 ) , where emerges , to the darwinian regimes ( 3 ) ( ) as well as ( 4 ) ( ) .( b ) , from neutral dynamics in regime ( 2 ) ( ) to the darwinian regimes ( 3 ) ( ) and ( 4 ) ( ) .( c ) , transition between the two neutral regimes ( 1 ) ( ) and ( 2 ) ( ) .analytical calculations are shown as black lines , and symbols have been obtained from stochastic simulations for large ( ) , medium ( ) , and small ( ) values of and/or .the data collapse onto universal curves reveals the accuracy of the scaling laws . in ( a ) , we have used , while , in ( b ) , and , in ( c).,title="fig:",height=128 ] let us now consider the influence of fluctuations within the more general form of social dilemmas , given by the parameters .we employ the analytical form of the mean extinction time , eq .( [ eq : t_gen ] ) , as well as results from stochastic simulations .examples for different paths in parameter space are shown in fig .[ fig : d ] .again , the approximative analytical results agree excellently with numerics .concering the dependence of the mean extinction time on the population size , different behaviors emerges , reflecting the different regimes of evolutionary dynamics .two regimes of darwinian evolution form , depicted white in fig .[ fig : c ] .the first one occurs within the snowdrift game , where the extinction time increases exponentially in the population size , , and coexistence of cooperators and defectors is stable .the second regime comprises parts of the prisoner s dilemma , the coordination game , and by - product mutualism .there , either defectors or cooperators eventually survive , and the mean extinction time of the other strategy is small , and obeys a logarithmic law . we have encountered this regime already in the particular case of the prisoner s dilemma specified by .these two darwinian regimes are separated by two regimes of neutral evolution , shown in grey in fig .[ fig : c ]. first , for small and small differences in the payoffs ( i.e. , around the point where the four types of games coincide ) a behavior emerges .second , at the lines where the snowdrift game turns into the prisoner s dilemma resp . by - productmutualism , the mean extinction time increases as a square - root in the population size , . similar to the prisoner s dilemma , we now aim at identifying the edge of neutral evolution , i.e. , the crossover from the darwinian regimes to the regimes of neutral evolution .we have calculated the boundaries of both neutral regimes , and analytically , see methods section [ methods_edges ] .they are described by straight lines for the first one and by parabola - shaped lines for the second one , see fig .[ fig : c ] . both edges of neutral evolution scale proportional to the system size .therefore , while increasing the system size changes the payoff parameters where the crossovers appear , the shape and relations of the different regimes are not altered . 
concerning the dependence of the edges of neutral evolution on the characteristic strength of selection , meaning the average contribution of the fitness - dependent payoff to the overall fitness , different scaling laws arise .for the crossover from the neutral regime to the other regimes , and scale as .in contrast , a scaling law for crossovers between the neutral regime with and the darwinian regimes emerges .this different scaling behavior arises , for example , for and varying as shown in fig .[ fig : d ] b.cooperation is often threatened by exploitation and therefore , although beneficial , vulnerable to extinction . in evolutionary dynamics , this mechanism comes in through selection by individuals fitness , the driving force of darwinian evolution .however , evolution also possesses stochastic aspects . employing a standard formulation of social dilemmas, we have shown that fluctuations can support cooperation in two distinct ways .first , they can lead cooperators to fully take over the population .second , neutral evolution considerably increases the time at which cooperators and defectors coexist , i.e. , at which a certain level of cooperation is maintained . to emphasize the importance of the second point , we note that in real ecological systems the rules of the dynamics themselves change due to external or internal influences , setting an upper limit to the time - scales at which evolution with constant payoffs , as we study here , applies. in particular , these times can be shorter than the times that would be needed for extinction of either cooperators or defectors , such that it may be less important to look at which of both would ultimately remain , but what the time - scales for extinction are .quantitatively , we have shown the emergence of different darwinian and neutral regimes . in the darwinian regime of the prisoners dilemma , cooperators are guaranteed to become extinct ; the same is true for the second neutral regime , where .however , in the other neutral regime , with , a random process determines whether cooperators or defectors prevail .cooperators may therefore take over due to essentially neutral evolution .moreover , even if cooperators eventually disappear , they remain for a considerably longer time in the neutral regimes than in the darwinian regime .indeed , in the regimes of neutral evolution , coexistence of cooperators and defectors is maintained for a mean time obeying resp . . 
for medium and large population sizes , this time exceeds by far the time atwhich cooperation disappears in the darwinian regimes of the prisoner s dilemma or of the coordination game ( if defectors happen to dominate in the latter case ) .neutral evolution can therefore maintain cooperation on a much longer time - scale than darwinian evolution .this effect is relevant as the neutral regimes considerably extend into the prisoner s dilemma as well as the cooperation game region .there , a form of neutrally maintained cooperation evolves .our results have been obtained by applying a general concept based on extinction times that allows to classify evolutionary dynamics into regimes of darwinian and neutral character , separated by an emerging edge of neutral evolution .apart from the social dilemmas under consideration here , we believe that our quantitative analytical approach can be versatilely applied to disentangle the effects of selection and fluctuations in various ecological situations where different species coexist .encouraged by our findings , we expect such studies to reveal further unexpected effects of fluctuations on ecology and evolution. financial support of the german excellence initiative via the program nanosystems initiative munich and the german research foundation via the sfb tr12 symmetries and universalities in mesoscopic systems is gratefully acknowledged .t. r. acknowledges funding by the elite - netzwerk bayern .10 r. muneepeerakul , e. bertuzzo , h. j. lynch , w. f. fagan , a. rinaldo , and i. rodriguez - iturbe .neutral metacommunity models predict fish diversity patterns in mississippi - missouri basin ., 453:220223 , 2008 .
|
the functioning of animal as well as human societies fundamentally relies on cooperation . yet , defection is often favorable for the selfish individual , and social dilemmas arise . selection by individuals fitness , usually the basic driving force of evolution , then quickly eliminates cooperators . however , evolution is also governed by fluctuations that can be of greater importance than fitness differences , and can render evolution effectively neutral . here , we investigate the effects of selection versus fluctuations in social dilemmas . by studying the mean extinction times of cooperators and defectors , a variable sensitive to fluctuations , we are able to identify and quantify an emerging ` edge of neutral evolution ' which delineates regimes of neutral and darwinian evolution . our results reveal that cooperation is significantly maintained in the neutral regimes . in contrast , the classical predictions of evolutionary game theory , where defectors beat cooperators , are recovered in the darwinian regimes . our studies demonstrate that fluctuations can provide a surprisingly simple way to partly resolve social dilemmas . our methods are generally applicable to estimate the role of random drift in evolutionary dynamics .
|
complex network theory has been proven to be a powerful framework to understand the structure and dynamics of complex systems .entity information network is a kind of complex network that describes the structural relationships between entities . with more and more researches aboutentity information network model having been proposed , it is widely used in social network analyzing , image association and other fields . depending on the diversity of the relationships and entities , entity information network model can be divided into two categories : based on simple structure and based on complex one . generally , only one kind of entity relationship is contained in simple structure based model , such as item - to - item , object - to - item or object - to - object relationships .in recent years , many modeling methods on item - to - item information network are proposed . in refs ,the similarity between media sources is evaluated by mapping different types of medium s features to a common space based on media contextual clues .zhu et al . proposed an information network model to correlate tweets , emotion features and users based on emotion analysis .matrix transformation , matrix decomposition and random walk methods are used to construct object - to - item information network . and some researches focus on the status of users in the resource , such as influence , importance and opinion leaders , by constructing the authoritative network based on specific topics .some modeling methods are also widely concentrated on object - to - object relationships such as user - to - user . based on direct and indirect users relationships ,the similarity between users can be measured by the comments on social media resource and those on user reviews . besides , matrix decomposition can also be used to evaluate the similarity between users in social media combining with lda .complex structure based model supports multiple forms of relationships between multimodal entities .google s knowledge graph and other engine knowledge maps belong to this type of model .in which , there are multi - class of relationships between entities .in addition , the entities are also multi - dimension and multi - scale .thus , they are generally called heterogeneous information network models . although heterogeneous information network models are widely studied in complex systems , such as academic resource search , citations recommendation , user based personalized service , traveling plan search and recommendation , and makeup recommendation . there are seldom researches about equipment - standard system . as we know, equipment - standard system is a multi - layer , multi - dimension and multi - scale complex system . 
for this reason, we present a heterogeneous information network model for equipment - standard system ( hinm - ess ) in this paper .hinm - ess contains three types of nodes that present different granularity of entities in equipment - standard system and six types of entity relationships .a complete hinm - ess can provide strong support for equipment - standard system , such as resource searching , production designing , standard revising and controlling .two real data sets are used in experiments to verify the validity of hinm - ess .the one is a real equipment - standard system data set that contains 2600 standard documents and 24 elements .the other is a mixed test data set that contains different size of data from multiple fields .the experiments show that our methods in modeling process are efficient and accurate .comparing with word mover s distance ( wmd ) , the relational modeling between documents using our method can save 50% time and the performance of precision reduces about 20% .that is , we can establish hinm - ess efficiently , and reflect relationship between entities in equipment - standard system accurately .the formal expression of hinm - ess is described as hinm - ess= .in which , is the network node set with three different granularity of entities , i.e. , , and . represents the standard document ; represents the clause in standard document ; represents the unit such as equipment , module or element . is the network edge set with six different kinds of relationships between entities , where represents the relationships , represents the relationships , represents the relationships , represents the relationships , represents the relationships , and represents the relationships .each edge has its weight to measure the degree of correlation or the similarity between entities . as shown in fig .[ fig1 ] , to construct the hinm - ess , six kinds of entity relationships are confirmed in turn by evaluating the similarities or correlations between entities .first of all , weights of are evaluated to confirm the relationships , as shown in fig . [ fig1](a ) .secondly , a is divided into several items to confirm relationships , as shown in fig .[ fig1](b ) . and then relationships and relationships are confirmed by the same strategy used in the first step , as shown in fig . [ fig1](c ) .thirdly , the relationships can be confirmed since and relationships have been confirmed , as shown in fig .[ fig1](d ) , and relationships can be confirmed in virtue of and , as shown in fig .[ fig1](e ) .finally , hinm - ess can be obtained , as shown in fig . [ fig1](f ) . because these six relationships involve different entities , the method to measure the weights of edges are very different .the details will be described in following subsections .+ the relational modeling is the first and the most important step to establish the hinm - ess .as the contents of are text , the correlation between can be confirmed via text similarity so as to establish the connection between two nodes in hinm - ess .since the contents of and are text as well , only relational modeling is described in details , the and relational modeling is similar with .text analyzing is one of the most popular research topics .there are many researches focus on measuring text similarity , such as lda and word2vec .these models translate text contents to different abstract features to improve the measuring performance . 
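before turning to the concrete similarity measures, the heterogeneous container implied by the formal definition above can be sketched as follows. this is a minimal illustration assuming networkx; the type labels, helper names and example weights are ours, not the paper's.

import networkx as nx

def build_hinm_ess():
    # multigraph: several typed relationships may connect the same pair of entities
    return nx.MultiDiGraph()

def add_entity(g, entity_id, kind):
    # kind: "doc" (standard document), "item" (clause of a document), or "unit" (equipment/module/element)
    g.add_node(entity_id, kind=kind)

def add_relation(g, src, dst, relation, weight):
    # relation: one of the six edge families, e.g. "doc-doc", "doc-item", "item-item",
    # "unit-unit", "doc-unit", "item-unit"; weight stores the similarity or correlation degree
    g.add_edge(src, dst, key=relation, relation=relation, weight=weight)

g = build_hinm_ess()
add_entity(g, "D1", "doc"); add_entity(g, "D2", "doc"); add_entity(g, "U1", "unit")
add_relation(g, "D1", "D2", "doc-doc", weight=0.83)
add_relation(g, "D1", "U1", "doc-unit", weight=0.40)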
in this paper, we choose word mover s distance ( wmd ) method to measure the similarity between .wmd provides accurate similarity measurement by combining the word embedding with the earth mover s distance ( emd ) . in wmd ,word2vec provides embedding matrix for a finite size vocabulary of words .the column , , represents the embedding of the word in dimensional space .therefore , semantic similarity between two words and , that is refer to as the cost associated with ` traveling ' from one word to another , is measured by their euclidean distance in the word2vec embedding space , that is , text documents are represented as normalized bag - of - words ( nbow ) vectors , , if word appears times in the document , the element in is defined as , let and be the nbow representation of two text documents , be a flow matrix denotes ` how much ' of word in travels to word in , the similarity evaluation problem can be defined as the minimum cumulative cost required to move all words from to , and the constraints are provided by the solution to the following linear program , after the similarities between being calculated by wmd , we regard these similarity values as the weights of .although wmd leads to low error rates , the time complexity is as high as . in order to improve the time consumption while maintaining the accuracy ,we modify wmd by introducing simhash strategy .in which , top potentially similar are screened via simhash to reduce the complexity significantly .the main idea of simhash strategy is to reduce the dimension . as a local sensitive hash method , simhash maps the high - dimensional eigenvectors to a signature value containing bits named fingerprint . by comparing the hamming distance between fingerprints, we can estimate whether the are similar or not .therefore , an efficient potentially similar screening strategy based on simhash is presented in this paper . in top potentially similar screening process , simhash is used to map fingerprints for each . the in datasetis mapped to fingerprints with number 0 or 1 , for instance , .based on the fingerprints , the hamming distance is chosen to estimate the similarity between and generate a list of top potentially similar efficiently . as shown in eq .[ eq4 ] , hamming distance is the number of ` 1 ' in xor between documents and . for instance , the hamming distance between 110 and 011 equals 2 since . after estimating the similarity between by hamming distance, we can obtain the top potentially similar documents list by directly using sorting algorithms .but this simple strategy is very time - consuming . even for the best sorting algorithm , its time complexity reaches . aiming at the top , there is no need to sort all hamming distances .therefore , we propose two strategies to improve the efficiency of potentially similar document lists generating process . *( 1 ) the lowliest replace elimination based strategy . * in this strategy , top potentially similar documents are stored in a finite set of elements , which is defined as .the update strategy of is where represents the node which is the farthest from the target node on hamming distance in ; represents a new node which is under judgment to be the potentially similar ; represents the hamming distance between this node and target node . 
* ( 2 ) ordered window filling based strategy .* the lowliest replace elimination based strategy needs to repeatedly update the maximum value in the finite set .it takes too much time to traverse the whole set .therefore , we propose another strategy based on ordered window filling to reduce traversal time further . in this strategy ,top potentially similar documents are stored in an ordered window of elements , which is defined as .so , the elements in can be updated via eq .[ eq6 ] under the condition of , in which , represent a new node which is under judgment to be the potentially similar ; is the size of . via the screen strategies above, we obtain the top potentially similar by removing weightless edges from original relational graph .this process is illustrated by the simple example in fig .[ fig2 ] .relational modeling processes.,title="fig:",width=529 ] + firstly , we have a lot of edges in the original relational graph that is a complete graph , as shown in fig . [ fig2](a ) . by screening the top-3 potentially similar , we remove the weightless edges in fig .[ fig2](a ) , and then obtain the relational graph , as shown in fig .[ fig2](b ) .the edges are directed since two may not be the top similar for each other at the same time .as there are only a small number of edges which link the potentially similar documents left , the time consumption of the similarity evaluation process with wmd method is reduced sharply .then relational graph can be completed efficiently via two stages , i.e. , top potentially similar screening and docs similarity evaluation via wmd method .each contains different numbers of .therefore , the relational modeling can be considered as a decomposition process .the core issue of the relational modeling process is to extract items from standard documents accurately .since in equipment - standard system have typical hierarchical structure , the section numbers in section headers are always composed of integers and symbolic points , ` 2.2 ' for instance .furthermore , the section number will be followed by the section title directly .therefore , all the possible section numbers and titles can be extracted via regular expressions to construct a triplet sequence .in which , is the chapter number that is the first integer of section number ( for example , ` 2 ' is the chapter number when the section number is ` 2.2 ' ) ; is the whole section number ; is the line number of title in the document .for instance , a possible triplet may like this : ( 5 , 5.2.1 , 456 ) .the results obtained via regular expressions still contain some noises that meet the definition of the section title but not the real one , such as data in the table or in the text of the reference data .for this reason , we design following noise - filtering rule to remove those illogical triplets , in which , section number must conform the typesetting format of sections ( when , the logical value of can only be 2.2.1 or 2.3 or 3 ) ; the chapter number must be no less than all the chapter numbers in the previous triplets . in the triplet sequence must be sorted . according to the noise - filtering rule ,most of noises are erased . as a result, we can use the line number in triplets to decompose the and to extract out the accurately . 
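the section-number extraction and the noise-filtering rule just described can be sketched as follows. the regular expression and the notion of a "logical successor" are one plausible reading of the rule (2.2 may be followed by 2.2.1, 2.3 or 3), not the authors' exact implementation.

import re

HEADER = re.compile(r'^\s*(\d+(?:\.\d+)*)\s+\S.*$')   # e.g. "5.2.1 general requirements"

def extract_triplets(lines):
    # return candidate (chapter, section_number, line_number) triplets
    triplets = []
    for lineno, line in enumerate(lines):
        m = HEADER.match(line)
        if m:
            section = m.group(1)
            triplets.append((int(section.split('.')[0]), section, lineno))
    return triplets

def is_logical_successor(prev, cur):
    # true if cur can follow prev in a well-formed outline: one level deeper with ".1",
    # or an increment at some existing level with everything below truncated
    p = [int(x) for x in prev.split('.')]
    c = [int(x) for x in cur.split('.')]
    if c == p + [1]:
        return True
    for k in range(min(len(p), len(c))):
        if c[:k] == p[:k] and len(c) == k + 1 and c[k] == p[k] + 1:
            return True
    return False

def filter_noise(triplets):
    # chapter numbers must never decrease, and each section must logically follow the previous one
    kept = []
    for chapter, section, lineno in triplets:
        if kept and (chapter < kept[-1][0] or not is_logical_successor(kept[-1][1], section)):
            continue
        kept.append((chapter, section, lineno))
    return kept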
at the same time, the relational network is constructed .79 standard documents have been used to test the extracting process .there are 77 documents extracted correct completely .the other two documents have only 1 incorrect item respectively .the accuracy of the extracting process is 97.5% . in hinm - ess ,two relationships , and , are widely used in equipment - standard system application .just relational modeling process is analyzed in this paper , since relational modeling is similar .generally , there are part of have been assigned labels , that is , there are some known relationships in equipment - standard system . hence , these relationships can be used to measure the correlation between and indirectly , and then complete the relational model . for a new that is waiting for assigning label ,the correlation between and the is measured by where is one of s similar ; is the similarity between and its similar which obtained in relational modeling ; presents s similar set ; presents the correlations between s similar and their relative .analogously , for a new , the correlation between and the is measured by combining with s similar and their relative , where is one of s similar ; is the similarity between and its similar ; presents s similar set ; is the correlations between s similar and their relative . the relational modeling process , which is the core of the hinm - ess , is illustrated in fig .figure [ fig3](a ) shows four , five and five known relationships . in original graph, and are waiting for assignment . since and are assigned to , the correlation between and is measured by as shown in fig .[ fig3](b ) .similarly , the correlations and are known , therefore , the correlation between and is measured by , as shown in fig . [ fig3](c ) .relational modeling.,title="fig:",width=529 ] + after all the correlations are evaluated , the weightless edges will be removed according to a threshold .alike soft - classification method , multiple will be assigned to one in relational modeling process and vice versa . 
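a small sketch of the label-propagation step just described: the correlation between a new document and a unit is accumulated from the document's similar neighbours that already carry known doc-unit relations, and a threshold keeps the soft, possibly multiple, assignments. the plain weighted sum and the threshold value are assumptions, since the exact formula was lost in extraction.

def doc_unit_correlation(new_doc, unit, similar_docs, known_relations):
    # similar_docs: {doc_id: similarity to new_doc} from the doc-doc relational model
    # known_relations: {(doc_id, unit_id): correlation weight} for already-labelled documents
    score = 0.0
    for doc_id, sim in similar_docs.items():
        score += sim * known_relations.get((doc_id, unit), 0.0)
    return score

def assign_units(new_doc, similar_docs, known_relations, units, threshold=0.1):
    # soft assignment: keep every unit whose propagated correlation passes the threshold
    scores = {u: doc_unit_correlation(new_doc, u, similar_docs, known_relations) for u in units}
    return {u: s for u, s in scores.items() if s >= threshold}

# usage sketch
similar = {"D7": 0.9, "D3": 0.6}
known = {("D7", "U1"): 0.8, ("D3", "U2"): 0.5}
print(assign_units("Dnew", similar, known, units=["U1", "U2", "U3"]))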
in the relational model ,if the relationship between a and is confirmed , they will be put into the training samples .this iterative process will optimize hinm - ess continually and make model more and more accurate .in this paper , the performance of and relational network modeling methods are tested on two real data ( data1 and data2 ) .data1 includes 2600 standard documents and 24 elements .data2 is a mixed - field text data set which contains different size of data ( from 1 kb to 1 gb ) .the performance of original wmd and simhash+wmd is compared on data1 through three indices .the first index is time improvement ( ) where represents the time consumption of wmd algorithm and represents the time consumption of simhash+wmd method .the second index is accuracy that is used to estimate the proportion of documents simhash mistakenly delete from the 20 most similar text evaluated by wmd , where is the accuracywhen simhash method screening top documents as potentially similar documents ; is the set that contains the 20 most similar documents evaluated by wmd and is that by simhash+wmd .the third index is that considers the time improvement and accuracy at the same time .it is defined as figure [ fig4 ] shows that decreases with top increasing while the accuracy increasing with top increasing .since the achieves maximum at the top-1500 , it is recommended to choose top-1500 to screen potential similar for 50% time saving and 20% precision reducing .+ we also compare the time consumption of two improved screen strategies ( _ lowliest replace elimination based strategy _ and _ ordered window filling based strategy _ ) on data2 . as shown in fig .[ fig5 ] , the _lowliest replace elimination based strategy _ only save 1% time while the _ ordered window filling based strategy _ can save about 7.5% running time in screening process . the index in fig .[ fig5 ] is a time index enhanced by the time consumption of all hamming distances sort strategy , where is the time consumption of _ lowliest replace elimination based strategy _ or _ ordered window filling based strategy _ ; is that of whole sort strategy . on two different strategies in wmd+simhash.,title="fig:",width=377 ]+ finally , we test the hinm - ess s accuracy on relationship between entities using data1 .we test on 8 to verify whether the are linked to the right or not .the precision equals to the ratio of the number of test and the number of correct . generally ,more training samples lead to higher precision .the precision on topic and is higher than that on other topics , since the number of training in these two is about third times larger than that in others , as shown in table [ table1 ] .in addition , the size of training sample sets are larger than that of . hence , as anticipated , the performance of precision on reflecting is much higher than that of ..accuracy of relational model on data1 . [ cols="^,^,^,^,^",options="header " , ]suffering from the complex and redundant equipment - standard system , a heterogeneous information network model for equipment - standard system(hinm - ess ) is presented in this paper to deal with some important issues in the system , such as standard documents searching , standard revising , standard controlling and , production designing .hinm - ess contains three types of nodes that represent three types of entities and six types of entity relationships . 
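the screening stage whose cost/accuracy trade-off is measured above can be sketched as follows: 64-bit simhash fingerprints compared by hamming distance, with a bounded selection of the top-k potentially similar documents before the expensive wmd evaluation. the whitespace tokenisation and unit term weights are simplifications; the paper's version would reuse the document term weights.

import hashlib
import heapq

def simhash(text, bits=64):
    # weighted sign vector over token hashes, collapsed to a bit fingerprint
    v = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(bits):
        if v[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a, b):
    return bin(a ^ b).count("1")

def top_k_candidates(target_fp, corpus_fps, k=1500):
    # corpus_fps: {doc_id: fingerprint}; returns the k ids closest in hamming distance
    return heapq.nsmallest(k, corpus_fps, key=lambda d: hamming(target_fp, corpus_fps[d]))

# usage sketch: screen candidates first, then run wmd only on the survivors
docs = {"D1": "bolt torque test procedure", "D2": "torque test for bolts", "D3": "paint colour table"}
fps = {d: simhash(t) for d, t in docs.items()}
print(top_k_candidates(simhash("bolt torque testing"), fps, k=2))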
, , , , , and relational models are discussed and the detail modeling strategies are presented in this paper .experiments on two real data sets show that the modeling strategies are time saving and the accuracy is also rather good .moreover , experiments show that hinm - ess model can reflect real relationships accurately when training sample is enough .overall , hinm - ess is an efficient and accurate model .it will provide a strong and firm support for applications in equipment - standard system . we will consider the iterative strategy in the modeling process to improve the accuracy of relational models further in the future study .this work is partially supported by the national natural science foundation of china under grant nos .61433014 and 61673085 , and by the fundamental research funds for the central universities under grant no . zygx2014z00200 s. boccaletti , v. latora , y. moreno , m. chavez , d.u .hwang , complex networks : structure and dynamics , phys .424 ( 2006 ) 175 - 308 .costa , f.a .rodrigues , g. travieso , p.r .villas boas , characterization of complex networks : a survey of measurements , adv .56 ( 2007 ) 167 - 242 .dorogovtsev , a.v .goltsev , j.f.f .mendes , critical phenomena incomplex networks .mod . phys . 80 ( 2008 ) 1275 - 1335 . m. barthlemy , spatial networks , phys499 ( 2011 ) 1 - 101 .cui , z.k .zhang , m. tang , p.m. hui , y. fu , emergence of scale - free close - knit friendship structure in online social networks , plos one 7 ( 2012 ) e50702 .l. l , y .- c .zhang , c.h .yeung , t. zhou , leaders in social networks , the delicious case , plos one 6 ( 2011 ) e21202 .t. zhou , m. medo , g. cimini , z .- k .zhang , y .- c .zhang , emergence of scalefreeleadership structure in social recommender systems , plos one 6 ( 2011 ) e20648 .ahnert , t.m.a .fink , clustering signatures classify directed networks , phys .e 78 ( 2008 ) 036112 .l. l , t. zhou , link prediction in complex networks : a survey , physica a 390 ( 2011 ) 1150 - 1170 .s. maslov , k. sneppen , a. zaliznyak , detection of topological patterns in complex networks : correlation profile of the internet , physica a 333 ( 2004 ) 529 - 540 .h. zhang , j. yuan , x. gao , z. chen , boosting cross - media retrieval via visual - auditory feature analysis and relevance feedback , in : proceedings of the 22nd acm international conference on multimedia , acm , 2014 , pp .953 - 956 .y. gao , m. wang , z.j .zha , j. shen , x. li , x. wu , visual - textual joint relevance learning for tag - based social image search , ieee t. image process . 22( 2013 ) 363 - 76 . l. zhu , a. galstyan , j. cheng , k. lerman , tripartite graph clustering for dynamic sentiment analysis on social media , in : proceedings of the 2014 acm sigmod international conference on management of data , acm , 2014 , pp .1531 - 1542 .m. clements , a.p.d .vries , m.j.t .reinders , the influence of personalization on tag query length in social media search , inform . process . manag .46 ( 2010 ) 403 - 412 .x. yu , x. ren , y. sun , q. gu , b. sturt , u. khandelwal , b. norick , j. han , personalized entity recommendation : a heterogeneous information network approach , in : proceedings of the 7th acm international conference on web search and data mining , acm , 2014 , pp .283 - 292 .j. vosecky , k.w.t .leung , w. ng , collaborative personalized twitter search with topic - language models , in : proceedings of the 37th international acm sigir conference on research & development in information retrieval , acm , 2014 , pp .y. li , c. wu , x. 
wang , p. luo , a network - based and multi - parameter model for finding influential authors , journal of informetrics 8 ( 2014 ) 791 - 799 .a. anderson , d. huttenlocher , j. kleinberg , j. leskovec , effects of user similarity in social media , in : proceedings of the fifth acm international conference on web search and data mining , acm , 2012 , pp .703 - 712 .h. ma , on measuring social friend interest similarities in recommender systems , in : proceedings of the 37th international acm sigir conference on research & development in information retrieval , acm , 2014 , pp .465 - 474 .z. wang , j. liao , q. cao , h. qi , z. wang , friendbook : a semantic - based friend recommendation system for social networks , ieee t. mobile comput . 14 ( 2015 ) 538 - 551 .vang , ethics of google s knowledge graph : some considerations , j. inf .ethics soc . 11 ( 2013 )245 - 260 .x. ren , j. liu , x. yu , u. khandelwal , q. gu , l. wang , j. han , cluscite : effective citation recommendation by information network - based clustering , in : proceedings of the 20th acm sigkdd international conference on knowledge discovery and data mining , acm , 2014 , pp .821 - 830 .a. kao , w. ferng , s. poteet , l. quach , r. tjoelker , talison - tensor analysis of social media data , ieee international conference on intelligence and security informatics , ieee , 2013 , pp .137 - 142 . c. li , a. sun , fine - grained location extraction from tweets with temporal awareness , in : proceedings of the 37th international acm sigir conference on research & development in information retrieval , acm , 2014 , pp .a. j. cheng , y. y. chen , y.t .huang , w.h .hsu , h.y.m .liao , personalized travel recommendation by mining people attributes from community - contributed photos , in : proceedings of the 19th acm international conference on multimedia , acm , 2011 , pp .l. liu , j. xing , s. liu , h. xu , x. zhou , s. yan , wow !you are so beautiful today ! , acm t. multim. comput . 11( 2014 ) 20 . c. sadowski , g. levin , simhash : hash - based similarity detection , http://www.googlecode.com/sun/trunk/paper/sim hash with bib.pdf[http://www.googlecode.com/sun/trunk/paper/sim hash with bib.pdf ] , 2007
|
entity information network is used to describe structural relationships between entities . taking advantage of its extension and heterogeneity , entity information network is more and more widely applied to relationship modeling . recent years , lots of researches about entity information network modeling have been proposed , while seldom of them concentrate on equipment - standard system with properties of multi - layer , multi - dimension and multi - scale . in order to efficiently deal with some complex issues in equipment - standard system such as standard revising , standard controlling , and production designing , a heterogeneous information network model for equipment - standard system is proposed in this paper . three types of entities and six types of relationships are considered in the proposed model . correspondingly , several different similarity - measuring methods are used in the modeling process . the experiments show that the heterogeneous information network model established in this paper can reflect relationships between entities accurately . meanwhile , the modeling process has a good performance on time consumption . complex system , heterogeneous information network , equipment - standard system , entity relationships model
|
congestion and effective resource allocation are traditional problems that game theory has been trying to solve. as populations are increasingly concentrated in big cities, the congestion problem becomes more critical. one practical way to mitigate congestion is the sharing of unused resources. for example, even during heavy traffic congestion, many vehicles have empty seats. similarly, many empty vehicles occupy limited parking spaces in urban areas while people struggle to find an empty taxi. beyond the transportation field, we can find examples such as unused buildings, empty restaurants, and idle workers. however, the sharing of unused resources is not realized unless the demands of multiple users match. empty vehicle seats are shared among passengers only if their routes have a common part. a vehicle is shared only if the destination of one driver is equal to the origin of another driver. if people behave selfishly, the probability of matching the demand for sharing becomes small, leading to the problem of demand coordination and incentive design. a feature particular to the sharing of vehicles is that the positive and negative externalities of one player's route change are propagated, via changes in vehicle supply, to other players who do not choose the relevant routes. consider a chain of vehicle sharing in which player 1 first drives a vehicle from a to b, then player 2 drives the same vehicle from b to c, and player 3 drives the vehicle from c to d. if player 1 stops using the vehicle, players 2 and even 3 cannot use it, even though player 3 does not share any part of his route with player 1. because classical congestion games only focus on externalities among players who choose the same routes, it is necessary to consider alternative games that model externalities via vehicle supply in order to analyze coordination in vehicle sharing. this study formulates ride sharing games that model the positive and negative externalities of choosing vehicles. we also consider how to coordinate players to improve the efficiency of sharing, i.e., maximizing the operation rate of otherwise unused vehicles by giving players an incentive. we mainly assume applications in transportation areas, such as carpooling and ride sharing. since rosenthal introduced the congestion game, it has been applied to problems of congestion externalities in several areas such as transportation and communication networks. in this game, players choose a combination of resources and the players' payoffs depend on the congestion level, i.e.
, the number of players using the same resources .the congestion game is a subclass of potential games , which feature the _ finite improvement property _ ( fip ) .this property guarantees that if each player updates his strategy in response to other players by turns , it will reduce his private cost and improve the common potential function , which eventually reaches a local minimum , the _ pure nash equilibrium _ ( pne ) , where each player has a deterministic strategy .because of this property , potential and congestion games have been well studied and have wide applications .the negative externalities in congestion games are also well studied .if players choose their routes selfishly in road networks , it causes loss of social welfare compared to socially optimal routing .the ratio of social cost in a selfish choice to the one in the social optimal is called the _ price of anarchy _ ( poa ) and its bounds are well known , especially for affine cost functions .studies on poa bound assume that a game has a pne .because of the property of always having a pne , most poa studies have focused on congestion games .a major difference between vehicle sharing and traffic routing problems is vehicle supply . while players drive their own vehicles in traditional traffic routing problems , players must find shared vehicles before riding in sharing problems . in the case of a chain of vehicle sharing from playera via player b to player c , the choice of player a has externalities not only on player b but those also propagate to player c , who does not share any common part of route with player a. this kind of complicated externality is not considered in traditional routing problems and congestion games . according to a review of sharing studies in transportation areas , there are few game theory studies on externalities and coordination of players moves in vehicle sharing problems .traditional studies on sharing include the optimization of vehicle routes for picking up all passengers ( the _ dial - a - ride problem _ ) , problems of splitting passengers fares according to their riding distances , optimization of locations of carpool stations , and problems of relocation of carpool vehicles among stations .those studies mainly focus on optimization problems under given moves ( origins and destinations ) of passengers and fixed drivers of vehicles , and therefore , do not analyze the poa when passengers strategically choose their moves in response to the moves of other passengers and vehicles .the review states that there are no studies on coordination of players moves to improve the efficiency of sharing or reduce the poa .there are several studies that examine the coordination of players in traditional traffic routing and congestion games .one promising technique for coordination is the mechanism design , which mainly provides monetary incentives for players to change their behavior in a coordinated manner .christodoulou studied coordination mechanism in congestion games .another relatively new technique for coordination is signaling , in which a mediator provides information for players to control their beliefs on uncertain environments , and accordingly , the expected payoff and resulting choices when there is information asymmetry between the mediator and players .most recent studies are based on the _ revelation principle _ , which proved the existence of incentive compatible recommendations of choices equivalent to the raw information inducing the same choices .rogers applied differential 
privacy techniques originally from database security to traffic routing problems for mitigating congestions by sending noisy incentive compatible recommendations of routes .vasserman also applied recommendations to traffic routing and analyzed how the poa improved . ....g:coordinatecgpoa q:fippne a : cgformulatepnet1,2,3 s : sectioncoordinate poa .... .... poapoapoa .... this study proposes ride sharing games as a formulation of positive and negative externalities caused by changes in the supply of shared vehicles .the study s objective is to understand the poa and its improvement via a coordination technique in ride sharing games .a critical question is whether ride sharing games have a pne since the poa bound assumes it .our result shows a sufficient condition for a ride sharing game to have a fip and a pne similar to potential games .this is the first step to analyze poa bound and its improvement by coordination in ride sharing games .we also show an example of coordinating players in ride sharing games using signaling and evaluate the improvement in the poa .section [ sec : model ] provides a formulation of ride sharing games and other setups .section [ sec : theorem ] presents the main results on a condition of ride sharing games to have pne and its proof .section [ sec : exam ] presents graphic examples of ride sharing games and coordination of players by signaling . .... .... .... a _ ride sharing game _ is defined as a tuple , where * is a finite set of players .a player represents an user of shared vehicles . represents all players except for .* is a finite set of vehicles .each vehicle has a common seating capacity .* is a directed graph that has a finite set of nodes and a finite set of edges . is a simple graph but each node has a loop to itself .a node represents a place and an edge represents a road .players and vehicles move on .* is a finite set of time that partitions the day .each player and vehicle is located on a node at time and finishes a move on an edge during period .* is a set of all paths with length on .a path represents a round trip of a player on a day. we denote if is a induced path of . is a complement of which is also a induced path of including all edges not in . is a common path if and only if and .* is a set of strategies of player . is a set of strategy profiles . is a round trip of player and is a strategy profile . represents a strategy profile of all players except for .a strategy update is denoted as when a original strategy profile is and player update a strategy from to .* is a map that represents the allocation of player to vehicle during each period depending on strategy profile . in the case where no vehicle is allocated to player , .each vehicle moves together with allocated player on the same edge where the player moves . represents the number of players riding on vehicle during period when the strategy profile is .* is a cost function of a player riding on vehicle on edge . is a set of cost functions of all edges .the total cost of player on a day is . in this study , we consider one - shot games , where players simultaneously choose whole round trips on the day .we assume that cost function is monotone decreasing for when and monotone increasing when . ........ 
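a minimal container for the game tuple defined above is sketched below. the representation (round trips as edge sequences, an externally supplied allocation map) and all field names are illustrative assumptions rather than the paper's notation.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Edge = Tuple[int, int]          # (node_from, node_to); a loop (v, v) means "stay at the node"
Path = List[Edge]               # round trip with one edge per period

@dataclass
class RideSharingGame:
    players: List[int]
    vehicles: List[int]
    capacity: int                                   # common seating capacity
    edges: List[Edge]                               # directed graph, loops included
    periods: int                                    # number of time partitions in a day
    strategies: Dict[int, List[Path]]               # admissible round trips per player
    edge_cost: Callable[[Edge, int, bool], float]   # cost(edge, riders on the vehicle, has_vehicle)

    def player_costs(self, profile: Dict[int, Path], allocation) -> Dict[int, float]:
        # total daily cost per player for a strategy profile; allocation(profile, player, t)
        # is assumed to return (vehicle or None, number of riders on that vehicle)
        costs = {i: 0.0 for i in self.players}
        for i, path in profile.items():
            for t, edge in enumerate(path):
                vehicle, riders = allocation(profile, i, t)
                costs[i] += self.edge_cost(edge, riders, vehicle is not None)
        return costs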
here we prepare the basic concepts of potential games used in the rest of the paper .a _ pure nash equilibrium _ ( pne ) of game is defined as a deterministic strategy profile if and only if no player can reduce his cost by updating his deterministic strategy from .while not all games have a pne , some games always have a pne .in particular , the following property guarantees the existence of a pne . a game has a finite improvement property ( fip ) if a strategy profile of the game always converges to a pure nash equilibrium by updating each player s strategy , by turns , finite times .[ def : fip ] when a game has an fip , a pne can be found by players updating their strategies one player at a step . therefore , it is unnecessary to search the whole set of strategy profiles and the computational cost to find a pne is reduced .a potential game is game which has a potential function defined as follows . is a _ ( ordinal ) potential function _ and is a _ ( ordinal ) potential game _ if [ def : pg ] the following theorem guarantees that potential games have an fip and then a pne .if a potential game has a potential function with a finite amount of values , the game has an fip .[ thm : pg ] starting from an arbitrary strategy profile , each player updates their strategy by turns to minimize his cost . from the definition, it also reduces the value of the potential function until it reaches a local minimum . at that point, no player can reduce his cost and it is a pne . from definition [ def : fip ] , it also has an fip . in applications such as transportation and communication networks ,problems are often formulated as a minimization of a social cost , which is the total cost of all players .normally , a social cost in a pne is not the same as in the optimal .the ratio between the cost of a pne and an optimal cost is called the _ price of anarchy _ ( poa ) and is defined as follows .the price of anarchy of game is where is a social cost of strategy profile , is pne and is the social optimal .[ def : poa ] since the poa assume a pne , former studies on the poa have focused on games with a pne , such as congestion games . .... ....we consider cases where players have incomplete information on vehicle allocations .bayesian ride sharing game _ is an extension of a ride sharing game and defined as where * is a set of possible values of an exogenous variable , which affects the allocation of vehicles .* is the allocation of vehicles depending on .similarly , is the number of players on vehicle depending on and is the cost of player depending on . * ] .a pure bayesian nash equilibruim ( pbne ) of a bayesian game is a similar concept to pne of a deterministic game that is defined as a deterministic strategy profile if and only if no player can reduce his expected cost by updating his deterministic strategy from . .... .... in this study , we use the _ bayes correlated equilibrium _ ( bce) as a signaling technique of a mediator to coordinate players .a bce is a conditional distribution of a random recommendation , which is _ incentive compatible _ ( ic ) as defined below .a recommendation policy is incentive compatible if [ def : ic ] given the cost function of the mediator , the problem of the mediator is to design an optimal ic recommendation that makes players coordinate to minimize their cost .the problem is expressed as follows . 
\begin{equation} \left\{ \begin{array}{l} \min_{\sigma}\ \sum_{\hat{\vect{a}},x} p(x)\,\sigma(\hat{\vect{a}}|x)\,c_{sys}(\hat{\vect{a}}|x) \\ \mathrm{s.t.}\ \sum_{\hat{\vect{a}}_{-i},x} p_{i}(x)\,\sigma(\hat{\vect{a}}|x)\,c_{i}(\hat{a}_{i},\hat{\vect{a}}_{-i}|x) \leq \sum_{\hat{\vect{a}}_{-i},x} p_{i}(x)\,\sigma(\hat{\vect{a}}|x)\,c_{i}(a_{i},\hat{\vect{a}}_{-i}|x),\ \forall i\ \forall \hat{a}_{i} \end{array} \right. \label{eq:problem} \end{equation} here, we discuss when ride sharing games have a pne in order to evaluate the poa of a game and its improvement by coordination. we start with the following negative result in the most general case. there exist ride sharing games that do not have an fip. [ thm : nonpg ] the example in section [ sec : nonpg ] shows a case in which the strategy updates of players are caught in an infinite loop, which will not converge to any pne. if all ride sharing games fell into this case, it would be hard to apply the theory of the poa. however, we found cases where ride sharing games have an fip. intuitively, cost functions become monotone decreasing when and then ride sharing games have a structure of _increasing returns_ and are _locked in_ to a pne. before proceeding, we introduce several notions. let and be the number of vehicles and players on edge during period , respectively. the change in by a strategy update is defined as follows. a strategy update is no-vehicle-loss if . [ def : nondecu ] an allocation map of a ride sharing game can be divided into _path allocation_ and _seat allocation_. path allocation determines the paths of vehicles. once an edge on which a vehicle moves has been fixed, seat allocation determines the allocation of players to vehicles on the edge. the following path allocation assumes that the more demand there is on an edge, the more vehicles are allocated to it. a linear path allocation determines an allocation of out of vehicles on a node to outgoing edge on which players move at so that where is a constant that keeps . the remaining vehicles are allocated to in the order of . [ def : linpath ] is an allocated path if at least one vehicle is allocated on all edges in . [ def : allocp ] seat allocation is a simple version of the bin packing problem. if players are willing to share a vehicle to reduce their costs, it is natural to assume the first-fit algorithm as follows. a seat allocation is first-fit if players are allocated to the vehicle with the smallest on the edge until it becomes full. [ def : firstfit ] this definition immediately yields the following lemma. if and is the first-fit seat allocation, all players ride in the same vehicle if . [ thm : acar ] an allocation is first-fit linear if it comprises a linear path allocation and a first-fit seat allocation. [ def : maxnondec ] the following lemma states that, in some ride sharing games, copying the strategy of another player always results in a cost less than . if * all players have a common set of actions, * , * is the first-fit seat allocation, and * strategy update is no-vehicle-loss. [ thm : rspg ] let be a current strategy profile. now consider the strategy update , which is always possible because of h1. since all players ride on a vehicle on the edge according to h2, h3 and lemma [ thm : acar ], the number of players sharing a car now depends only on and as follows. since player joins in and all other players' strategies remain the same, we have . therefore, from h4 and eq. [ eq : se ], we get . from h2, cost functions are monotone decreasing such that . then , and this completes the proof of lemma [ thm : rspg ].
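the first-fit seat allocation defined above can be sketched as follows. where the paper's ordering symbol was lost in extraction, the sketch assumes players fill the vehicle with the smallest index on the edge until the seating capacity is reached.

def first_fit_seats(players_on_edge, vehicles_on_edge, capacity):
    # return {player: vehicle or None}; unassigned players travel without a vehicle
    assignment = {}
    load = {v: 0 for v in vehicles_on_edge}
    for p in players_on_edge:
        seat = None
        for v in sorted(vehicles_on_edge):          # smallest vehicle index first (assumption)
            if load[v] < capacity:
                seat = v
                load[v] += 1
                break
        assignment[p] = seat
    return assignment

# usage sketch: 3 players, 1 vehicle with 4 seats -> everyone shares the same vehicle
print(first_fit_seats(players_on_edge=[1, 2, 3], vehicles_on_edge=[10], capacity=4))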
is a riding path of player if and is an allocated path .[ def : ridingp ] is a necessary path if [ def : necp ] when is an allocated path . is a sufficient path if is a riding path of player and [ def : sufp ] a set of paths is disjoint if [ def : disjp ] let be a necessary path and is a riding path of player . is a driver if . is a passenger if .[ def : driver ] once a player update his strategy and becomes a driver , he can not improve his cost by updating his strategy again if * all players have a common set of actions , * , * , * is the first - fit seat linear allocation , and * has a disjoint set of necessary and sufficient paths .[ thm : driver ] let be a driver who has a riding path including a necessary and sufficient path . from h2 , h5 and definitions [ def : sufp ] and [ def : driver ] , all other players update their strategies to be passengers of .then the number of players on increases compaired to the one when lastly updated his strategy . since is disjoint , the number of players on the other decreases .then the cost for the part stays minimal since cost functions are monotone decreasing .meanwhile , is a only player who is on since all other players are passengers of .then the cost for the part is independent of and stay minimal since when updated strategy last time .therefore , the cost of whole path is minimal and then player can not update his strategy .the following theorem tells us there is a class of ride sharing games that has an fip .a ride sharing game has an fip if * all players have a common set of actions , * , * , * is the first - fit seat linear allocation , and * has a disjoint set of necessary and sufficient paths .[ thm : rsone ] let be a function defined as let be the current strategy profile , let be a player who has the current minimum cost among all players , and let his current profile be .then , becomes if has no riding path , no player are allocated a vehicle because of definition [ def : necp ] . in this case , player can update his strategy to , which has a necessary allocated path . from definition [ def : necp ] , updates the minimum cost and accordingly .if has a riding path , it must include a necessary path because of definition [ def : necp ] .if is not a driver , he can copy the strategy of as without reducing for all and from h4 the strategy update is no - vehicle - loss . then from lemma[ thm : rspg ] , it updates the minimum cost and accordingly .if has a riding path and is a driver , can not update his strategy according to lemma [ thm : driver ] . accordingly , if a player can update his strategy and reduce cost , it also reduces or a player can not update his strategy .then , satisfies definition [ def : pg ] and is a potential function .consequently , from theorem [ thm : pg ] , game has an fip .this complete the proof of theorem [ thm : rsone ] .although theorem [ thm : rsone ] has several assumptions and covers only a limited class of ride sharing games , the following theorem indicates a possibility of relaxation of the assumptions .the assumptions in theorem [ thm : rsone ] are not necessary conditions .[ thm : rshope ] the example in section [ sec : pg2 ] shows the case where a game has an fip even though it does not satisfy all the assumptions in theorem [ thm : rsone ] ..... .... * . * is a complete graph but each node has a loop edge connected to itself to represent staying of players . 
*initial location of players is node 1 and that of vehicles is node 2 .* must include nodes 3 and 4 for all players .* is the first - fit linear allocation . here, we assume and and then does not have an fip .figure [ fig : nonpg0 ] shows the initial state of this game .the numbers represent nodes ; and represent the players , and represents a vehicle .figures [ fig : nonpg1 ] and [ fig : nonpg2 ] show how strategy updates make a loop and the fip is broken . in figure[ fig : nonpg1 ] , there are two drivers ( and ) and is the player with the minimum cost . however , in figure [ fig : nonpg2 ] , updates his strategy to quit being a driver and become a passenger to reduce his cost , and loses a vehicle and his cost increases . in this case , so that player can not reduce cost by copying another player s strategy according to lemma [ thm : rspg ] .then , must choose the other vehicle to reduce his cost again as in figure [ fig : nonpg3 ] .this negative externality makes an infinite loop of this driver switching behavior and the game loses its fip . here , we assume and and that satisfies all the assumptions in theorem [ thm : rsone ] and has an fip . in this case, the game immediately converges into a pne as in figure [ fig : pg ] .the best update of player is to pickup the vehicle and all other players because the vehicle has enough capacity . here, we consider another case where and and the initial profile is the same as that in figure [ fig : nonpg1 ] . in this case, updates the same strategy as that in figure [ fig : nonpg2 ] and increases the cost of player .however , in this case , does not have to pick up the other vehicle but can be a passenger as in figure [ fig : pg1 ] , and this is the same pne as that in figure [ fig : pg ] .while this game does not satisfy h2 in theorem [ thm : rsone ] , it has an fip .this means that the assumptions in theorem [ thm : rsone ] are not necessary conditions .* . * and initial locations are shown in figure [ fig : brsg ] .all nodes have loop edges to themselves .* must include node 3 for all players .* is the first - fit linear allocation .* there is an uncertainty regarding the existence of the vehicle . means and means . *all players have a common prior . for each , this game satisfies the assumptions in theorem [ thm : rsone ] .there are only two distinct options for each player that . is a trip that visits nodes in this order . on the other hand , .all edges except for loop edges have the same cost function .if a player does not use the vehicle , the cost is 8 .if a player drives alone , the cost is 6 .if two players share the vehicle , the cost is 1 .the cost of loop edges is zero .the cost matrices of this game are shown in tables [ tbl : cost0 ] and [ tbl : cost1 ] .now we consider a system to coordinate players to share the unused vehicle by the bce , as described in section [ sec : signal ] .a system cost can be denoted as .then , the problem of the system is denoted as eq.[eq : problem ] , which is the search for an optimal recommendation policy as in table [ tbl : sigma ] . the problem becomes a linear programming and table [ tbl : sigmaopt ] presents a solution .this incentive compatible recommendation induces the coordination of players as a bce , where =27.9 ] . since =26 $ ] in social optimum, the poa is improved from 1.23 of pbne to 1.07 of bce .
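the mediator's linear program of eq. [eq:problem], specialised to a two-player, two-action, two-state example like the one above, can be solved with an off-the-shelf lp solver. the sketch below uses scipy.optimize.linprog; the prior and the vehicle-absent cost column are placeholder numbers (the paper's tables [tbl:cost0] and [tbl:cost1] are not reproduced here), while the vehicle-present costs follow the values quoted in the text (8 for not using the vehicle, 6 for driving alone, 1 for sharing).

import itertools
import numpy as np
from scipy.optimize import linprog

players, actions, states = [0, 1], [0, 1], [0, 1]   # action 0: vehicle trip, 1: direct; state 1: vehicle exists
prior = {0: 0.3, 1: 0.7}                            # placeholder common prior

def cost(i, a, x):
    if x == 0:                                      # vehicle absent: placeholder column
        return 10.0 if a[i] == 0 else 8.0
    if a == (0, 0):                                 # both share the vehicle
        return 1.0
    return 6.0 if a[i] == 0 else 8.0                # drive alone vs. skip the vehicle

joint = list(itertools.product(actions, actions))
var = {(x, a): k for k, (x, a) in enumerate(itertools.product(states, joint))}
n = len(var)

c_obj = np.zeros(n)                                 # expected system (total) cost
for (x, a), k in var.items():
    c_obj[k] = prior[x] * sum(cost(i, a, x) for i in players)

A_ub, b_ub = [], []                                 # obedience (incentive compatibility) constraints
for i in players:
    for rec in actions:
        for dev in actions:
            if dev == rec:
                continue
            row = np.zeros(n)
            for (x, a), k in var.items():
                if a[i] == rec:
                    a_dev = list(a); a_dev[i] = dev
                    row[k] = prior[x] * (cost(i, a, x) - cost(i, tuple(a_dev), x))
            A_ub.append(row); b_ub.append(0.0)

A_eq = np.zeros((len(states), n))                   # sigma(.|x) is a probability distribution for each state
for (x, a), k in var.items():
    A_eq[x, k] = 1.0
b_eq = np.ones(len(states))

res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n)
print("expected system cost under the optimal incentive compatible recommendation:", res.fun)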
|
in this study, we formulate the positive and negative externalities caused by changes in the supply of shared vehicles as ride sharing games. the study aims to understand the price of anarchy (poa) and its improvement via a coordination technique in ride sharing games. a critical question is whether ride sharing games exhibit a pure nash equilibrium (pne), since the poa bound assumes it. our result shows a sufficient condition for a ride sharing game to have a finite improvement property and a pne, similar to potential games. this is the first step toward analyzing the poa bound and its improvement by coordination in ride sharing games. we also show an example of coordinating players in ride sharing games using signaling and evaluate the improvement in the poa.
|
predictions of the length of the solar cycle and date of solar maximum are important for planning space missions and satellite orbits .besides direct electromagnetic , particle , and mass effects , the sun cyclically influences the terrestrial ionospheric structure and interplanetary structure . a number of empirical or semi - empirical methods for estimating solar cycle progression exist .the earliest used method relies upon the sunspot number , following the discovery by of the 11-year periodicity in sunspot activity .the " geomagnetic precursor methods , relying upon measurements of changes in the earth s magnetic field , determine correlations between sunspot number at solar maximum and the geomagnetic index at the preceding minimum ( e.g. , ) . additionally , the solar radio emission at 10.7 cm ( f10.7 ) is a consistent measurement that has been recorded daily since 1947 and is also found to follow the solar activity cycle .combinations of these techniques have been used to predict the intensity and date of solar maximum of the current solar cycle .the solar cycle 24 prediction panel , led by noaa , examined several techniques and predicted a maximum in may 2013 that would be weak compared to recent solar cycles .similarly , recent work presented in predicts solar cycle 24 maximum f10.7 of no stronger than average and likely weaker than recent solar cycles . in this letter , we present a novel approach for determining the solar cycle peak and duration . the solar x - ray background , like other tracers such as the sunspot number and solar radio emission , rises during active times and declines in quiet times . through an analysis of the x - ray data from the past few solar cycles ,we predict the maximum x - ray background level , date of solar maximum , and length of solar cycle 24 . in section 2 ,we describe our analysis . in section 3, we compare the x - ray background results to the monthly sunspot number .section 4 includes discussion of our results .to make our solar cycle predictions , we analyzed goes x - ray observations obtained from noaa s ngdc .we determined the 1 - 8 ( corresponding to ) background levels using 1-minute data from 1986 through may 15 , 2014 .the data were obtained from goes-6 , -7 , and -8 ( solar cycle 22 ) , goes-8 and -10 ( solar cycle 23 ) , and goes-14 and -15 ( 2009 present ) .the x - ray background was computed as the smoothed minimum flux in a 24-hr time period preceding each 1-minute goes observation . in detail , we use the technique of , which includes the following steps : ( 1 ) compute the hourly median with a sliding 1-hour window , ( 2 ) determine the instantaneous background as the minimum of these hour medians in the previous 24 hours , and ( 3 ) smooth the instantaneous background by the previous 2 hours .the background was computed for both the 1 - 4 and 1 - 8 goes observations .the harder x - ray emission shows no discernible solar cycle trends , when compared with the soft x - ray emission , which is the focus of this paper . in order to determine the solar maximum and length of solar cycle for cycles 22 - 24, we fit a simple gaussian to the x - ray background of each solar cycle .we chose a gaussian for its simplicity in requiring only three free parameters and for its ability to reproduce the shape of the data over a solar cycle . 
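before turning to the fits, the three-step background estimate described earlier in this section can be sketched with pandas as follows; the window lengths follow the text, while the smoothing statistic of the final step (a rolling mean) is an assumption.

import pandas as pd

def xray_background(flux_1min: pd.Series) -> pd.Series:
    # (1) sliding one-hour median of the 1-minute fluxes
    hourly_median = flux_1min.rolling("1h").median()
    # (2) instantaneous background: minimum of the hourly medians over the previous 24 hours
    instant_bkg = hourly_median.rolling("24h").min()
    # (3) smooth the instantaneous background over the previous 2 hours
    return instant_bkg.rolling("2h").mean()

# usage sketch with synthetic data
idx = pd.date_range("2014-01-01", periods=3 * 24 * 60, freq="1min")
flux = pd.Series(1e-6, index=idx)          # quiet level of 1e-6 W/m^2
flux.iloc[2000:2100] = 5e-5                # a short flare should not raise the background
print(xray_background(flux).iloc[-1])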
to fit the data , we converted the date and time of the observation into decimal years from the start of the solar cycle ( scy ) .we identified solar cycle 22 as beginning in august 1986 and ending by may 1996 ; solar cycle 23 as beginning in may 1996 and ending by december 2008 ; and solar cycle 24 as beginning in december 2008 .we then fit a gaussian of the form : to the x - ray background . in the equation , f is the logarithm of the x - ray background flux in w m , f is the logarithm of the x - ray background flux at solar maximum in w m , scy is the solar cycle year in years , solar max is the fitted solar maximum value in years from the start of the solar cycle , and is the half - width of the solar cycle . in the fitting process , we filtered out any data points with background levels below .such measurements are below the goes 1 - 8 threshold of .the levenberg - marquardt algorithm was used to find the best - fit parameters f , solar max , and with the scipy optimization library in python .we determined the effect of the choice of bin size on the solar cycle parameters by computing best - fit values and statistics for bin sizes of 1 month , 2 weeks , 1 week , 2.5 days , and 1 day .the best - fit parameters , for each solar cycle examined , and for each binning level are listed in table [ table - binning ] .we find that the peak background flux is the most stable parameter , with very little variation in this parameter regardless of bin size . for solar cycles 22 and 23the solar maximum calculation is also stable , but the duration of the cycle varies by 2 months for cycle 22 and 6 months for cycle 23 .we tested goodness of fit with the chi - squared statistic , defined as , where std is the standard deviation of the measurements .the ideal case is where the reduced statistic , divided by the degrees of freedom ( the number of data points fitted minus the number of free parameters fit by the model ) , is closest to one .for the smaller bin sizes , cases where the reduced value is much greater than one are labeled as _oversampled_. for the largest bin sizes , the standard deviation is large , causing reduced .the optimized reduced values in table [ table - binning ] correspond to the 1-week binning .the best - fit parameters from the 1-week bin size are shown in table [ table - bestfit ] .the median 1-week background and best - fit gaussians are shown for solar cycles 22 , 23 , and 24 in figure [ fig - gaussian ] . ) for solar cycles 22 , 23 , and 24 .the x - ray background flux varies within the solar cycle , with higher values by a factor of 100 from solar minimum to solar maximum.,width=384 ] traditional measures of the solar cycle such as sunspot numbers show a double - peak due to the solar activity in the northern and southern hemispheres ( e.g. , ) .similarly , the x - ray observations also show the double peak profile .however , our choice of binning size affects whether the double peaked structure is blurred or distinct . for this reason we chose to fit only a single gaussian to derive the solar maximum and duration , but determined the peaks from examination of the 1-week binned data . in table[ table - bestfit ] , peak 1 corresponds to the peak in the x - ray background occurring before the fitted solar maximum and peak 2 is the peak following the solar maximum .since the current solar cycle 24 is incomplete , the resulting fewer measurements lead to more variability in the fitted solar maximum and duration parameters depending on the chosen bin size . 
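the fit itself can be sketched with scipy's levenberg-marquardt optimiser via curve_fit. the exact functional form of the gaussian was lost in extraction; the form below, a gaussian in the base-10 logarithm of the weekly-binned background flux with the three quoted free parameters, is an assumption consistent with the parameter descriptions above.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(scy, f_max, solar_max, sigma):
    # scy: years since cycle start; returns the modelled log10 background flux
    return f_max * np.exp(-((scy - solar_max) ** 2) / (2.0 * sigma ** 2))

def fit_cycle(scy, log_flux):
    # filter points below the GOES 1-8 A threshold of 1e-9 W/m^2, as in the text
    keep = log_flux > np.log10(1e-9)
    p0 = [np.max(log_flux[keep]), np.median(scy[keep]), 2.0]   # rough starting guess
    popt, pcov = curve_fit(gaussian, scy[keep], log_flux[keep], p0=p0, method="lm")
    return popt, np.sqrt(np.diag(pcov))

# usage sketch with synthetic data
scy = np.linspace(0, 11, 500)
log_flux = gaussian(scy, -5.8, 4.5, 2.2) + np.random.normal(0, 0.05, scy.size)
print(fit_cycle(scy, log_flux)[0])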
in all cases ( table 2 ), however , we find that we have reached or passed the solar maximum .the solar cycle 24 is likely to end around 2020 , with a maximum uncertainty of 2 years . ) to the monthly sunspot number ( gray line ) for solar cycles 22 , 23 , and 24 .dashed lines show the gaussian fit to the sunspot number .note that color notations are the same as in figure 1.,height=240 ] ) to the monthly sunspot number ( gray line ) for solar cycles 22 , 23 , and 24 .dashed lines show the gaussian fit to the sunspot number .note that color notations are the same as in figure 1.,height=240 ] ) to the monthly sunspot number ( gray line ) for solar cycles 22 , 23 , and 24 .dashed lines show the gaussian fit to the sunspot number .note that color notations are the same as in figure 1.,height=240 ]the earliest used method of determining the solar activity level was through observing changes in the sunspot number . to determine how the x - ray background compares to sunspot number , we analyzed data of the monthly sunspot number from the solar influences data analysis center in belgium . in figure [ fig - ssn ], we show the sunspot number on the same scale as the 1-month averaged 1 - 8 x - ray background , for solar cycles 22 - 24 .we fit gaussians to the sunspot number data for each of the solar cycles 22 - 24 , with the best - fit parameters listed in table [ table - sunspot ] .in addition to the best - fit solar maximum date and associated sunspot number , table [ table - sunspot ] also includes the date and sunspot number of each of the peaks , obtained from analysis of the sunspot number data .peak 1 is the peak sunspot number before solar maximum and peak 2 is the peak sunspot number following solar maximum .given the difference in the binning method for the sunspot number from the x - ray background , we expect some variability in comparing the dates of solar max .we do find reasonable agreement , however , even when comparing with the 1-week binning of the 1 - 8 x - ray background ( table [ table - bestfit ] ) . for cycles 22 and 23 , the x - ray background solar maximum ( regardless of binning ) is within about 4 months from the sunspot number solar maximum .the cycle 24 estimates agree for peak 2 ( feb 2014 ; which is the peak to date of the analysis ) , which was the maximum in sunspot number to date , although there is a large difference in the peak 1 dates of 8 months .the peaks in the sunspot number and x - ray background are both higher preceding the solar max for cycles 22 and 23 .the sunspot peak numbers in cycle 22 , 192 for peak 1 and 173 for peak 2 , were higher than in subsequent cycles .the overall highest peaks in sunspot number from cycles 23 and 24 are 13% and 48% lower than the peak in cycle 22 .the solar soft x - ray emission is an important indicator of the state of the corona .while the mechanisms of coronal heating are poorly understood , the process is connected with solar magnetic activity ( e.g. , ) .previous soft x - ray studies have shown that variations exist in the derived luminosity from minimum to maximum , by a ratio of 5 to 6 times . with uniform observations over the past nearly three solar cycles ,the goes soft x - ray measurements provide a powerful database for characterizing the coronal variability and a tool for not only monitoring of flare activity but also for space weather forecasting . 
based on our analysis of the goes 1 - 8 ( 1.5 - 12kev ) observations from 1986present ,we have confirmed that the x - ray emission varies with solar cycle .we determined a soft x - ray background as the minimum flux in a 24-hr time period preceding each 1-minute goes observation . from our analysis , we show that the variance in this x - ray background follows a cyclical pattern from solar minimum to maximum .additional variations between solar cycles ( e.g. , differences in the solar maximum flux and length of the cycle ) are also found , with the peak background at solar maximum declining over the past two cycles . in particular , we find that the solar cycle 22 x - ray background peak of w m is 1.6 times the solar cycle 23 peak .the predicted peak for solar cycle 24 is w m , 25% lower than the peak background level in cycle 23 and only half of the peak level in cycle 22 .this variance is consistent with the variability found in the sunspot cycle during the same time periods , as shown in [ sunspot ] .further , we find that the soft x - ray emission during solar minimum has also declined over the past two cycles .the average level over the year of solar minimum preceding each solar cycle declined by a factor of from w m during solar cycle 22 to w m during solar cycle 23 . during solar cycle 24, the solar minimum average is unable to be determined reliably , since 72% of the measurements in solar minimum were below the goes threshold of w m .however , this evidence shows that the background was lower than the previous minimum , consistent with results from more sensitive soft x - ray instruments such as the sphinx x - ray spectrophotometer on the russian coronas - photon spacecraft ( e.g. , ) .these results carry merit as a historical study of the solar x - ray emission .additionally , they exhibit the potential of our technique for space weather climatology .one use is as an alternative method in determining the characteristics of the solar cycle .based upon our results , we predict the hemisphere - averaged maximum for solar cycle 24 as occurring in nov 2013 , with the peak so far having occurred in feb 2014 .the x - ray based predictions we made for the previous two solar cycles , along with the current cycle , were in good agreement with the sunspot cycle .our analysis also allows us to estimate the end date of the current cycle .our predicted end dates for solar cycles 22 and 23 were april 1997 and nov 2009 , respectively . in cycle 23, our predicted end date is year later than noaa swpc s agreed upon date of dec 2008 .our predicted end date for solar cycle 24 is sep 2020 .additionally , since the occurrence of x - ray flares are linked to solar activity and we have shown that the soft x - ray background scales with this , the x - ray background may also prove an important tool in x - ray flare forecasting .the intensity and number of flares , for instance , is also shown to scale with solar cycle . in future work, we will explore the use of this technique as a diagnostic for _ in - progress _ flare forecasting .we also plan to compare these solar cycle measures to those from observations with the coronagraph at the john w. evans solar facility of the national solar observatory at sacramento peak ..results from gaussian fits to the 18 x - ray background from goes using a variety of binning widths . 
the solar cycle , peak flux , solar maximum date and the corresponding decimal years since the beginning of the solar cycle ( scy ) , half - width of the solar cycle , date of the end of the cycle , and reduced chi - squared from the model fit are given . cases where the reduced chi - squared is much greater than one are indicated as " oversampled " .
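for reference, the background series that the fits above are built on, i.e. the minimum 1 - 8 flux in the 24 hours preceding each 1 - minute goes sample, can be extracted with a short pandas sketch; the function name is ours and the instrument threshold is left as an input rather than assumed.

```python
import pandas as pd

def xray_background(flux_1min, threshold):
    """Soft X-ray background: minimum flux in the trailing 24 h window ending
    at each 1-minute GOES sample (flux_1min is a Series with a DatetimeIndex);
    values below the instrument threshold are masked as unreliable."""
    bkg = flux_1min.rolling('24h', min_periods=1).min()
    return bkg.where(bkg >= threshold)

# weekly median background used for the fits, e.g.:
# xray_background(flux, thr).resample('7D').median()
```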
|
we present an alternative method of determining the progression of the solar cycle through an analysis of the solar x - ray background . our results are based on the noaa geostationary operational environmental satellites ( goes ) x - ray data in the 1 - 8 angstrom band from 1986 to the present , covering solar cycles 22 , 23 , and 24 . the x - ray background level tracks the progression of the solar cycle through its maximum and minimum . using the x - ray data , we can therefore make estimates of the solar cycle progression and of the date of solar maximum . based upon our analysis , we conclude that the sun reached its hemisphere - averaged maximum in solar cycle 24 in late 2013 . this is within six months of the noaa prediction of a maximum in spring 2013 .
|
until very recently , the celebrated properties of a scale - free degree distribution seemed to be incompatible with self - similar features of networks , in which the number of boxes of linear size scales with according to a power - law , with an exponent that is given by the fractal dimension . using the box - counting method , song _ _ et al ._ _ showed that many scale - free ( sf ) networks observed in nature can have a fractal structure as well .this result is striking , because the tiling and renormalization according to the linear size of the boxes , in which all pairs of nodes inside a box have mutual distance less than , appear to be physically relevant rather than being a formal procedure .therefore , the essential quantity in the tiling is the linear size of the box , , defined by being the maximal distance between the nodes of the box . in the renormalization procedurethe boxes are contracted to the nodes of the renormalized network whose edges are the interconnecting edges between the boxes on the original network . in this paperwe study the genetic regulatory network of two well - known organisms , _ saccharomyces cerevisiae _ and _ escherichia coli _ .we first determine the degree - distribution , that is the probability for finding a node with degree , to read off the exponent according to in order to check that the networks are scale - free .next we measure for various box - sizes to obtain the fractal dimension from .after renormalizing the networks according to the procedure proposed in , we measure the scaling behavior of the degree according to , where stands for the degree of a node in the renormalized network , is the largest degree inside the box that was contracted to one node with degree in the renormalization process , and is assumed to scale like with a new exponent .the invariance of under renormalization and the transformation behavior of the degree itself imply the relation between the exponents .therefore we check this relation by measuring , and comparing the values of from eq.[1 ] with the measured from the degree distribution .one of the important features of networks is their `` degree '' of assortativity .the notion of assortative mixing was known from epidemiology and ecology when it was introduced as a characteristic feature of generic networks by newman .assortativity refers to correlations between properties of adjacent nodes .one particular property is the ( in- or out-)degree of a node as the number of its ( in- or out-)going links , respectively .degree - degree correlations can be recorded as histograms ; in order to facilitate the comparison between networks of different size , they can be also characterized by the pearson coefficient .the pearson coefficient is obtained from the connected degree - degree correlation function after normalizing by its maximal value , which is achieved on a perfectly assortative network . here, stands for the average of having vertex degrees and at the end of an arbitrary edge .the pearson coefficient takes values between , it is positive for assortative networks ( for complete assortativity ) and negative for disassortative ones .we have measured this coefficient for a number of self - similar scale - free networks and present the results below .the reason why this feature is of interest in the present context is its relation to the power - law or exponential behavior of . 
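as a minimal sketch of the two correlation diagnostics just introduced, the degree - degree histogram and the pearson coefficient, the following python helpers compute them for an undirected networkx graph; the function names and the per - node averaging convention are ours. since the correlation is taken over both orientations of every edge, shifting to "remaining" degrees does not change the value of the coefficient.

```python
import numpy as np
import networkx as nx
from collections import defaultdict

def pearson_assortativity(G):
    """Degree-degree Pearson coefficient r: covariance of the degrees at the two
    ends of an edge, normalised by its maximal value (r = 1 for a perfectly
    assortative network, r < 0 for a disassortative one)."""
    j, k = [], []
    for u, v in G.edges():
        j += [G.degree(u), G.degree(v)]      # both orientations of every edge
        k += [G.degree(v), G.degree(u)]
    j, k = np.array(j, float), np.array(k, float)
    return ((j * k).mean() - j.mean() * k.mean()) / ((j ** 2).mean() - j.mean() ** 2)

def knn_at_distance(G, d=1):
    """Mean degree of the neighbours at shortest-path distance d, as a function
    of the node degree k; a decreasing curve indicates disassortative mixing
    at that distance (d = 1 gives the usual nearest-neighbour histogram)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for u in G:
        ring = [v for v, dist in
                nx.single_source_shortest_path_length(G, u, cutoff=d).items()
                if dist == d]
        if ring:
            sums[G.degree(u)] += np.mean([G.degree(v) for v in ring])
            counts[G.degree(u)] += 1
    return {deg: sums[deg] / counts[deg] for deg in sums}
```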
in particular , we are interested in the question whether disassortativity is scale - invariant on a qualitative level under renormalization according to the prescription proposed in , and why these properties go along .disassortative features in protein interaction networks were found and explained by maslov and sneppen on the level of interacting proteins and genetic regulatory interactions . according to their results links between highly connected nodesare systematically suppressed , while those between highly - connected and low - connected pairs of proteins are favored . in this waythere is little cross - talk between different functional modules of the cell and protection against intentional attacks , since the failure of one module is less likely to spread to another one .also in immunological networks one speaks of lock- and key - interactions between molecular receptors and antigenic determinants . in general , complementarity is essential for pattern recognition interactions , underlying biological and biochemical processes as well as for symbiotic species in ecological networks . of course, it is not at all obvious or necessary that complementarity in `` internal '' ( functional ) properties should be manifest in topological features like the degree - degree correlations .therefore we study the relation between self - similarity and degree - assortativity in this paper .for the genetic regulatory networks _saccharomyces cerevisiae _ and _ escherichia coli _ we observe a power - law behavior of for with for _ s. cerevisiae _ and , respectively .the obtained degree - distributions are scale - free and satisfy a power - law with exponent for _ s. cerevisiae _ and for _ e.coli_. the scaling relation ( [ 1 ] ) between the exponents and is also satisfied within the error bars for both networks . in table [ tab : table1 ] we summarize the results also for some additional networks , for which we list their properties of self - similarity and disassortativity . if we confirm the property of self - similarity it means not only the scaling behavior of according to a power - law , but also the numerical verification of the scaling relation of eq.[1 ] and the invariance of under renormalization .this is more conclusive , because it is sometimes difficult to disentangle exponential from power - law behavior of for networks with a small diameter ( for example see fig.[dbdk_yeastregulatorynet ] for the regulatory network of _s.cerevisiae _ with an inset that shows the same data points on a log - linear scale instead of the log - log scale ) , whereas the scaling relation only holds for a power - law of the decay , it is easier to prove or disprove . a confirmation of ( dis)assortativity refers to histograms with ( negative ) positive slope of next - neighbor degree - degree correlations and/or a ( negative ) positive pearson coefficient , respectively . in most caseswe measured the degree - degree correlations also between nodes at distance as indicated in the figures ._ s.cerevisiae_ : ( a ) normalized number of boxes as a function of linear box size to read off ( b ) rescaling factor as function of the box size to read off , width=321 ] refer to the genetic regulatory networks of _ s.cerevisia _ and _ e.coli_ , the scientific collaboration network , and the internet on the autonomous systems level . 
for these networksthe properties of column 2 and 3 were examined by us , while for the last three networks ( the biochemical pathway network of _e.coli_ , the actor network and the world - wide - web ) , the self - similarity was established before , and we studied their property of disassortativity in addition .in particular the actor network deserves some further comments .the actor - network is self - similar , but its positive pearson coefficient suggests that it is assortative , in contrast to all other self - similar networks we have studied so far. a closer look at its next neighbor - degree - degree correlation ( fig.[kknn_actor ] ) shows an assortative behavior for degrees up to the order of 1000 , but slowly decays for larger degrees and becomes disassortative . the degree - degree correlation between nodes at distance larger than 1 is decreasing with degree for all . moreover , some comments are in order to the yeast - genetic regulatory network with 3456 nodes and 14117 edges , ( cf .fig.[dbdk_yeastregulatorynet ] ) .since it has a diameter of 9 , the largest value for the tiling is 10 .therefore we have only 8 data points available for the fit .each point corresponds to an average over 100 tiling configurations .different tiling configurations result from different starting seeds as well as the random selection of neighbors during the tiling process .the data point at in fig.[dbdk_yeastregulatorynet ] lies clearly outside the fluctuations about the average over different tiling configurations , thus outside the error bars , which are at least two orders of magnitude smaller than the respective value of , so that they are not visible on the scale of the figure . in fig .[ dbdk_ecoliregulatorynet_zheng_largedata ] , we find a similar behavior for _ e.coli_. the deviation from the power - law behavior at goes along with an assortative degree - degree correlation between nodes at distance as it is seen from fig.[kknn_yeast_ecoli_regulatorynet]a and fig.[kknn_yeast_ecoli_regulatorynet]b , showing the degree - degree correlation of _s.cerevisiae _ and _ e.coli_ , respectively , at distances .the data in fig.[kknn_yeast_ecoli_regulatorynet ] explicitly show the disassortative behavior at and for both _s.cerevisiae _ and _ e.coli_. however , for , we find that there is a certain value of , at which abruptly increases and slowly decreases for . here for _ s.cerevisiae _ and for _ e.coli_. these mixed properties of assortativity and disassortativity seem to go along with the deviation from the power - law behavior of . on a qualitative level , this is plausible if we focus on a hub that should be present in a scale - free network . in an assortative network ( assortative say at distance , for example ) , this hub is likely connected to another hub within the distance . if this hub is chosen as a seed of a box in a tiling with linear box size , we need much less boxes to cover the many nodes in the neighborhood of the hub than in a disassortative network . 
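the tiling average just described ( random starting seeds, random absorption of neighbours, 100 tiling configurations per box size ) can be sketched as follows, assuming networkx for the shortest - path queries; this greedy ball covering only upper - bounds the minimal number of boxes and is a stand - in for the covering algorithms used in the literature, but the fractal dimension d_B is read off in the same way, from the slope of log N_B against log l_B.

```python
import random
import networkx as nx

def box_count(G, lB, n_config=100, seed=0):
    """Average number of boxes needed to cover G at linear box size lB.

    Each still-uncovered node, visited in random order, seeds a box absorbing
    all uncovered nodes within graph distance floor((lB - 1) / 2) of the seed,
    which guarantees that any two nodes in a box are less than lB apart."""
    radius = (lB - 1) // 2
    rng = random.Random(seed)
    nodes = list(G.nodes())
    counts = []
    for _ in range(n_config):
        rng.shuffle(nodes)
        covered, n_boxes = set(), 0
        for s in nodes:
            if s in covered:
                continue
            n_boxes += 1
            ball = nx.single_source_shortest_path_length(G, s, cutoff=radius)
            covered.update(ball)             # the seed and its ball form one box
        counts.append(n_boxes)
    return sum(counts) / len(counts)

# d_B is then the (negative) slope of log box_count(G, lB) versus log lB
```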
in a network which is assortative not only for a certain range of , but for all and at distances , like the scientific collaboration network , actually decays faster than power - like for all , as it is seen from the exponential fit of fig.[scientificcollaboration]a .the scaling relation between the exponents and assumes the scale - invariance ( under renormalization ) of the degree distribution , that is the invariance of the exponent .similarly , it is of interest how the disassortativity transforms under renormalization ( as defined in ) . as we see from fig.[pearson ], even networks like the scientific collaboration network ( fig.[scientificcollaboration ] ) , which are originally assortative , transform to more and more disassortative ones under iterated renormalization .( the number of renormalization steps is determined by the size of the networks , in particular by its diameter .the final step is achieved when the reduced network consists of just one node . )therefore the transformation behavior of disassortativity seems to be an effect of the renormalization procedure rather than an intrinsic self - similar property of the network .similarly , we measured the transformation behavior of the clustering coefficient under renormalization of self - similar networks .as the data show , it is an invariant property of scale - free networks , while it changes under renormalization for non - self - similar ones like the barabsi - albert one .to summarize , we find numerical evidence that self - similar scale - free networks are preferably disassortative in their degree - degree correlations . for biological networks this result may reflect the complementarity in interactions that is observed on various levels .c.song,s.havlin and h.a.makse , nature * 433 * , 392 ( 2005 ) n.m.luscombe , m.m.babu , h.yu , m.snyder , s.teichmann and m.gerstein , nature * 431 * , 308 ( 2004 ) http://www.ccg.unam.mx/computational_genomics /regulondb / datasets / regulonnetdatasets.html and http://www.gbf.de/systemsbiology r. albert and a .-barabsi , rev .mod . phys . * 74 * , 47 ( 2002 ) m.e.j.newman , phys.rev.lett.*89 * , 208701 ( 2002 ) ; m.e.j.newman and j.park , phys.rev.e * 68 * , 036122 ( 2003 ) s.maslov and k.sneppen , science * 296 * , 910 ( 2002 ) m.copelli , r.m.zorzenon dos santos and j.s.sa martins , cond - mat/0110350 ( 2001 ) www.nd.edu/networks p.uetz _ et al ._ , nature * 403 * , 623 ( 2000 ) the additional data are available on the web http://imperia.iu-bremen.de /ses / physics / ortmanns/36753/index.shtml a .-barabsi and r. albert , science * 286 * , 509 ( 1999 )
|
self - similar networks with a scale - free degree distribution have recently attracted much attention , since these apparently incompatible properties were reconciled by an appropriate box - counting method that enters the measurement of the fractal dimension . we study two genetic regulatory networks ( _ saccharomyces cerevisiae _ and _ escherichia coli _ ) and show their self - similar and scale - free features , in extension to the datasets studied previously . moreover , by a number of numerical results we support the conjecture that self - similar scale - free networks are not assortative . from our simulations so far these networks seem to be disassortative instead . we also find that the qualitative feature of disassortativity is scale - invariant under renormalization , but it appears to be an intrinsic feature of the renormalization prescription , as even assortative networks become disassortative after a sufficient number of renormalization steps .
|
when the shear stress exerted by wind on a sandy surface is sufficietly strong , sand grains are lifted from the sand bed and are transported by wind to sediment downstream .the raising sand grains follow a ballistic trajectory influenced by drag and gravity , eventually impacting again on the surface and inducing new particles to detach from the surface .this phenomenon , knows as saltation , generates a layer close to the sand bed with a typical maximum height of 10 - 20 cm .saltation is the main reason of erosion of sandy surfaces and together with the consequent sedimentation of sand particles it is the main reason of dune motion and accumulation of sand in specific regions where recirculation occurs .the engineering interest in understanding and simulating the dynamics of windblown sand , e.g. dune fields of loose sand , is dictated by their interaction with a number of human infrastructures in arid environments , such as roads and railways , pipelines , industrial facilities , farmlands , towns and buildings as shown in fig .[ figure_1 ] .moving intruder sand dunes , soil erosion and/or sand contamination can be comprehensively ascribed , from a phenomenological point of view , to non - equilibrium conditions , where the two processes , erosion and sedimentation , do not balance , leading to the erosion or deposition of sand on the soil and eventually to the evolution of that interface . in other terms ,such non - equilibrium situations are the most interesting cases from the applicative point of view .bagnold was the first who studied sand erosion and postulated a relation for the sand flux , determining the importance of wind speed and of the related shear stress on the sand surface .later authors introduced several corrections to bagnold s rule , but all the models have in common the observation that a sand grain is ejected from a sand bed if and only if the shear stress at the surface is larger than a threshold value .sauermann et al . observed that saltation reaches a steady state after a transitory phase of 2 seconds .after this period the trajectories are statistically equivalent for the ensemble of grains .this phenomenon happens because the new ejected particles increase the sand concentration in the saltation layer and this reduces the speed of saltating grains .so , a steady state is reached when all particles are ejected with the same velocity ( see also ) .a nice mathematical models of the saltation phenomenon is proposed by herrmann and sauermann who studied the dynamics of the surface of a dry granular bed dividing the sand bed into a non - moving time - dependent region providing sand mass and another time - dependent region above it in which sand particles can move transported by the wind .they propose a model averaged over the vertical coordinate , presenting a free boundary . 
coupled a model with a multiphase approach in which the slip of the dispersed phase is modeled by an algebraic model .similar turbulent one - dimensional models are proposed in , however without a multiphase coupling .kang and coworkers instead couple a multiphase model for the fluid flow with a particle method for the sand grains .a similar coupling was also used in where however the wind flow was computed indipendently from the presence of sand particles via a suitable turbulence model , typically the k- model .sedimentation has also been widely studied in the literature starting from several applications mainly in environmental and chemical engineering .one of the most important component in this phenomenon is the drag force experienced by the sedimenting particles that has driven a lot of attention by many authors as well reviewed in . differently from previous papers , here we will propose a comprehensive multiphase model for the entire process including sand erosion , wind transport , and sedimentation , that working also in non - equilibrium conditions is able to deal with the development of the stationary saltation layer starting from generic initial and boundary conditions and in particular from clear air and oversaturated situations . in order to dothat we develop a so - called first order model ( in time ) of sand erosion , transport and deposition , that can be easily tuned using experimental test cases . the resulting advection - diffusion equation for the suspended phase can then be coupled with a model describing the turbulent fluid flow .the mathematical model can then be solved with the aid of the fundamental erosion / deposition boundary condition at the sand bed , that depends on the shear stress .the plan of the paper is then the following .after this introduction , section 2 presents the mathematical model mainly focusing on the advective phenomena , on the microscopic dynamics related to the collision between sand grains , and on the erosion boundary condition .the result of some numerical simulations focusing on how the stationary condition is reached when wind blows over a heterogeneous sand bed are reported in section 3 .we consider the flow of sand as a multiphase system composed of sand grains in air .single sand grains have a density and float in air with a volume ratio ( typically well below 1% ) , so that the partial density of sand in air is .saturation obviously implies that where is the volume ratio of air .the mixture of air and sand grains is flowing on a sandy surface having a close packing volume ratio . because wind flow is in a turbulent regime the fluid phase is modelled by the reynolds - averaged navier - stokes equations ( rans ) equations .more precisely , a turbulence model is selected to provide the closure \\ & \frac{\partial k}{\partial t}+\nabla\cdot ( k\vv_f)= \nabla\cdot[(\nu_a+\nu_t)\nabla k]+p_k-\gamma\omega k\\ & \frac{\partial \omega}{\partial t}+\nabla\cdot ( \omega\vv_f)= \nabla\cdot[(\nu_a+\nu_t)\nabla \omega]+p_\omega - c_\omega\omega^2\\ \end{aligned } \right.\ ] ] with standard boundary conditions . in ( [ ns ] ) is the turbulent kinetic energy , is the specific dissipation rate , and are , respectively , air and turbulence viscosities , and are the production terms for and , and and are two empirical costants . 
in describing the transport of sandwe start observing that while sand particles are trasported by the wind they drift down with a characteristic sedimentation velocity due to the action of gravity .in addition , particle collide giving rise to an extra - flux term .hence , one we can write the following equation for the sand volume ratio where this closure can be actually deduced under suitable modelling assumptions from a more general multiphase model involving mass and momentum balance for the suspended phase .the sedimentation velocity can be evaluated by the balance of drag and buoyancy forces and strongly depends on the grain size .for instance , if we define the particle reynolds number as the one felt by the sand grains of diameter during their flow and therefore based on the relative velocity between air and solid particles , , then in the so - called newton regime , corresponding to particle reynolds numbers above 500 , the drag coefficient is approximately constant ( for instance , is used in ) , so that one has the classical relation however , at the other extreme , i.e. , for particle reynolds number below few units , corresponding to the so - called stokes regime , , so that one has the classical stokes sedimentation velocity from the distance from the sand bed . ]considering that in aeolian sand trasport the phenomenon is limited to the first few centimeters from the ground and that there the particle reynolds number is below 100 ( see fig .[ figure_2]a ) , a better evaluation of the sedimentation velocity with respect to eqs .( [ vsed ] , [ vsed2 ] ) can be obtained fitting the experimental dependence of the drag coefficient on the particle reynolds number as reviewed in and shown in fig .[ figure_2]b . coming to the collision term , already introduced by batchelor , neclectingit would imply that sand grains are only transported under the action of drag and gravity .however , collisions among particles have the important non - negligible effect of generating a sort of diffusion of sand particles from higher to lower density areas , that results fundamental in this modelling framework for the stationary formation of the saltation layer . for high volume ratios near close packing , auzerais et al . suggested the following nonlinear law on the basis of experimental data \,.\ ] ] such a term enforces the need of avoiding that the close packing volume ratio is reached for the sedimenting mass .however , as in wind - blown sand applications , the relation can be simplified to from the constitutive viewpoint , the collision terms ( [ fcoll2 ] ) or ( [ fcoll3 ] ) can be considered as deriving from treating the ensemble of particles as a gas , so that the stress term for the solid constituent is isotropic through a coefficient that depends on the particle density . substituting eqs .( [ fcoll3 ] ) and ( [ closure ] ) into ( [ mass ] ) gives the nonlinear degenerate advection - diffusion equation actually , if the limit value is also allowed , one has the linear case sometimes used in the literature . the coefficient be considered as composed of three contributions that take into account of the possible dependence from * the shear rate , or better the velocity gradient ; * the turbulence of the flow , so that this term results from the integration of the cfd simulation in a turbulent regime ; * the molecular diffusion , but as reported in , this term can be neglected with respect to other quantities . 
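as a concrete illustration of the drag - gravity balance used above for the sedimentation velocity, the sketch below iterates the balance to the terminal velocity; the schiller - naumann correlation c_d = (24/re)(1 + 0.15 re^0.687) is used only as a stand - in for the fitted experimental c_d(re_p) curve the text refers to, and the default material parameters are illustrative, not the paper's values.

```python
import math

def settling_velocity(d, rho_s=2650.0, rho_a=1.2, mu=1.8e-5, g=9.81,
                      tol=1e-10, max_iter=200):
    """Terminal sedimentation velocity of a spherical grain of diameter d [m]
    from the balance  C_d * (rho_a/2) v^2 (pi d^2/4) = (pi/6) d^3 (rho_s - rho_a) g,
    iterated because C_d depends on the particle Reynolds number."""
    v = 0.01                                          # initial guess [m/s]
    for _ in range(max_iter):
        re = max(rho_a * v * d / mu, 1e-12)           # particle Reynolds number
        cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687)   # Schiller-Naumann (valid up to Re ~ 800)
        v_new = math.sqrt(4.0 * d * g * (rho_s - rho_a) / (3.0 * rho_a * cd))
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v
```

for a 250 - micron grain this gives a velocity of the order of one to two metres per second, with a particle reynolds number of a few tens, i.e. between the stokes and newton regimes as noted above.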
actually , starting from the obvious observation that the behaviour of sand particles is isotropic .objectivity , i.e. , independence of the constitutive dependence from the reference frame , implies that is a scalar isotropic function of a tensor . by the representation theorem of isotropic function then only depend on the invariants of the rate of strain tensor .however , since the flow can be considered as a perturbation of a shear flow in the vertical plane , the leading contribution is the second invariant $ ] . for this reasonwe assume that .this is an important generalization because all papers refers to a dependence on the wind velocity , which is a well defined quantity only on horizontal surfaces , while more general surfaces like dunes require a dependence from an objective invariant of the rate of strain tensor . in order to understand the meaning of the collision term we can look for stationary configurations for which all quantities depend only on the quote and velocitiesare directed along a flat plane ( at in the direction ) . in this case , the stationary profile in the saltation layer can be obtained integrating jointly with the boundary condition at the sand bed . in the simplest case in which is constant and , one immediately has an exponential profile with a characteristic length related to the thickness of the saltation layer given by and an integration constant related to the erosion boundary condition . in general, referring to fig .[ figure_2]c the dependence of the coefficient on the distance from the sand bed shows a strong diffusion closer to the surface and a low diffusion at a distance close to the height of the saltation layer , while particles that escape from the saltation layer are again strongly mixed due to the increasing diffusion due to turbulence .the general features of the erosion boundary condition can be inferred from experiments known for a long time that show that , generally speaking , erosion only occurs if the shear stress at the surface exceeds a threshold level , or equivalently that exceeds a threshold level ( see and referencces therein ) . referring to the last notation , because it is the one classically used in the literature ( though from the numerical point of view what is computed is whichis then compared to ) one has the flux boundary condition where stands for the positive part of .bagnold s formula is quite successful in determining the dependence of from the grain diameter . on the other hand ,the quantification of the parameter is not straightforward because most of the experiments measures the horizontal sand flux parallel to the surface while for the boundary condition one would need some knowledge of the vertical flux perpendicular to the surface which is the one related to the ejection of sand grains from the sand bed .very recently , on the basis of their experiments ho et al . 
proposed , so that the sand flux due to erosion takes the form where is the sand grain ejection vertical velocity evaluated experimentally and is a dimensionless free parameter to be fitted to experimental sand flux profiles .as domain of integration we focus on the flow over a horizontal heterogeneous lane .this is neither an artificial situation , nor a case of limited importance .in fact , most of the landforms in arid regions and roads are well approximated by a horizontal flat plane .this geometrical setup is also retained in most of the wind tunnel experimental studies present in the literature that are however mainly addressed to the characterisation of the windblown sand concentration and flux in uniform , in - equilibrium steady state conditions. nevertheless , uniform and steady state conditions are excessively ideal ones . in these situationsthe incoming wind already transports the maximum allowable sand density , so that erosion and deposition are balanced , and hence the sand bed surface is neither scoured , nor accumulated .conversely , in many engineering applications , attention must be paid to non - equilibrium conditions that will cause erosion or settlement of the sand bed .so , as sketched in fig .[ figure_3 ] , our horizontal plane is characterized by the alternation of erodible sandy regions and non - erodible regions , corresponding for instance to a street . in the simulationthe inflow wind is clean and with a fully developed logarithmically shaped velocity profile , with and ranging between 0.3 and 1 .the threshold value for erosion is and the grain size is . from the simulation shown in figures [ figure_4 ] and [ figure_5 ] , as soon as wind overcomes the boundary between the non - erodible and erodible surface ( that is put at a distance from the inflow boundary ) the saltation layer starts to develop .figure [ figure_4]a plots the profiles of the sand volume ratio at several points at the beginning of the erodible zone .the model correctly predicts the progressive uptake of sand and increase in the depth of the saltation layer , till saturation is reached because of the equilibrium between erosion and sedimentation .the thickness of the saltation layer at equilibrium is about 10 cm in qualitative agreement with experiments .viceversa , as shown in fig .[ figure_4]b , at the beginning of the second non - erodible zone , there is a reduction of the windblown sand density in the upper part of the stream because of the sedimentation process , not balanced by saltation . actually , due to diffusion , some sand also diffuses upstream , mainly very close to the surface where diffusion is dominant . referring to fig .[ figure_5]c , in this region one can notice an exponential growth of the scaled total sand flux where is the sand flux at equilibrium .after the first soil discontinuity at convection dominates and saturates in a length close to ( see fig .[ figure_5]b ) , that is nearly independent from the scaled wind velocity , in qualitative agreement with . when the erodible surface ends the total sand flux readily decreases in a characteristic distance that increases with the wind velocity as shown in fig .[ figure_5]b .it can be noticed from fig .[ figure_5]d that the decrease is less than exponential in qualitative agreement with . 
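to make the build - up of the saltation layer concrete, the following one - dimensional column sketch time - steps the settling / diffusion balance of eq . ( [ advdiff ] ) with an erosion - minus - deposition flux at the bed; the generic threshold law used for the erosion flux, the constant diffusivity and every numerical value are illustrative assumptions, not the calibrated two - dimensional model whose results are discussed above. at steady state the column reproduces the exponential profile mentioned earlier, with decay length D / w_sed and an amplitude fixed by the erosion boundary condition.

```python
import numpy as np

def saltation_column(u_star, u_star_t=0.25, w_sed=1.5, D=0.02, gamma=1e-4,
                     H=0.5, nz=200, t_end=5.0):
    """Explicit 1D column model of the build-up of the saltation layer.

    phi(z, t) obeys  d(phi)/dt = d/dz ( w_sed*phi + D*d(phi)/dz ),
    with a bed flux  E - w_sed*phi(0)  and a generic threshold erosion law
    E = gamma * max(u_star^2 - u_star_t^2, 0)  standing in for the calibrated
    expression of the paper."""
    dz = H / nz
    z = (np.arange(nz) + 0.5) * dz
    phi = np.zeros(nz)                                   # start from clear air
    dt = 0.4 * min(dz / w_sed, dz ** 2 / (2.0 * D))      # advection/diffusion stability limit
    E = gamma * max(u_star ** 2 - u_star_t ** 2, 0.0)    # erosion flux at the bed
    for _ in range(int(t_end / dt)):
        F = np.zeros(nz + 1)                             # upward fluxes at cell faces
        F[1:nz] = -w_sed * phi[1:] - D * (phi[1:] - phi[:-1]) / dz
        F[0] = E - w_sed * phi[0]                        # erosion minus deposition at the bed
        F[nz] = 0.0                                      # no sand supplied from above
        phi += dt * (F[:-1] - F[1:]) / dz                # conservative update
    return z, phi
```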
in conclusion ,the model ( [ ns],[advdiff ] ) jointly with the fundamental erosion boundary condition ( [ bc ] ) and other standard initial and boundary conditions is able not only to describe the erosion / transport / sedimentation process in stationary situation , but also to capture all features characterizing the development of the saltation layer up to equilibrium and of its reduction , that occur in nature when the sandy surfaces is heterogeneous .e. barnea , j. mizrahi , ( 1973 ) . a generalized approach to the fluid dynamics of particulate systems : part 1 .general correlation for fluidization and sedimentation in solid multiparticle systems , _ chem .j. _ * 5 * , 171189 .m. creyssels , p. dupont , a. ould el moctar , a. valance , i. cantat , j.t .jenkins , j.m .pasini , k.r .rasmussen , ( 2009 ) .saltating particles in a turbulent boundary layer : experiment and theory , _ j. fluid mech ._ * 625 * , 4774 .
|
three phenomena are involved in sand movement : erosion , wind transport , and sedimentation . this paper presents a comprehensive , easy - to - use multiphase model that includes all three aspects , with particular attention to situations in which erosion due to wind shear and sedimentation due to gravity are not in equilibrium . the interest lies in the fact that these are the situations leading to a change of the profile of the sand bed . multiphase model , sand transport , erosion , sedimentation 81.05.rm 76t15 , 76t25
|
estimation and detection are two main concerns in the course of designing a communication system . the main goal is to design optimal demodulators at the receiver side providing the detector with the necessary sufficient statistics for its decision on the transmitted symbol at a specific observation interval .furthermore , the optimization of the decision device is also a target , i.e. , its design based on such statistical tests which rely on sufficient statistics and minimize the probability of error .a different setup of optimal designs related to radar and sonar systems is to detect the presence of either a deterministic or random signal in noise with least probability of error or false alarm .although the two aforementioned setups have conceptual differences , they are usually treated in the same fashion .first , an optimal demodulator is necessary to deliver the sufficient statistics to the decision device .then , the decision device , that optimally uses these sufficient statistics , has to be derived .the optimal design of the decision device is formulated in any case as a hypotheses testing problem .moreover , the optimization of the transmitter is another related problem . in this case, the problem turns to be the design of optimal transmission sets , such that the end performance metric , i.e , the probability of error is minimized .depending on the degree of knowledge about the transmission channel at the receiver side , the detector can be coherent , semi - coherent or noncoherent .the more information about the transmission channel is available , the better the receiver s performance will be .this justifies the fact that the receivers usually have a built - in channel estimator . in the communication and signal processing literature ,the usual channel estimators are the minimum variance unbiased ( mvu ) and the minimum mean square error ( mmse ) estimators . the combination of these channel estimators with the optimal decision devices is usually considered to address the problem of determining the optimal receiver. current physical layer ( phy ) standards that have attracted a lot of attention both from the mobile industry and the research community are the wireless interoperability for microwave access ( wimax ) , the long term evolution ( lte ) and the digital video broadcasting ( dvb ) either in its terrestrial ( dvb - t ) or its handheld ( dvb - h ) versions .these standards are orthogonal frequency division multiple access ( ofdma ) based and they can satisfy the need for shorter communication links to provide truly broadband connectivity services . in these systems , either mvu / least squares ( ls ) or mmse channel estimatorsare used , usually employing some sort of estimate interpolation through the frame if the goal is to track a time - varying channel . 
in this paper , we re - examine the validity of the common belief that the mvu and mmse channel estimators are the best choices to be combined with the optimal detectors , delivering an overall optimal receiver , when finite - sample training is used to estimate the channel .to this end , ideas originating from the system identification field are employed .recent results in optimal experiment design indicate that it is better to design the optimal training for the estimation of a certain set of unknown parameters with respect to optimizing the end performance metric rather than the mean square error of the parameter estimator itself .we will slightly modify this idea and we will examine if the aforementioned channel estimators are the best choices , when the selection of the channel estimator is made with respect to an appropriately defined end performance metric . for illustration purposes , this study is performed on a toy channel model , namely a single input single output ( siso ) flat fading channel with additive white gaussian noise ( awgn ) .the initial focus is on two different mse criteria .these mse criteria serve to demonstrate the dependence of the optimal channel estimators on the end performance metrics .their choice is based on the simplicity of the analysis that they allow .then , using the obtained results , we will examine the case of the error probability as the performance metric of interest .we show that for several performance metrics examined in this paper , the mvu and mmse channel estimators are suboptimal , while we propose ways to obtain better channel estimators .finally , we numerically compare the performances of the derived channel estimators with those of the mvu and mmse channel estimators for all performance metrics in this paper .these comparisons verify that the optimality of the usual channel estimators with respect to common end performance metrics is questionable .this paper is organized as follows : section [ sec : probst ] defines the problem of designing the channel estimator with respect to the end performance metric .section [ sec : prelim ] presents some results and comments that will be useful in the rest of the paper , while it introduces approximations of the performance metrics that the rest of the analysis will be based on .the optimality of the mvu and mmse channel estimators with respect to the minimization of the symbol estimate mse is examined in section [ sec : dmse ] and subsections therein , while uniformly better channel estimators are also proposed . 
the same analysis as in section[ sec : dmse ] is pursued in section [ sec : emse ] for a differently defined symbol estimate mse and in section [ sec : minpe ] for a rough approximation ( variation ) of the error probability performance metric .section [ sec : sims ] illustrates the validity of the derived results .finally , section [ sec : concl ] concludes the paper .the received signal model for a siso system , when the channel is considered to be narrowband block fading , is given as follows : where is the observed signal at the receiver side at time instant , is the complex channel impulse response coefficient , is the transmitted symbol at the same time instant taken from an m - ary constellation and is complex , circularly symmetric , gaussian noise with zero mean and variance .given an equiprobable distribution on the constellation symbols , we further assume that =0 ] , while our modulation method is memoryless .in addition , and are independent random sequences , while is a white random sequence .assume that a maximum energy and a training length of time slots are available at the transmitter for training .we can collect the received samples corresponding to training in one vector : where ^{t} ] is the vector of training symbols and ^{t} ] denotes the decision of the detector , when the transmitted symbol is .in essence , the ml detector minimizes the probability of error , when the transmitted symbols are equiprobable . when the receiver has a channel estimate , is replaced by in the last expression .a different kind of performance metric is the mse of a _ linear _ symbol estimator . in this paper, we will call the symbol estimator an _ equalizer_.the equalizer uses the channel knowledge and delivers a soft decision of the transmitted symbol , i.e. , a symbol estimate .we will call _ clairvoyant _ the equalizer that has perfect channel knowledge . denoting this equalizer by , we can find its mathematical expression as follows : , \label{eq : mmsechoice}\ ] ] where the expectation is taken over the statistics of and .if we set the derivative of the last expression with respect to to zero and we solve for , then the optimal clairvoyant equalizer is given by the expression we will call this the mmse clairvoyant equalizer .we observe that as the snr increases , i.e. , , .we will call the _ zero forcing _ ( zf ) clairvoyant equalizer . using the above definitions and assuming that the receiver has only an estimate of the channel , the system performance metric is the symbol estimate mse : .\label{eq : msedirect}\ ] ] the mse given by ( [ eq : msedirect ] ) can be defined in two different ways : if we assume that the channel is an unknown but otherwise deterministic quantity , then the expectation in ( [ eq : msedirect ] ) does not consider .this leads to an mse expression dependent on the unknown channel . in this case ,only the channel estimators that treat the channel as an unknown deterministic variable are meaningful .if we assume that the unknown channel is a random variable , then we can average the mse expression over . in this case , both the estimators that treat the channel as an unknown deterministic variable or as a random variable are meaningful .the former represents the case where the system designer chooses to ignore the knowledge of the channel statistics in the selection of the channel estimator for some reason . 
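the symbol - estimate mse of an equalizer built from an imperfect channel estimate can be evaluated by monte carlo as in the sketch below, which applies both the zero - forcing and the mmse equalizer with the estimate plugged in, as described above; the function name, the qpsk mapping and the sample size are ours.

```python
import numpy as np

def symbol_estimate_mse(h_hat, h, sigma_x2, sigma_w2, n_sym=200000, seed=0):
    """Monte Carlo symbol-estimate MSE E|x - x_hat|^2 for ZF and MMSE equalizers
    that use the (possibly imperfect) estimate h_hat, with the true channel h
    held fixed (QPSK symbols of power sigma_x2, AWGN of power sigma_w2)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, size=(n_sym, 2))
    x = np.sqrt(sigma_x2 / 2) * ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1))
    w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    y = h * x + w
    f_zf = 1.0 / h_hat                                                    # zero-forcing
    f_mmse = np.conj(h_hat) * sigma_x2 / (abs(h_hat) ** 2 * sigma_x2 + sigma_w2)
    return {'zf': np.mean(abs(x - f_zf * y) ** 2),
            'mmse': np.mean(abs(x - f_mmse * y) ** 2)}
```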
in the following , we focus on the zf equalizer , which becomes optimal as the snr increasesthis choice is made to preserve the simplicity of this paper and to highlight the derived results .the previous mse definition implies the definition of yet another mse that is meaningful in the context of communication systems .given an equalizer , we can define the excess of the symbol estimate based on an equalizer that only knows a channel estimate over the equalizer with perfect channel knowledge , thus leading to .\label{eq : mseexcess}\ ] ] in the sequel , this metric will be called _ excess _ mse .our goal will be to determine the optimal channel estimators for fixed training sequences so that each performance metric based on a given equalizer is minimized .to this end , the following section presents some useful ideas .consider the mvu estimator .since it is an unbiased estimator , it satisfies .this condition implies that =h ] , one can obtain {\mbox{\boldmath }_{\rm tr}^{}}}{e[|h|^2]\|{\mbox{\boldmath }_{\rm tr}^{}}\|^2+\sigma_w^2}.\label{eq : mmse}\ ] ] the of the zf equalizer using a deterministic channel ( `` dc '' ) assumption is \sigma_x^2+\sigma_w^2e\left[\frac{1}{|\hat{h}|^2}\right],\label{eq : mseddczf}\ ] ] the corresponding for random channel ( `` rc '' ) is : \right]\sigma_x^2+\sigma_w^2e_h\left[e\left[\frac{1}{|\hat{h}|^2}\right]\right],\label{eq : msedrczf}\ ] ] while for the we accordingly have : \left(\sigma_x^2+\frac{\sigma_w^2}{|h|^2}\right)\label{eq : emsezf}\ ] ] ( c.f .( [ eq : mseexcess ] ) ) .the is obtained by averaging the last expression over .depending on the probability distributions of and , the above mse expressions may fail to exist .the mses will be finite if the probability distribution function ( pdf ) of is of order as .a similar condition should hold for the pdf of in the case of . in the opposite case ,we end up with an _ infinite moment _ problem . in order to obtain well - behaved channel estimators that will be used in conjunction with the actual performance metrics ,some sort of regularization is needed . some ideas for appropriate regularization techniques to usemay be obtained by modifying robust estimators ( against heavy - tailed distributions ) , e.g. , by trimming a standard estimator , if it gives a value very close to zero .an example of such a trimmed estimator is given as follows : where can be any estimator and a regularization parameter ._ remark : _ clearly , the reader may observe that the definition of the trimmed preserves the continuity at .additionally , the event has zero probability since the distribution of is continuous .therefore , in this case can be arbitrarily defined , e.g. , .we focus now on the .assume a fixed .in the appendix , we show that , for a sufficiently small and a sufficiently high snr during training , minimizing is equivalent to minimizing the following approximation =\frac{e\left[|\hat{h}-h|^2\right]}{e\left[|\hat{h}|^2\right]}\sigma_x^2+\sigma_w^2\frac{1}{e\left[|\hat{h}|^2\right]}.\label{eq : mseddczf0}\ ] ] following similar steps and using some minor additional technicalities , we can work with =\frac{e_h\left[e\left[|\hat{h}-h|^2\right]\right]}{e_h\left[e\left[|\hat{h}|^2\right]\right]}\sigma_x^2+\sigma_w^2\frac{1}{e_h\left[e\left[|\hat{h}|^2\right]\right]},\label{eq : msedrczf0}\ ] ] instead of .moreover , ] can be defined accordingly .we will call the last approximations _ zeroth order _ symbol estimate mses and excess mses , respectively . 
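a minimal sketch of the three estimators appearing above, the mvu / least - squares estimate, the linear mmse estimate built from the prior variance e[|h|^2], and the trimming operation that keeps 1/|h - hat| bounded, is given below; the function names and the toy training values are ours.

```python
import numpy as np

def mvu_estimate(y_tr, x_tr):
    """Least-squares / MVU channel estimate  x_tr^H y_tr / ||x_tr||^2."""
    return np.vdot(x_tr, y_tr) / np.vdot(x_tr, x_tr).real

def mmse_estimate(y_tr, x_tr, sigma_w2, h_var):
    """Linear MMSE estimate; h_var is the prior channel variance E|h|^2."""
    return h_var * np.vdot(x_tr, y_tr) / (h_var * np.vdot(x_tr, x_tr).real + sigma_w2)

def trim(h_hat, lam):
    """Trimmed estimate: values with |h_hat| <= lam are pushed out to the circle
    of radius lam (keeping the phase) so that 1/|h_hat| stays bounded; the value
    returned at exactly zero is arbitrary, a zero-probability event."""
    mag = abs(h_hat)
    if mag > lam:
        return h_hat
    return lam if mag == 0.0 else lam * h_hat / mag

# toy training block: 4 unit-energy pilots, true channel h, AWGN of variance sigma_w2
rng = np.random.default_rng(0)
x_tr, h, sigma_w2 = np.ones(4, dtype=complex), 0.8 - 0.3j, 0.1
y_tr = h * x_tr + np.sqrt(sigma_w2 / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(mvu_estimate(y_tr, x_tr), mmse_estimate(y_tr, x_tr, sigma_w2, h_var=1.0))
```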
the following analysis and results will be based on the zeroth order metrics and they will reveal the dependency of the channel estimator s selection on the considered ( any ) end performance metric . _ remarks : _ 1 .a useful , alternative way to consider the zeroth order mses is to view them as affine versions of normalized channel mses , where the actual true channel is and the estimator is .2 . in the definition of ( [ eq : mseddczf0 ] ), one can observe that after approximating the mean value of the ratio by the ratio of the mean values the infinite moment problem is eliminated . in the following ,all zeroth order metrics will be defined based on the _ non - trimmed _ to ease the derivations .this treatment is approximately valid when is sufficiently small as it is actually shown in eq .( [ eq : msexdcapp ] ) of the appendix .we now examine the zeroth order symbol estimate mse in the case of the zf equalizer .the optimality of the mvu and mmse channel estimators will be investigated .additionally , the training sequence is assumed fixed .the channel is considered either deterministic or random , depending on the available knowledge of a priori channel statistics and the will of the system designer to ignore or to exploit this knowledge .the expectation operators in eq .( [ eq : mseddczf0 ] ) are with respect to and .we have : =\sigma_x^2\frac{\left[|h|^2\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}-1\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2\right]+\frac{\sigma_w^2}{\sigma_x^2}}{|h|^2\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2}.\end{aligned}\ ] ] the numerator of the gradient of the above expression with respect to discarding the outer is given by the following expression is not affected by these operations . ] : \left[|h|^2\left(\varphi-1\right)^{*}{\mbox{\boldmath }_{\rm tr}^{}}+\sigma_w^2{\mbox{\boldmath }}\right]\nonumber\\ & & -\left[|h|^2\varphi^{*}{\mbox{\boldmath }_{\rmtr}^{}}+\sigma_w^2{\mbox{\boldmath }}\right]\left[\frac{\sigma_w^2}{\sigma_x^2}+|h|^2\left|\varphi-1\right|^2+\sigma_w^2\|{\mbox{\boldmath }}\|^2\right],\nonumber\\ \label{eq : ddczfnom}\end{aligned}\ ] ] where . setting , we obtain : \neq { \mbox{\boldmath }}. \label{eq : nommvue}\ ] ] note that no choice of will zero this expression for any .therefore , the mvu is not an optimal channel estimator in this case .we can state this result more formally : [ thrm:1 ] the mvu estimator is _ not _ an optimal channel estimator for the task of minimizing ] . equating ( [ eq : ddczfnom ] ) to and taking the inner product of both sides with , we obtain the following necessary condition that every optimal channel estimating filter must satisfy given the training sequence : latexmath:[\[{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}=\left(1+\frac{\sigma_w^2}{\sigma_x^2 that satisfies this condition is . clearly , ( [ eq : optfzf ] ) is sufficient for ( [ eq : ddczfnom ] ) to become zerohowever , ( [ eq : optfzf ] ) has another problem , namely that the optimal solution depends on the unknown channel . in order to deal with the dependence of the optimal estimator on the unknown channel, we will resort to a stochastic approach .we will assume a _ noninformative _ prior distribution for the unknown channel .if the real and imaginary parts of the channel are considered bounded in the intervals and then the receiver can treat them as independent random variables uniformly distributed on and , respectively .the ] , where ] . 
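the zeroth - order metric used in this section can also be estimated empirically, which makes it easy to compare the mvu estimate with scaled ( biased ) alternatives of the kind discussed above; in the sketch below the function name and the monte carlo set - up are ours, and the estimator argument can be, for instance, the mvu estimator of the previous sketch or a scaled version of it.

```python
import numpy as np

def zeroth_order_mse(estimator, h, x_tr, sigma_x2, sigma_w2, n_mc=100000, seed=0):
    """Monte Carlo evaluation of the zeroth-order ZF symbol-estimate MSE
    sigma_x^2 * E|h_hat - h|^2 / E|h_hat|^2  +  sigma_w^2 / E|h_hat|^2
    for a fixed (deterministic) channel h and a given estimator(y_tr, x_tr)."""
    rng = np.random.default_rng(seed)
    n_tr = len(x_tr)
    w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((n_mc, n_tr))
                                 + 1j * rng.standard_normal((n_mc, n_tr)))
    y = h * x_tr + w                                    # n_mc independent training blocks
    h_hat = np.array([estimator(y[i], x_tr) for i in range(n_mc)])
    return (sigma_x2 * np.mean(abs(h_hat - h) ** 2) + sigma_w2) / np.mean(abs(h_hat) ** 2)
```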
in this case, the actual prior statistics of the channel are known .the zeroth order symbol estimate mse is given by = \sigma_x^2\frac{\left[e[|h|^2]\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}-1\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2\right]+\frac{\sigma_w^2}{\sigma_x^2}}{e[|h|^2]\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2}.\ ] ] differentiating this expression with respect to , we get the numerator of the gradient which is given by ( [ eq : ddczfnom ] ) , but with replaced by ] , when the prior channel distribution is known.[thrm:2 ] the optimal channel estimator satisfies ( [ eq : nczf ] ) , ( [ eq : optfzf ] ) and ( [ eq : zf - mvue - opt ] ) , but with replaced by ] .we then have : =\frac{\frac{\sigma_x^2\sigma_w^2}{\|{\mbox{\boldmath }_{tr}^{}}\|^2}+\sigma_w^2}{e[|h|^2]+\frac{\sigma_w^2}{\|{\mbox{\boldmath }_{tr}^{}}\|^2}},\ ] ] which only depends on .furthermore , setting , it follows that /d\theta<0 ] is minimized when , which is intuitively appealing .therefore , any with energy equal to is an equally good training vector for the mvu estimator .thus , for the same , the estimator will be better than the mvu .similar conclusions can be reached for the mmse estimator , as well .we now examine the zeroth order excess mse in the case of the zf equalizer . in this case, we have : = \frac{|h|^2\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}-1\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2}{|h|^2\left|{\mbox{\scriptsize f}}^{h}{\mbox{\scriptsize x}}\right|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2}\left(\sigma_x^2+\frac{\sigma_w^2}{|h|^2}\right)\ ] ] the numerator of the gradient of the above expression with respect to is given by the following expression : \left[|h|^2\left(\varphi-1\right)^{*}{\mbox{\boldmath }_{\rm tr}^{}}+\sigma_w^2{\mbox{\boldmath }}\right]\nonumber\\ & & -\left[|h|^2\varphi^{*}{\mbox{\boldmath }_{\rmtr}^{}}+\sigma_w^2{\mbox{\boldmath }}\right]\left[|h|^2\left|\varphi-1\right|^2+\sigma_w^2\|{\mbox{\boldmath }}\|^2\right]\nonumber\\ \label{eq : ezfnom}\end{aligned}\ ] ] setting , one can easily check that the above expression becomes zero .therefore : the mvu _ is _ an optimal channel estimator for the task of minimizing ] depends on the unknown channel , the optimal channel estimator does not in this case . in this case , the prior statistics of the channel are known .the zeroth order excess mse is given by : &=&\frac{\left|\varphi-1\right|^2(e[|h|^4]\sigma_x^2+e[|h|^2]\sigma_w^2)}{e[|h|^4]|\varphi|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2 e[|h|^2]}\nonumber\\ & & + \frac{\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2(e[|h|^2]\sigma_x^2+\sigma_w^2)}{e[|h|^4]|\varphi|^2+\sigma_w^2\left\|{\mbox{\scriptsize f}}\right\|^2 e[|h|^2]}\end{aligned}\ ] ] differentiating this expression w.r.t . andsetting we zero the gradient . therefore : the mvu _ is _ an optimal channel estimator for the task of minimizing f x ] and variance .also , =|h|^2\left|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}-1\right|^2+\sigma_w^2\|f\|^2 ] . for the power of the noise component, we have : =e\left[\left|\frac{w(n)}{\hat{h}}\right|^2\right]+e\left[\left|\frac{\epsilon}{\hat{h}}\right|^2\right]\sigma_x^2\label{eq : noisepbpsk}\ ] ] here , we face again the infinite moment problem . 
using again similar arguments as in the appendix for approximating ] when , we define the corresponding zeroth order version of { \mbox{\boldmath }} { \mbox{\boldmath }_{\rm tr}^{}} { \mbox{\boldmath }} { \mbox{\boldmath }} { \mbox{\boldmath }_{\rm tr}^{}} { \mbox{\boldmath }} { \mbox{\boldmath }} { \mbox{\boldmath }_{\rm tr}^{}} { \mbox{\boldmath }} { \mbox{\boldmath }} { \mbox{\boldmath }_{\rm tr}^{}} { \mbox{\boldmath }} { \mbox{\boldmath }} { \mbox{\boldmath }_{\rm tr}^{}} { \mbox{\boldmath }} ] is minimized .therefore , the results of subsection [ subsec : dmsedczf ] apply : the mvu estimator is _ not _ an optimal channel estimator for the task of minimizing ] with respect to _ any _ given channel distribution .we can then make the following statement : the mvu estimator is _ not _ an optimal channel estimator for the task of minimizing the average ] , possibly equal to .the average { \mbox{\boldmath }} 0 { \mbox{\boldmath }} 0 ] for every value of .finally , the numerator of } 0 ] .this concludes the proof .if we assume that the prior distribution of is known , then instead of the mvu , one could use the mmse channel estimator . plugging into the negative of ( [ eq : ddczfnom ] ), one can obtain that }\right|_{{\mbox{\scriptsize f}}={\mbox{\scriptsize f}}}\neq { \mbox{\boldmath }}\label{eq : snrnommmse} ] , when using any of the well - known digital modulations in a flat - fading awgn channel .[ thrm:8 ] the result follows along the same lines as in proposition [ thrm:7 ] .the problems of determining the optimal channel estimator for the task of minimizing ] was already solved in subsections [ subsec : dmsedczf-1 ] and [ subsec : dmserczf ] , respectively . in the case of}_0 ] , and we have already assumed high snr , therefore high ] and ] .the right hand side function is concave with respect to >0 ] .the right hand side is minimized when this last zeroth order approximation is minimized .thus , the estimators derived in subsections [ subsec : dmsedczf-1 ] and [ subsec : dmserczf ] are optimal for the task of minimizing }_0 ] ._ remark _ : although , we have shown that the mvu and mmse estimators are not optimal for the task of minimizing the zeroth order probability of error , we will see in the simulation section that their _ actual _ probability of error performance is almost identical with that of the optimal estimators for the zeroth order probability of error .this is due to two facts : first , the zeroth order probability of error is a variation of the actual probability of error and second , in practice the difference in the channel estimates must be large enough to give rise to a notable difference in the probability of error .nevertheless , we conjecture that such a difference may be more clear in the case of multiple input multiple output ( mimo ) systems if tight approximations of the error probability functions are used to derive the corresponding channel estimators .[ cols="^ " , ] in this section we present numerical results to verify our analysis . in all figures , and qpsk modulation is assumed .the snr during training highlights how good the channel estimate is .the parameter has been empirically selected to be . all schemes in figs .[ fig : zfmse]-[fig : zfexmse_biased ] use ( [ eq : wellbehest ] ) for the same . in figs .[ fig : zfmse]-[fig : zfpe ] , ] , i.e. , the real and imaginary parts of are assumed i.i.d . following a uniform distribution in ] equals and , respectively . 
in fig .[ fig : zfmse_zeroth ] , ] .the mvu is the best estimator as proved .this is another example contradicting what one would expect and verifying the motivation of this paper .furthermore , fig .[ fig : zfpe_zeroth ] shows the performance of all schemes in the case of an approximation to the error probability equal to }\right) ] and , respectively .they verify that the zeroth order metrics used in this paper are good approximations in terms of indicating the structure of uniformly better estimators than the mvu and mmse .nevertheless , the zeroth order metrics can not really determine the best possible bias with respect to the mvu estimator that the estimators in this paper must have in order to yield the best possible performance against the _ true _ performance metrics .the bias terms are only optimal with respect to the zeroth order metrics .in this paper , application - oriented channel estimator selection has been compared with common channel estimators such as the mvu and mmse estimators .we have shown that the application - oriented selection is the right way to choose estimators in practice .we have verified this observation based on three different performance metrics of interest , namely , the symbol estimate mse , the excess symbol estimate mse and the error probability .this section proposes a simplification of the metric for the estimator given in ( [ eq : wellbehest ] ) with a fixed . due to the gaussianity of , for any ( infinite moment problem ) . using ( [ eq : wellbehest ] ), the corresponding mean square error becomes : {\rm reg}={\rm pr}\left\{|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|>\lambda\right\}\cdot\nonumber\\ & & e\left[\sigma_x^2\left|1-\frac{h}{{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}}\right|^2+\frac{\sigma_w^2}{|{\mbox{\boldmath}_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|^2 } ; |{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|>\lambda\right]\nonumber\\ & & + { \rm pr}\left\{|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath}_{\rm tr}^{}}|\leq \lambda\right\ } \cdot\nonumber\\ & & e\left[\frac{\sigma_x^2}{\lambda^2}\left|\lambda \frac{{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}}{|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|}-h\right|^2+\frac{\sigma_w^2}{\lambda^2 } ; where denotes conditioning and `` reg '' signifies the use of the regularized channel estimator in ( [ eq : wellbehest ] ) . 
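the regularized ("reg", well-behaved) estimator of eq. ([eq:wellbehest]) is not written out in this excerpt; judging from the displayed conditional mse expression, it keeps the mvu estimate whenever its magnitude exceeds lambda and otherwise replaces it by a value of magnitude lambda with the same phase. the following minimal sketch adopts that reading, with illustrative parameter values, and shows empirically why the regularization matters: the sample mean of the unregularized zf symbol-estimate mse is dominated by rare near-zero estimates and does not settle down (the infinite moment problem), whereas the regularized version converges.

```python
import numpy as np

rng = np.random.default_rng(2)

def regularized_estimate(h_mvu, lam):
    """keep the mvu estimate if |h_mvu| > lam, otherwise project it onto the
    circle of radius lam while preserving its phase (assumed reading of the
    estimator in eq. (wellbehest))."""
    mag = abs(h_mvu)
    return h_mvu if mag > lam else lam * h_mvu / mag

sigma_x, sigma_w, lam = 1.0, 0.5, 0.05
n_tr, trials = 4, 200000
raw, reg = [], []
for _ in range(trials):
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    # mvu estimate error has variance sigma_w^2 / n_tr for unit-power training symbols
    h_mvu = h + sigma_w * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2 * n_tr)
    for store, h_hat in ((raw, h_mvu), (reg, regularized_estimate(h_mvu, lam))):
        # per-realization zf symbol-estimate mse conditioned on (h, h_hat)
        store.append(sigma_x ** 2 * abs(h / h_hat - 1) ** 2 + sigma_w ** 2 / abs(h_hat) ** 2)

print("sample mean, raw zf inverse        :", np.mean(raw))
print("sample mean, regularized estimator :", np.mean(reg))
```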
to simplify this expression, we observe that , since by the mean value theorem this probability is equal to the area of the region , which is of order , multiplied by some value of the probability density function of in that region , which is of order .in addition , =\frac{\sigma_x^2}{\lambda^2}|h|^2\nonumber\\ & & + \frac{\sigma_w^2}{\lambda^2}-2\frac{\sigma_x^2}{\lambda}\re\left\ { h^{*}\frac{{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}}{|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|}\right\}\nonumber\end{aligned}\ ] ] if in addition the snr during training is sufficiently high and the probability mass of is concentrated around , then it can be shown that \approx \nonumber\\ & & \frac{\sigma_x^2e[|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}-h|^2 ; |{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|>\lambda]+\sigma_w^2}{e[|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|^2 ; |{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|>\lambda]}.\end{aligned}\ ] ] the same holds even if is a biased estimator of at high training snr and tends to concentrate around a value bounded away from ( and of course from ) . to show the last claim , we set and .since , it also holds that >\lambda^2 ] and ] and ] , \rightarrow e^2[y] ] . for the last case ,notice that -e[x]e[y]\right|&=\left|e\left[(x - e[x])(y - e[y])\right]\right|\nonumber\\&\leq e\left[\left|x - e[x]\right|\left|y - e[y]\right|\right]\nonumber\\ & \leq \sqrt{e\left[\left|x - e[x]\right|^2\right]e\left[\left|y - e[y]\right|^2\right]},\end{aligned}\ ] ] where the last inequality follows again from the cauchy - schwarz inequality . by the mean square convergence of to ] the right hand side of ( [ eq : cauchyineq2 ] ) tends to .therefore , the right hand side of ( [ eq : cauchyineq1 ] ) tends to .furthermore , under the high snr assumption the conditional expectations can be approximated by their unconditional ones , since for a sufficiently small their difference is due to an event of probability .therefore , \approx\nonumber\\ & & \left\{\frac{\sigma_x^2e[|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}-h|^2]+\sigma_w^2}{e[|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|^2]}\right\}+o(\lambda^2).\end{aligned}\ ] ] combining all the above results yields +\sigma_w^2}{e[|{\mbox{\boldmath }_{}^{h}}{\mbox{\boldmath }_{\rm tr}^{}}|^2]}\right\}+o(1).\ ] ] the term is not negligible but for sufficiently small its dependence on is insignificant .hence , for a sufficiently small and a sufficiently high snr during training , minimizing is equivalent to minimizing the following approximation =\frac{e\left[|\hat{h}-h|^2\right]}{e\left[|\hat{h}|^2\right]}\sigma_x^2+\sigma_w^2\frac{1}{e\left[|\hat{h}|^2\right]}.\label{eq : mseddczf0_app}\ ] ] 99 f. f. abari , f. k. sharifabad , o. edfors , `` low complexity channel estimation for lte in fast fading environments for implementation on multi - standard platforms '' , _ proc .vtc fall 2010 _ , ottawa , canada , sept . 69 , 2010 . j. g. andrews , a. ghosh , and r. muhamed , _ fundamentals of wimax : understanding broadband wireless networking , _ prentice - hall , 2007 .x. bombois , g. scorletti , m. gevers , p. m. j. van den hof , r. hildebrand , least costly identification experiment for control , " _ automatica _ , vol . 42 , no . 10 , pp . 16511662 , 2006 .chen , w - h .he , h - s .chen , y. 
lee , `` mode detection , synchronization , and channel estimation for dvb - t ofdm receiver '' , _ proc .globecom 2003 _ , san francisco , usa , december 15 , 2003 . j. m. cioffi , _ course readers for ee379a , ee379b , ee379c , ee479 _ , available at :http://www.stanford.edu / group / cioffi/. y. c. eldar , _ rethinking biased estimation : improving maximum likelihood and the cramer - rao bound _ , foundations and trends in signal processing , vol . 14 , pp . 305449 , 2008 . j. fan , g. ye li , q. yin , b. peng , x. zhu , `` joint user pairing and resource allocation for lte uplink transmission '' , _ ieee trans .on wireless communications _ , vol .11 , no . 8 , pp .28382847 , aug .o. n. gharehshiran , a. attar , v. krishnamurthy , `` collaborative sub - channel allocation in cognitive lte femto - cells : a cooperative game - theoretic approach '' , _ ieee trans . on communications _, accepted for publication , doi : 10.1109/tcomm.2012.100312.110480 . m. k. hati , t. k. bhattacharyya , `` digital video broadcast services to handheld devices and a simplified dvb - h receiver subsystem '' , _ proc .ncc 2012 _ , indian institute of technology kharagpur , india , february 35 , 2012 .h. hjalmarsson , system identification of complex and structured systems , " _ plenary address european control conference / european journal of control _ , vol .15 , no . 4 , pp . 275310 , 2009 . s. hu, j. xie , f. yang , `` improved pilot - aided channel estimation in lte uplink '' , _ proc .iccp 2011 _ , chengdu , china , october 2123 , 2011 .j. huber , _ robust statistics _ , john wiley sons , 2005 .a. goldsmith , _ wireless communications _ , cambridge university press , 2005 .h. jansson , h. hjalmarsson , input design via lmis admitting frequency - wise model specifications in confidence regions , " _ ieee trans .50 , no .10 , pp . 15341549 , 2005 .d. katselis , c. r. rojas , h. hjalmarsson , m.bengtsson , application - oriented finite sample experiment design : a semidefinite relaxation approach , " _ proc .sysid-2012 , _ brussels , belgium , july 2012 .s. m. kay , _ fundamentals of statistical signal processing , volume i : estimation theory _ , prentice hall , 1993 . s. m. kay , _ fundamentals of statistical signal processing , volume ii : detection theory _ , prentice hall , 1998 .d. lee , m. choi , s. choi , `` channel estimation and interference cancellation of feedback interference for docr in dvb - t system '' , _ ieee trans . on broadcasting _ ,1 , pp . 8797 , march 2012 .j. c. lee , d. s. han , s. park , `` channel estimation based on path separation for dvb - t in long delay situations '' , _ ieee trans . on consumer electronics _ , vol .2 , pp . 316321 , may 2009 .y. liu , s. sezginer , `` iterative compensated mmse channel estimation in lte systems '' , _ proc .icc 2012 _ , ottawa , canada , june 1015 , 2012 .d. lpez - prez , x. chu and j. zhang , `` dynamic downlink frequency and power allocation in ofdma cellular networks '' , _ ieee trans .on communications _ , vol .29042914 , oct .j. g. proakis , _ digital communications _, 3rd edition , mcgraw - hill , 1995 .d. schafhuber , g. matz , f. hlawatsch , p. loubaton , `` mmse estimation of time - varying channels for dvb - t systems with strong co - channel interference '' , _ proc .eusipco 2002 _ , toulouse , france , sept .p. siebert , `` dvb : developing global television standards for today and tomorrow '' , _ proc .itu wt 2011 _ , geneva , switzerland , 2427 october , 2011 . i. l. j. da silva , a. l. f. de almeida , f. r. p. cavalcanti , r. 
baldemair , s. falahati , `` improved data - aided channel estimation in lte pucch using a tensor modeling approach '' , _ proc .icc 2010 _ , cape town , south africa , may 2327 , 2010 .i. siomina , d. yuan , `` analysis of cell load coupling for lte network planning and optimization '' , _ ieee trans . on wireless communications _ ,11 , no . 6 , pp .22872297 , june 2012 .l. yang , g. ren , b. yang , z. qiu , `` fast time - varying channel estimation technique for lte uplink in hst environment '' , _ ieee trans . on vehicular technology _ ,9 , pp . 40094019 , nov . 2012 .
|
the fundamental task of a digital receiver is to decide the transmitted symbols in the best possible way , i.e. , with respect to an appropriately defined performance metric . examples of usual performance metrics are the probability of error and the mean square error ( mse ) of a symbol estimator . in a coherent receiver , the symbol decisions are made based on the use of a channel estimate . this paper focuses on examining the optimality of usual estimators such as the minimum variance unbiased ( mvu ) and the minimum mean square error ( mmse ) estimators for these metrics and on proposing better estimators whenever it is necessary . for illustration purposes , this study is performed on a toy channel model , namely a single input single output ( siso ) flat fading channel with additive white gaussian noise ( awgn ) . in this way , this paper highlights the design dependencies of channel estimators on target performance metrics . index terms : minimum mean square error ( mmse ) , minimum variance unbiased ( mvu ) , probability of error , single input single output ( siso ) .
|
mobile ad hoc networks often use the ad - hoc on - demand distance - vector ( aodv ) routing protocol , which discovers and maintains multihop paths between source mobiles and destination mobiles . however , these paths are susceptible to disruption due to changes in the fading , terrain , and interference , and hence the control overhead requirements are high . an alternative class of routing protocols that do not maintain established routes between mobiles are the geographic routing protocols .these protocols require only a limited amount of topology storage by mobiles and provide flexibility in the accommodation of the dynamic behavior of ad hoc networks , . among the many varieties of geographic routing protocols ,four representative ones are evaluated in this paper : greedy forwarding and known nearest - neighbor routing , which use beacons , and contention - based nearest - neighbor and maximum - progress routing , which are beaconless .the tradeoffs among the average path reliabilities , average conditional delays , average conditional number of hops , and area spectral efficiencies and the effects of various parameters are illustrated for large ad hoc networks with randomly placed mobiles .. a comparison is made with the popular aodv routing protocol to gain perspective about the advantages and disadvantages of geographic routing .this paper uses a dual method of closed - form analysis and simple simulation to provide a realistic performance evaluation of the five routing protocols .the method performs spatial averaging over network realizations by exploiting the deterministic geometry of rather than the conventional stochastic geometry , thereby eliminating many unrealistic restrictions and assumptions , as explained in .the method has great generality and can be applied to the performance evaluation of most other routing protocols .the network comprises mobiles in an arbitrary two- or three - dimensional region . the variable represents both the mobile and its location , and is the distance from the mobile to the mobile .mobile serves as the reference transmitter or message source , and mobile serves as the reference receiver or message destination .the other mobiles are potentially relays or sources of interference .each mobile uses a single omnidirectional antenna . _ exclusion zones _ surrounding the mobiles , which ensure a minimum physical separation between two mobiles , have radii set equal to the mobiles are uniformly distributed throughout the network area outside the exclusion zones , according to a _ uniform clustering _ model .the mobiles of the network transmit asynchronous quadriphase direct - sequence signals .for such a network , interference is reduced after despreading by the factor , where is the _ processing gain _ or _ spreading factor _ , and is the chip factor , which reduces interference due to its asynchronism .let denote the received power from at the reference distance before despreading when fading and shadowing are absent .after the despreading , the power of s signal at the mobile is where for the desired signal , for an interferer , is the power gain due to fading , is a shadowing factor , and is a path - loss function .the path - loss function is expressed as the power law where is the path - loss exponent , is sufficiently far that the signals are in the far field , and the \{ are independent with unit - mean but are not necessarily identically distributed ; i.e. 
, the channels from the different to may undergo fading with different distributions . for analytical tractability and close agreement with measured fading statistics , nakagami fading is assumed , and , where is nakagami with parameter .it is assumed that the \{ remain fixed for the duration of a time interval but vary independently from interval to interval ( block fading ) . in the presence of shadowing with a lognormal distribution ,the are independent zero - mean gaussian random variables with variance .for ease of exposition , it is assumed that the shadowing variance is the same for the entire network , but the results may be easily generalized to allow for different shadowing variances over parts of the network . in the absence of shadowing , . while the fading may change from one transmission to the next , the shadowing remains fixed for the entire session . the _ service probability _ is defined as the probability that mobile can serve as a relay along a path from a source to a destination , and is the probability that is a potential interferer .a mobile may not be able to serve as a relay in a path from to because it is already receiving a transmission , is already serving as a relay in another path , is transmitting , or is otherwise unavailable with _ interference probability _ , a potentially interfering transmits in the same time interval as the desired signal .the can be used to model the servicing of other streams , controlled silence , or failed link transmissions and the resulting retransmission attempts .mobiles and do not cause interference . when the mobile serves as a potential relay , we set let denote the noise power , and the indicator denote a bernoulli random variable with probability =p_{i}$ ] .since the despreading does not significantly affect the desired - signal power , ( [ 1 ] ) and ( [ 2 ] ) imply that the instantaneous signal - to - interference - and - noise ratio ( sinr ) at the mobile for a desired signal from mobile is where is the normalized power of at , and is the snr when is at unit distance from and fading and shadowing are absent .the _ outage probability _ quantifies the likelihood that the interference , shadowing , fading , and noise will be too severe for useful communications .outage probability is defined with respect to an sinr threshold , which represents the minimum sinr required for reliable reception . in general , the value of depends on the choice of coding and modulation .an _ outage _ occurs when the sinr falls below . in ,closed - form expressions are provided for the outage probability conditioned on the particular network geometry and shadowing factors .let represent the set of normalized powers at .conditioning on , the _ outage probability _ of the link from to receiver is .\ ] ] the conditioning enables the calculation of the outage probability for any specific network geometry , which can not be done using tools based on stochastic geometry .the closed - form equations for are used in the subsequent performance evaluations of the routing protocols .the three routing protocols that are considered are reactive or on - demand protocols that only seek routes when needed and do not require mobiles to store details about large portions of the network .the aodv protocol relies on flooding to seek the _ fewest - hops path _ during its _ path - discovery phase_. 
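before continuing with the routing protocols, the link model just described can be made concrete with a short monte carlo sketch that estimates the conditional outage probability pr[sinr < beta] of a single link, using unit-mean nakagami-m power gains (gamma distributed), lognormal shadowing, a power-law path loss and a single effective despreading factor. this is not the closed-form expression referred to above; all numerical values, and the way the processing gain and chip factor are lumped into one despreading factor, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def outage_probability(d0, d_int, p_active, beta=1.0, snr_db=10.0, alpha=3.5,
                       m=1.0, shadow_std_db=8.0, despread=32.0, trials=50000):
    """monte carlo estimate of pr[sinr < beta] for a desired link of length d0,
    with potential interferers at distances d_int, each active with
    probability p_active."""
    noise = 10 ** (-snr_db / 10)                  # noise power for unit tx power at d = 1
    n_i = len(d_int)
    count = 0
    for _ in range(trials):
        g0 = rng.gamma(m, 1.0 / m)                # unit-mean nakagami-m power gain
        gi = rng.gamma(m, 1.0 / m, size=n_i)
        s0 = 10 ** (shadow_std_db * rng.standard_normal() / 10)
        si = 10 ** (shadow_std_db * rng.standard_normal(n_i) / 10)
        active = rng.random(n_i) < p_active       # bernoulli interference indicators
        signal = g0 * s0 * d0 ** (-alpha)
        interference = np.sum(active * gi * si * d_int ** (-alpha)) / despread
        count += (signal / (interference + noise)) < beta
    return count / trials

print(outage_probability(d0=0.2, d_int=np.array([0.3, 0.5, 0.8]), p_active=0.3))
```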
the flooding diffuses request packets simultaneously over multiple routes for the purpose of discovering a successful route to the destination despite link failures along some potential paths .when the first request packet reaches the destination , backtracking by an acknowledgement packet establishes the route the request packet followed as the single static fewest - hops path for subsequent message packets during a _ message - delivery phase_. subsequent receptions of request packets by the destination are ignored .there is a high overhead cost in establishing the fewest - hops path during the path - discovery phase , and the fewest- hops path must be used for message delivery before changes in the channel conditions cause an outage of one or more of its links .geographic protocols limit information - sharing costs by minimizing the reliance of mobiles on topology information , .since geographic routing protocols make routing decisions on a hop - by - hop basis , they do not require a flooding process for path discovery .two geographic routing protocols are examined : the _ greedy forwarding protocol _ and the _ maximum progress protocol_. both geographic routing protocols assume that each mobile knows its physical location and the direction towards the destination .the greedy forwarding protocol relies on _ beacons _ , which are mobiles that periodically broadcast information about their locations .a source forwards a packet to a _relay _ that is selected from a set of neighboring beacons that are modeled as the set of active mobiles that lie within a _ transmission range _ of radius .the next link in the path from source to destination is the link to the relay within the transmission range that shortens the remaining distance to the most .there is no path - discovery phase because the relays have the geographic information necessary to route the messages to the destination .the maximum progress protocol is a contention - based protocol that does not rely on beacons but comprises alternating _ path - discovery phases _ and _ message - delivery phases_. during a path - discovery phase , a single link to a single relay is discovered . during the following message - delivery phase , a packet is sent to that relay , and then the alternating phases resume until the destination is reached . in a path - discovery phase , the next relay in a path to the destination is dynamically selected at each hop of each packet and depends on the local configuration of available relays . a source or relay broadcasts _ request - to - send _ ( rts ) messages to neighboring mobiles that potentially might serve as the next relay along the path to the destination .the rts message includes the location of the transmitting source or previous relay . upon receiving the rts , a neighboring mobile initiates a timer that has an expiration time proportional to the remaining distance to the destination .when the timer reaches its expiration time , the mobile sends a _ clear - to - send _ ( cts ) message as an acknowledgement packet to the source or previous relay .the earliest arriving cts message causes the source or previous relay to launch the message - delivery phase by sending message packets to the mobile that sent that cts message , and all other candidate mobiles receiving that cts message cease operation of their timers .for the analysis and simulation , we draw a random realization of the network ( topology ) using the uniform clustering distribution of mobiles. 
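the topology draw mentioned in the last sentence is spelled out in the next paragraph; a minimal sketch of that placement procedure (mobiles uniform over a disc, with redraws enforcing the exclusion zones) is given below. the network radius, exclusion radius and node count are illustrative choices, not the values used later in the text.

```python
import numpy as np

rng = np.random.default_rng(4)

def uniform_clustering(n, radius=1.0, r_excl=0.05, seed_points=(), max_tries=10000):
    """place n additional mobiles uniformly in a disc of the given radius,
    redrawing any location that falls inside the exclusion zone of an already
    placed mobile (including pre-placed seed points such as the source and
    the destination)."""
    nodes = [np.asarray(p, dtype=float) for p in seed_points]
    while len(nodes) < n + len(seed_points):
        for _ in range(max_tries):
            r = radius * np.sqrt(rng.random())     # uniform over the disc area
            phi = 2 * np.pi * rng.random()
            p = np.array([r * np.cos(phi), r * np.sin(phi)])
            if all(np.linalg.norm(p - q) >= r_excl for q in nodes):
                nodes.append(p)
                break
        else:
            raise RuntimeError("could not satisfy the exclusion zones; reduce r_excl")
    return np.vstack(nodes)

# source at the origin, destination at distance 0.75 from it, 48 other mobiles
nodes = uniform_clustering(48, seed_points=[(0.0, 0.0), (0.75, 0.0)])
```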
the source and destination mobiles are placed , and then , one by one , the location of each remaining is drawn according to a uniform distribution within the network region . however , if an falls within the exclusion zone of a previously placed mobile , then it has a new random location assigned to it as many times as necessary until it falls outside all exclusion zones . using the service probabilities , the set of potential relays is randomly selected for each simulation trial .the routing protocols use a _ distance criterion _ to exclude a link from mobile to mobile as a link in one of the possible paths from to if .these exclusions ensure that each possible path has links that always reduce the remaining distance to the destination .all links connected to mobiles that can not serve as relays are excluded as links in possible paths from to links that have not been excluded are called _ eligible _ links .the eligible links are used to determine the greedy - forwarding path from to during its message - delivery phase .there is no path - discovery phase . if no path from to be found or if the message delivery fails , a _ routing failure _ is recorded .a _ candidate link _ is an eligible link that does not experience an outage during the path - discovery phase . to identify the candidate links within each topology, we apply our analysis to determine the outage probability for each eligible link .a monte carlo simulation decides whether an eligible link is in an outage by sampling a bernoulli random variable with the corresponding outage probability .a links that is not in an outage is called a _ candidate link_. for aodv ,the _ candidate paths _ from to are paths that can be formed by using candidate links .the candidate path with the fewest hops from to is selected as the _ fewest - hops path_. this path is determined by using the _ djikstra algorithm _ with the unit cost of each candidate link . if two or more candidate paths have the fewest hops , the fewest - hops path is randomly selected from among them. if there is no set of candidate links that allow a path from to then a routing failure occurs .if a fewest - hops path exists , then a monte carlo simulation is used to determine whether the acknowledgement packet traversing the path in the reverse direction is successful .if it is not or if the message delivery over the fewest - hops path fails , then a routing failure occurs a _ two - way candidate link _ is an eligible link that does not experience an outage in either the forward or the reverse direction during the path - discovery phase .a monte carlo simulation is used to determine the two - way candidate links . for the maximum progress protocol , the two - way candidate link starting with source with a terminating relay that minimizes the remaining distance to destination is selected as the first link in the maximum - progress path . the link among the two - way candidate links that minimizes the remaining distance and is connected to the relay at the end of the previously selected linkis added successively until the destination is reached and hence the maximum - progress path has been determined . after each relayis selected , a message packet is sent in the forward direction to the selected relay . 
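the two path-selection rules just described can be sketched as follows: a greedy-forwarding walk over the eligible links (forward progress within the transmission range), and a fewest-hops search over the candidate links, for which breadth-first search coincides with dijkstra's algorithm because every link has unit cost. how the available relay set and the candidate links are produced (service probabilities, distance criterion, bernoulli outage draws) is assumed to have been done by the caller, as in the procedure above.

```python
import numpy as np
from collections import deque

def greedy_forwarding_path(nodes, src, dst, available, r_t):
    """greedy forwarding: at each hop choose, among the available relays (and
    the destination) within transmission range r_t that reduce the remaining
    distance to the destination, the one giving the largest reduction."""
    path, cur = [src], src
    while cur != dst:
        d_cur = np.linalg.norm(nodes[cur] - nodes[dst])
        best, best_d = None, d_cur
        for j in set(available) | {dst}:
            if j == cur or np.linalg.norm(nodes[cur] - nodes[j]) > r_t:
                continue
            d_j = np.linalg.norm(nodes[j] - nodes[dst])
            if d_j < best_d:
                best, best_d = j, d_j
        if best is None:
            return None                     # routing failure: no forward progress
        path.append(best)
        cur = best
    return path

def fewest_hops_path(n_nodes, candidate_links, src, dst):
    """fewest-hops path over the (directed) candidate links; unit link costs,
    so bfs and dijkstra give the same result."""
    adj = [[] for _ in range(n_nodes)]
    for i, j in candidate_links:
        adj[i].append(j)
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if dst not in prev:
        return None                         # routing failure
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]
```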
if no maximum - progress path from to can be found or if a message delivery fails , a routing failure is recorded .the cts message transmitted by the maximum progress protocol during its path - discovery phase establishes guard zones .potentially interfering mobiles within the guard zones are silenced during the message - delivery phase of the maximum progress protocol .it is assumed that the guard zones have sufficiently small radii that the cts message is correctly decoded .any potentially interfering mobile that lies in one of the guard zones surrounding the two mobiles at the ends of each link of a selected path is silenced by setting its during message delivery .let denote the maximum number of transmission attempts over a link of the path . during the path - discovery phases , . during the message - delivery phases , because message retransmissions over an established link are feasible .for each eligible or candidate link , a bernoulli random variable with failure probability is repeatedly drawn until there are either failures or success after transmission attempts , where .the _ delay of link _ of the selected path is , where is the _ delay of a transmission over a link _ , and is the _ excess delay _ caused by a retransmission .each network topology is used in simulation trials .the path delay of a path from to for network topology and simulation trial is the sum of the link delays in the path during the message - delivery phase : \ ] ] where is the set of links constituting the path .if there are transmission failures for any link of the selected path , then a routing failure occurs . if there are routing failures for topology and simulation trials , then the _ probability of end - to - end success _ or _ path _ _ reliability _ within topology is let denote the set of trials with no routing failures .if the selected path for trial has links or hops , then among the set , the average conditional _ number of hops _ from to is let denote the link delay of packets during the path - discovery phase .the average conditional _ delay _ from to during the combined path - discovery and message - delivery phases is where for the greedy forwarding protocol , and for the maximum progress and aodv protocols let denote the network area and denote the density of the possible transmitters in the network .we define the _ normalized _ _ area spectral efficiency _ for the trials of topology as where the normalization is with respect to the bit rate or bits per channel use .the normalized area spectral efficiency is a measure of the end - to - end throughput in the network . after computing and for network topologies , we can average over the topologies to compute the _ topological averages : _ and host of network topologies and parameter values can be evaluated by the method described . here , we consider a representative example that illustrates the tradeoffs among the routing protocols .we consider a network occupying a circular region with normalized radius the source mobile is placed at the origin , and the destination mobile is placed a distance from it .times are normalized by setting .each transmitted power is equal .there are no retransmissions during the path - discovery phases , whereas during the message - delivery phases a _ distance - dependent fading _ model is assumed , where a signal originating at mobile arrives at mobile with a nakagami fading parameter that depends on the distance between the mobiles .we set where is the _ line - of - sight radius_. 
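given the outage probabilities of the links of a selected path, the delay and reliability bookkeeping described above can be sketched as follows; the area spectral efficiency is omitted here because its exact normalization is not reproduced in this excerpt. the retransmission limit, the per-link delay and the excess delay per retransmission are illustrative, and the convention that the first attempt costs one link delay and each retransmission adds one excess delay is an assumed reading of the stripped expression.

```python
import numpy as np

rng = np.random.default_rng(5)

def deliver_over_path(link_outage_probs, b_max=4, t_link=1.0, t_excess=1.0):
    """attempt message delivery over one selected path; each link is tried at
    most b_max times, each attempt failing independently with the link's
    outage probability.  returns (total delay, success flag)."""
    delay = 0.0
    for p_out in link_outage_probs:
        for attempt in range(1, b_max + 1):
            delay += t_link if attempt == 1 else t_excess
            if rng.random() >= p_out:
                break
        else:
            return delay, False            # b_max failures on this link
    return delay, True

def topology_statistics(trial_paths, b_max=4):
    """path reliability and the conditional mean delay / hop count over the
    simulation trials of one topology.  each entry of trial_paths is either
    None (routing failure) or the list of outage probabilities of the links
    of the path selected in that trial."""
    delays, hops, successes = [], [], 0
    for links in trial_paths:
        if links is None:
            continue
        d, ok = deliver_over_path(links, b_max)
        if ok:
            successes += 1
            delays.append(d)
            hops.append(len(links))
    k = len(trial_paths)
    return (successes / k,
            float(np.mean(delays)) if delays else float("nan"),
            float(np.mean(hops)) if hops else float("nan"))
```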
the distance - dependent - fading model characterizes the typical situation in which nearby mobiles most likely are in each other s line - of - sight , while mobiles farther away from each other are not .other fixed parameter values are , db , db , and the service and interference probabilities are assumed to have the same values for all mobiles so that and . unless otherwise stated , , , and when shadowing is present , it has a lognormal distribution with db . however , the transmitted packets encounter the same shadowing in both directions over the same link during both routing phases . fig . [ fig.1 ] and[ fig.2 ] display the average path reliabilities of the request packets and acknowledgement packets , respectively , for the complete selected paths during the path - discovery phases of the aodv and maximum progress ( mp ) protocols .figure 1 depicts the reliabilities both with and without shadowing as a function of the source - destination distance shadowing is assumed in fig .[ fig.2 ] and all subsequent figures . fig .[ fig.1 ] shows an initial decrease and then an increase in average path reliability as the source - destination distance increases .this variation occurs because at short distances , there are very few relays that provide forward progress , and often the only eligible or candidate link is the direct link from source to destination . as the distance increases , there are more eligible and candidate links , and hence the network benefits from the diversity .furthermore , as the destination approaches the edge of the network , the path benefits from a decrease in interference at the relays that are close to the destination .[ fig.1 ] shows that during the request stage , the aodv protocol provides the better path reliability because it constructs several partial paths before the complete path is determined . since the relays are already determined in fig .[ fig.2 ] , the maximum progress protocol shows only a mild improvement with increasing source - destination distance , and this can be attributed almost entirely to the edge effect .it is observed in fig .[ fig.2 ] that the aodv protocol has a relatively poor path reliability during the acknowledgement stage , which is due to the fact that a specified complete path must be traversed in the reverse direction , where the interference and fading may be much more severe .the maximum progress protocol does not encounter the same problem because the links in its paths are selected one - by - one with the elimination of links that do not provide acknowledgements .although both the shadowing and the path - loss exponent affect both the packets and the interference signals , the two figures indicate that the overall impact of more severe propagation conditions is detrimental for all distances .[ fig.3 ] displays the average path reliabilities for the message - delivery phases of the three protocols , assuming that the path - discovery phase , if used , has been successful .the figure illustrates the penalties incurred by the greedy forwarding ( gf ) protocol because of the absence of a path - discovery phase that eliminates links with excessive shadowing , interference , or fading and creates guard zones for the message - delivery phase . the figure illustrates the role of the transmission range in determining average path reliability for greedy forwarding protocols . 
as increases , the links in the complete path are longer and less reliable .however , this disadvantage is counterbalanced by the increased number of potential relays and the reduction in the average number of links in a complete path . fig .[ fig.4 ] shows the overall average path reliabilities for the combined path - discovery and message - delivery phases of all three routing protocols .the aodv protocol is the least reliable .the maximum progress protocol is much more reliable than the greedy forwarding protocol if is large , but is not as reliable if because of the relatively low reliability of its request packets .the average conditional delay the average conditional number of hops and the normalized area spectral efficiency for each routing protocol as a function of are displayed in fig .[ fig.5 ] , fig .[ fig.6 ] , and fig .[ fig.7 ] , respectively .the greedy forwarding protocol has the highest if is small , whereas the maximum progress protocol has the highest if is large .the reason is the rapid loss of reliability and increase in the average conditional delay of the greedy forwarding protocol when is large .this paper presents performance evaluations and comparisons of two geographic routing protocols and the popular aodv protocol .the trade - offs among the average path reliabilities , average conditional delays , average conditional number of hops , and area spectral efficiencies and the effects of various parameters have been shown for a typical ad hoc network . since acknowledgements are often lost due to the nonreciprocal interference and fading on the reverse paths , the aodv protocol has a relatively low path reliability , and its implementation is costly because it requires a flooding process . in terms of the examined performance measures , the greedy forwarding protocol is advantageous when the separation between the source and destination is small and the spreading factor is large , provided that the transmission range and the relay density are adequate .the maximum progress protocol is more resilient when the relay density is low and is advantageous when the separation between the source and destination is large .the general methodology of this paper can be used to provide a significantly improved analysis of multihop routing protocols in ad hoc networks .many unrealistic and improbable assumptions and restrictions of existing analyses can be discarded .k. z. ghafoor , k. a. bakar , j. lloret , r. h. khokhar , and k. c. lee , `` intelligent beaconless geographical forwarding for urban vehicular environments , '' _ wireless networks _ , vol .345 - 362 , apr . 2013 .h. elsawy , e. hossain , and m. haenggi , `` stochastic geometry for modeling , analysis , and design of multi - tier and cognitive cellular wireless networks : a survey , '' _ ieee commun .surveys tut .996 - 1019 , 3rd quarter , 2013 .
|
geographic routing protocols greatly reduce the requirements of topology storage and provide flexibility in the accommodation of the dynamic behavior of ad hoc networks . this paper presents performance evaluations and comparisons of two geographic routing protocols and the popular aodv protocol . the trade - offs among the average path reliabilities , average conditional delays , average conditional number of hops , and area spectral efficiencies and the effects of various parameters are illustrated for finite ad hoc networks with randomly placed mobiles . this paper uses a dual method of closed - form analysis and simple simulation that is applicable to most routing protocols and provides a much more realistic performance evaluation than has previously been possible . some features included in the new analysis are shadowing , exclusion and guard zones , and distance - dependent fading .
|
among many attempts to understand quantum theory axiomatically , an operationally natural approach has attracted increasing attention in the recent development of quantum information theory . by constructing a general framework of theories to include not only classical and quantum theories but also more general theories, one can reconsider the nature of quantum theory from outside , preferably with the operational and informational point of view .this also enables us to prepare for a ( possible ) post - quantum theory in the future .for instance , it is important to find conditions to achieve a secure key distribution in a general framework . among others ,the convexity or operational approach , or recently referred as general ( or generic ) probabilistic theories ( or models ) " , is considered to provide operationally the most general theory for probability .of course , both classical probability theory and quantum theory are included as typical examples of general probabilistic theories , but it is known that there exist other possible physical models for probability ( see an example in sec .iv b ) .although this approach has relatively long history , there are still many fundamental problems especially from the applicational and informational points of view to be left open. this may not be surprising if one recalls that quantum information theory has given new insights and provided attractive problems on the foundation and application of quantum mechanics .one of them is a state discrimination problem .the problem asks how well a given ensemble of states is distinguishable .it has been one of the most important questions in quantum information theory , and there are various formulations of the problem depending on measures to characterize the quality of discrimination .the property that there is no measurement perfectly distinguishes non - orthogonal pure states plays an essential role in the various protocols such as quantum key distribution , and is often considered as the most remarkable feature of quantum theory . on the other hand ,in the context of general probabilistic theories , the property can characterize the nature of classical theory .indeed , it is known that a general probabilistic theory is a classical theory if and only if all the pure states can be perfectly discriminated in a single measurement . 
in this paper , we discuss an optimal state discrimination problem in general probabilistic theories by means of bayesian strategy .while the existence of bayes optimal measurements has been discussed in general setting , we provide a geometrical method to find such optimal measurement and optimal success probability .our figure of merit is the optimal success probability , in discriminating numbers of states under a given prior distribution .we introduce a useful family of ensembles , which we call a _ helstrom family of ensembles _ , in any general probabilistic theories , which generalizes a family of ensembles used in in -level quantum systems for binary state discrimination , and show that the family enable us to obtain optimal measurements by means of bayesian strategy .this method reveals that a certain geometrical relation between state space and the convex subset generated by states which we want to distinguish is crucial for the problem of state discrimination : in the case of uniform prior distribution , what one has to do is to find as large convex subset ( composed of helstrom family of ensembles ) as possible in state space which is reverse homothetic to the convex subset generated by states under consideration .the existences of the helstrom families for which again have a simple geometrical interpretation are shown in both classical and quantum systems in generic cases .some other works on the problem in quantum theory are related with our purpose ; the no - signaling condition was used in deriving the optimal success probability between two states in -level quantum systems , a bound of the optimal success probability and a maximal confidence among several non - orthogonal states in general quantum systems . in particular, we discuss the relation between our method and the one used in , and show that our method generalizes the results in to general probabilistic theories .the paper is organized as follows . in sec .[ sec : review ] , we give a brief review of general probabilistic theories . in sec .[ sec:1 ] , we introduce a _ helstrom family of ensembles _ and show the relation with an optimal measurement in state discrimination problem ( propositions [ prop : bdd ] , [ prop : sc ] , theorem [ thm : he2 ] ) .we also prove the existences of the families of ensembles for in classical and quantum systems in generic cases ( theorems [ thm : qhe ] , [ thm : che ] ) . in sec .[ sec : ex ] , we illustrate our method in -level quantum systems , and reproduce the optimal success probabilities for binary state discrimination and numbers of symmetric quantum states . as an example of neither classical nor quantum theories , we introduce a general probabilistic model with square - state space .our method is also applied to this model to exemplify its usability . in sec .[ ref:2 ] , we summarize our results .in order to overview general probabilistic theories as the operationally most general theories of probability , let us start from a very primitive consideration of physical theories where a probability plays a fundamental role .in such a theory , a particular rule ( like borel rule in quantum mechanics ) to obtain a probability for some output when measuring an observable under a state should be provided .therefore , states and observables are two fundamental ingredients with an appropriate physical law to obtain probabilities in general probabilistic theories .let us denote the set of states by . 
in a simplified view , an -valued observable numbers of states .note that it is straightforward to formalize general observables with measure theoretic language .] can be considered as an numbers of maps on a state space so that ] which represents an ensemble of preparing state with probability and state with probability .furthermore , it is natural to assume the so - called separating condition for states ; namely , two states and should be identified when there are no observables to statistically distinguish them .then , it has been shown that without loss of generality , the state space is embedded into a convex ( sub)set in a real vector space such that a probabilistic - mixture state is given by a convex combination . in a real vector space called convex if for any and ] .there are two trivial effects , unit effect and zero effect , defined by for all . with this language ,an -valued observable is a set of effects satisfying , meaning that is the probability to obtain the output when measuring the observable in the state .we denote by and the sets of all the effects and -valued observables , respectively . while the output of an observable can be not only from real numbers but also any symbols , like head " or tail " , hereafter we often identify them with .physically natural topology on is given by the ( weakest ) topology so that all the effects are continuous . without loss of generality , is assumed to be compact with respect to this topology .typical examples of the general probabilistic theories will be classical and quantum systems . for simplicity, the classical and quantum systems we consider in this paper will be finite systems : [ example 1 : classical systems ] finite classical system is described by a finite probability theory .let be a finite sample space .a state is a probability distribution , meaning that the probability to observe is .therefore , the state space is , and forms a ( standard ) simplex . with numbers of extreme points is called a simplex if any element has the unique convex combinations with respect to .equivalently , is called a simplex iff the affine dimension of is . ]the set of extreme points is where .an effect is given by a random variable ] as a subset of and ] , give a ( nontrivial ) weak helstrom family of ensembles with a helstrom ratio .notice that for general cases , the similarity between two polytopes generated by and is distorted .( see fig .[ fig : whe ] [ b ] for . ) in the following , we show that a weak helstrom family of ensembles is closely related to an optimal state discrimination strategy , and provide a geometrical method to obtain the helstrom bound and an optimal measurement in any general probabilistic theories .let us again consider a state discrimination problem from with a prior distribution .let be any -valued observable from which alice decides the state be in if she observes an output .suppose that we have a weak helstrom family with the reference state and a helstrom ratio .then , using , affinity of and eq . , it follows since , we obtain which holds for any observables .thus we have proved the following proposition .[ prop : bdd ] let be a weak helstrom family of ensembles with a helstrom ratio . 
then , we have a bound for the helstrom bound .this means that , once we find a weak helstrom family of ensembles , a bound of the helstrom bound is automatically obtained .a trivial weak helstrom family gives a trivial condition , which is the reason we called it trivial .examples of nontrivial weak helstrom families are given in fig .[ fig : whe ] , where [ a ] and [ b ] .namely , the optimal success probability in this general probabilistic model is at most and for [ a ] and [ b ] , respectively .moreover , proposition [ prop : bdd ] leads us to a useful notion of helstrom family of ensembles defined as follows : let be a weak helstrom family of ensembles for distinct states and a prior probability distributions .we call it a helstrom family of ensembles if the helstrom ratio attains the helstrom bound : . from equations ,an observable satisfies if for any .then , it follows .consequently , we have [ prop : sc ] a sufficient condition for a weak helstrom family of ensembles to be helstrom family is that there exists an observable satisfying for all . in this case, the observable gives an optimal measurement to discriminate with a prior distribution .two states are said to be distinguishable if there exists an observable which discriminates and with certainty ( for any prior distributions ) , or equivalently satisfy therefore , as a corollary of proposition [ prop : sc ] for , we obtained the following theorem for a binary state discrimination ( ) .[ thm : he2 ] let be a weak helstrom family of ensembles for states and a binary probability distribution such that and are distinguishable states .then , is a helstrom family with the helstrom ratio . an optimal measurement to distinguish and given by an observable to distinguish and . * proof * the distinguishability of and satisfies the sufficient condition in proposition [ prop : sc ] .let us consider the case where is a subset of finite dimensional real vector space . from condition ,geometrical meaning of two distinguishable states is that they are on the boundary of which possess parallel supporting hyperplanes ( see fig . [ fig : ds ] ) . here, a supporting hyperplane at a point is a hyperplane such that and is contained in one of the two closed half - spaces of the hyperplane . indeed ,if there exist two parallel supporting hyperplanes and at and respectively , one can construct an affine functional on such that on and for .then , the restriction of to is an effect which distinguishes and with certainty since is contained between and and .then , to find a helstrom family of ensembles given in theorem [ thm : he2 ] is nothing but a simple geometrical task . here , we explain this in the uniform distribution cases : from the definition of a ( weak ) helstrom family of ensembles and theorem [ thm : he2 ] , two ensembles for a distinct stats with the uniform distribution are ensembles of a helstrom family if are distinguishable and with some . from, and should be parallel , and therefore one easy way to find helstrom family is as follows : search conjugate states and on the boundary of which are on a line parallel to such that there exist parallel supporting hyperplanes at and .then , the crossing point is a reference state while the ratio between ( ) and ( ) determines the helstrom ratio . in fig .[ fig : model ] , helstrom families for some models on are illustrated .now it is important to ask whether a helstrom family of ensembles always exists for any general probabilistic theories or not . 
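for the familiar quantum case, the optimal success probability that the binary theorem above targets has the well-known closed form p = (1 + || q_1 rho_1 - q_2 rho_2 ||_1) / 2, and a short numerical check is easy to write; the two example states below are arbitrary illustrative choices.

```python
import numpy as np

def helstrom_success(rho1, rho2, q1=0.5):
    """optimal (minimum-error) success probability for discriminating two
    density matrices rho1, rho2 with priors q1 and 1 - q1:
    p_opt = (1 + || q1*rho1 - (1-q1)*rho2 ||_1) / 2."""
    q2 = 1.0 - q1
    gamma = q1 * rho1 - q2 * rho2
    eig = np.linalg.eigvalsh(gamma)        # hermitian, so the spectrum is real
    return 0.5 * (1.0 + np.sum(np.abs(eig)))

# two non-orthogonal qubit pure states |0> and |+>
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(ket0, ket0.conj())
rho2 = np.outer(ketp, ketp.conj())
print(helstrom_success(rho1, rho2))        # ~0.854
```

for two equiprobable pure states this reproduces 1/2 + (1/2)sqrt(1 - |<psi_1|psi_2>|^2), here about 0.854, strictly below one precisely because the states are non-orthogonal.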
in this paper , we show a helstrom family of ensembles for a binary state discrimination ( ) always exist in generic cases for both classical and quantum systems .( for the existence in more general general probabilistic theories , see our forthcoming paper . ) here , we mean by generic cases all the cases except for trivial cases where ] , )_{i=1}^d ] . clearly , it is itself , with the similarity point at the center of .more precisely , one can choose conjugate states where denotes the exclusive or , and .therefore , we obtained a weak helstrom family with the helstrom ratio .it turns out that this weak helstrom family is a helstrom family , and thus we obtain to discriminate all pure states in this system .indeed , it is easy to see that affine functionals on defined by ( and hence satisfying ) for any forms a -valued observable on .this satisfies the sufficient condition in proposition [ prop : sc ] .in this paper , we introduced a notion of a ( weak ) helstrom family of ensembles in general probabilistic theories and showed the close relation with state discrimination problems .basically , helstrom family can be searched by means of geometry , and once we have the family , or at least a nontrivial weak family , the optimal success probability , or a bound of it , is automatically obtained from the helstrom ratio . in binary state discriminations , a weak helstrom family of ensembles with distinguishable conjugate statesis shown to be a helstrom family which has again a simple geometrical interpretation .we illustrated our method in -level quantum systems and reproduced the helstrom bound for binary state discrimination and symmetric quantum states .as an nontrivial general probabilistic theories , a probabilistic model with square - state space is investigated and binary state discrimination and pure states discrimination are established using our method . in this paper , we showed the existences of helstrom families of ensembles analytically in both classical and quantum theory in any generic cases in binary state discriminations .for the more general models , it will be investigated in our forthcoming paper .there , we also clarify the relation between our method and linear programming problem .99 l. hardy , arxiv : quant - ph/0101012 . c. a. fuchs , arxiv : quant - ph/0205039 . j. barrett , phys . rev .a. * 75 * ( 2007 ) 032304 .r. clifton , j. bub , and h. halvorson , found .phys . * 33 * , 1561 ( 2003 ) .g. m. dariano , arxiv : quant - ph/0603011 . j. barrett , l. hardy , and a. kent , phys .lett . * 95 * , 010503 ( 2005 ) .s. p. gudder , _ stochastic method in quantum mechanics _( dover , mineola , 1979 ) ; s. p. gudder , _ quantum probability _( academic , boston , 1988 ) .h. barnum , j. barrett , m. leifer , and a. wilce , phys .lett . * 99 * 240501 ( 2007 ) ; arxiv : quant - ph/0611295 . h. barnum , j. barrett , m. leifer , and a. wilce , arxiv:0805.3553 . c. m. edwards , comm .* 16 * , 207 ( 1970 ) ; e. b. davies and j. t. lewis , comm .* 17 * 239 ( 1970 ) ; g. ludwig , _ foundations of quantum mechanics _vol i and ii , ( springer - verlag , new york , 1983 and 1985 ) . e.b .davies , _ quantum theory of open systems _ ( academic , london , 1976 ) . a. s .holevo , _ probabilistic and statistical aspects of quantum theory _( north - holland , amsterdam , 1982 ) . c. w. helstrom , _ quantum detection and estimation theory _ , ( academic , newyork , 1976 ) .i. d. ivanovic , phys .a * 123 * , 257 ( 1987);d .dieks , phys .a * 126 * , 303 ( 1988 ) ; a. 
peres , phys .a * 128 * , 19 ( 1988 ) ; h. p. yuen , r. s. kennedy , and m. lax , ieee trans .theory * it-21 * , 125 ( 1975 ) ; see also a. s .holevo , _ statistical structure of quantum theory _ ( springer , berlin , 2001 ) .w. y. hwang , phys .a * 71 * , 062315 ( 2005 ) ; j. bae , j. w. lee , j. kim , and w. y. hwang , arxiv : quant - ph/0406032 . s. m. barnett and e. andersson , phys .a * 65 * , 044307 ( 2002 ) ; d. qiu , phys . lett .a * 303 * , 140 ( 2002 ) ; y. feng , s. zhang , r. duan , and m. ying , phys . rev .a * 66 * , 062313 ( 2002 ) .s. croke , e. andersson , and s. m. barnett , phys .a * 77 * , 012113 ( 2008 ) .m. ozawa , ann .phys . * 311 * , 350 ( 2004 ) .k. nuida , g. kimura , t. miyadera , and h. imai ( in preparation ) .s. r. lay , _ convex sets and their applications _( krieger , malabar , 1992 ) .m. ban , k. kurokawa , r. momose , and o. hirota , int .phys . * 36 * , 1269 ( 1997 ) .g. kimura , k. imafuku , t. miyadera , k. nuida , and h. imai ( in preparation ) .
|
we investigate a state discrimination problem in operationally the most general framework to use a probability , including classical and quantum theories , and more . in this wide framework , by introducing a family of ensembles closely related to the problem ( which we call a _ helstrom family of ensembles _ ) , we provide a geometrical method to find an optimal measurement for state discrimination by means of bayesian strategy . we illustrate our method in -level quantum systems and in a probabilistic model with square - state space to reproduce , e.g. , the optimal success probabilities for binary state discrimination and for numbers of symmetric quantum states . the existence of such families of ensembles in binary cases is shown in both classical and quantum theories in generic cases .
|
muscle fatigue is defined as `` any reduction in the ability to exert force in response to voluntary effort '' , and it is believed that the muscle fatigue is one of potential reasons leading to musculoskeletal disorders ( msds ) in the literature .great effort has been contributed to integrate fatigue into different biomechanical models , especially in virtual human simulation for ergonomic application , in order to analyze the fatigue in muscles and joints and further to decrease the msd risks . in general ,mainly two approaches have been adopted to represent muscle fatigue , either in theoretical methods or in empirical methods .one or more decay terms were introduced into existing muscle force models in theoretical fatigue models , and those decay terms were mainly based on physiological performance of muscles in fatigue contraction .for example , a fatigue model based on the intracellular ph was incorporated into hill s muscle mechanical model .this fatigue model was also applied by to demonstrate the fatigue of different individual muscles .another muscle fatigue model based on physiological mechanism has been included into the virtual solider research program , and in this model , dozens of parameters have to be fit for model identification only for a single muscle . as stated in , `` these theoretical models are relatively complex but useful at the single muscle level. however , they do not readily handle task - related biomechanical factors such as joint angle and velocity . ''meanwhile , several muscles around a joint are engaged in order to realize an action or a movement around the joint , and mathematically this results in an underdetermined equation while determining the force of each engaged muscle due to muscle redundancy and complex muscle force moment arm - joint angle relationships .although different optimization methods have been used to face this load sharing problem , it is still very difficult to validate the optimization result and further the fatigue effect , due to the complexity of anatomical structure and the physiological coordination mechanism of the muscles .muscle fatigue is often modeled and extended based on maximum endurance time ( met ) models at joint level in empirical methods .these models are often used in ergonomic applications to handle task - related external parameters , such as intensity of the external load , frequency of the external load , duration , and posture . in these models ,the met of a muscle group around a joint was often measured under static contraction conditions until exhaustion .using this method can avoid complex modeling of individual muscles , and net joint strengths already exist in the literature for determining the relative load .the most famous one of these met models is rohmert s curve which was usually used as guideline for designing the static contraction task . besides rohmert s met model ,there are several other empirical met models in the literature .these met models are very useful to evaluate physical fatigue in static operations and to determine work - rest allowances , and they were often employed in biomechanical models in order to minimize fatigue as well .for example , proposed a dynamic model for forearm in which the fatigue component was modeled for each single muscle by fitting rohmert s curve in . 
proposed a half - joint fatigue model , more exactly a fatigue index , based on mechanical properties of muscle groups .the holding time over maximum endurance time is used as an indicator to evaluate joint fatigue . in ,different fiber type composition was taken into account with endurance model to locate the muscle fatigue into single muscle level .however , in met models , the main limitations are : 1 ) the physical relationship in these models can not be interpreted directly by muscle physiology , and there is no universality among these models .2 ) all the met models were achieved by fitting experimental results using different formulation of equation .it has been found that muscle fatigability can vary across muscles and joints .however , there is no general formulation for those models .3 ) differences have been found among those met models for different muscle groups , for different postures , and even for different models for the same muscle group . due to the limitation from the empirical principle ,the differences can not be interpreted by those met models .thus , it is necessary to develop a general met model which is able to replace all the experimental met models and explain all the differences cross these models . proposed a new muscle fatigue model based on motor units ( mu ) recruitment to combine the theoretical models and the task - related muscle fatigue factors . in this model , properties of different muscle fiber typeshave been assumed to predict the muscle fatigue at joint level .however , in their research , the validation of their fatigue model was not provided .furthermore , the different fatigability of different muscle groups has not been analyzed in details in this model .fatigability ( the reciprocal of endurance capacity or the reciprocal of fatigue resistance ) can be defined by the endurance time or measured by the number of times of an operation until exhaustion .this measure is an important parameter to measure physical fatigue process during manual handling operations . in , we constructed a new muscle fatigue model in which the external task related parameters are taken into consideration to describe physical fatigue process , and this model has also been interpreted by the physiological mechanism of muscle .the model has been compared to 24 existing met models , and great linear relationships have been found between our model and the other met models .meanwhile , this model has also been validated in comparison to three theoretical models .this model is a simpler , theoretical approach to describe the fatigue process , especially in static contraction cases . 
in this paper , further analysis based on the fatigue model is carried out using mathematical regression method to determine the fatigability of different muscle groups .we are going to propose a mathematical parameter , defined as fatigability , describing the resistance to the decrease of the muscle capacity .the fatigue resistance for different muscle groups is going to be regressed from experimental met models .the theoretical approach for calculating the fatigue resistance will be explained in section [ sec : method ] .the muscle fatigue model in is going to be presented briefly in section [ sec : model ] .a general met model is extended from this fatigue model in section [ sec : met ] .the mathematical procedure for calculating the fatigability contributes to the main content of section [ sec : regression ] .the results and discussion are given in section [ sec : result ] and [ sec : discussion ] , respectively .a dynamic fatigue model based on muscle active motor principle was proposed in .this model was able to integrate task parameters ( load ) and temporal parameters into manual handling operation in industry .the differential equation for describing the reduction of the capacity is eq .( [ eq : fcemdiff ] ) .the descriptions of the parameters for eq .( [ eq : fcemdiff ] ) are listed in table [ tab : parameters ] . [ cols="<,^,<",options="header " , ] there are several met models available in the literature , and they cover different body parts .these models are all experimental models regressed from experimental data , and each model is only suitable for predicting met of a specific group of people , although the similar tendencies can be found among these models .furthermore , those met model can not reveal individual differences in fatigue characteristic .however , it is admitted that different people might have different fatigue resistances for the same physical operation . in comparison to conventional met models ,the general analytical met model was extended from a simple dynamic fatigue model in a theoretical approach .the dynamic muscle fatigue model is based on muscle physiological mechanism .it takes account of task parameters ( or relative load ) and personal factors ( mvc and fatigue ratio ) , and it has been validated in comparison to other theoretical models in .different from the other met models , in this extended met model , there is a parameter representing individual fatigue characteristic .after mathematical regression , great similarities ( ) have been found between the extended met model and the previous met models .this indicates that the new theoretical met model might replace the other met models by adjusting the parameter .therefore , the extended met model generalizes the formation of met models .in addition , different fatigue resistances have been found while fitting to different met models , even for the same muscle group .therefore , it is interesting to find the influencing factors on the parameter and to analyze its statistical distribution for ergonomic application . 
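The differential equation eq. ([eq:fcemdiff]) and its parameter table are referenced above but not reproduced in this text, so the following sketch assumes a commonly used exponential capacity-decay form, dF/dt = -k (F/MVC) F_load, where F is the remaining force capacity, MVC the maximum voluntary contraction, F_load the external load, and k the fatigue rate (whose reciprocal plays the role of the fatigue resistance). Under a constant static load this integrates to an exponential decay, and exhaustion (F equal to the load) gives an analytical MET. All symbol and function names are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def remaining_capacity(t, mvc, f_load, k):
    """Remaining force capacity under a constant static load.

    Assumed decay law (not quoted from the paper):
        dF/dt = -k * (F / MVC) * f_load   =>   F(t) = MVC * exp(-k * f_load/MVC * t)
    """
    return mvc * np.exp(-k * (f_load / mvc) * t)

def met_static(f_rel, k):
    """Maximum endurance time for a relative load f_rel = f_load / MVC.

    Exhaustion is reached when the remaining capacity drops to the load itself:
        MVC * exp(-k * f_rel * MET) = f_rel * MVC   =>   MET = -ln(f_rel) / (k * f_rel)
    """
    return -np.log(f_rel) / (k * f_rel)

if __name__ == "__main__":
    k = 1.0  # illustrative fatigue rate (1/min); 1/k would be the fatigue resistance
    for f_rel in (0.2, 0.4, 0.6, 0.8):
        print(f"relative load {f_rel:.0%}: MET ~ {met_static(f_rel, k):6.2f} min")
```

The one free parameter k is exactly the quantity whose regression against published MET models is discussed in the following paragraphs.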
in this paper, we tried to use the mean value and standard deviation of the regressed fatigue resistances .it has been found that the extended met model with adjustable parameter could cover most of the met prediction using experimental met models .if further experiments can be carried out , it should be promising that the statistical distribution of the fatigue resistance for a given population could be obtained .this kind of information might be useful to integrate early ergonomic analysis into virtual human simulation tools to evaluate fatigue at early work design stage .although the met models fitted from experiment data were formulated in different forms , the can still provide some useful information for the fatigue resistance , especially for different muscle groups .the differences in fatigue resistance result is possible to be concluded by the mean value and the deviation , but it is still interesting to know why and how the fatigue resistance is different in different muscle groups , in the same muscle group , and even in the same person at different period .there is no doubt that there are several factors influencing on the fatigue resistance of a muscle group , and it should be very useful if the fatigue resistance of different muscle groups can be mathematically modeled . in this section , the fatigue resistance and its variability are going to be discussed in details based on the fatigue resistance results from table [ tab : fatigueresistancetable ] and the previous literature about fatigability .different influencing factors are going to be discussed and classified in this section .all the differences inter muscle groups and intra muscle groups in met models can be classified into four types : 1 ) systematic bias , 2 ) fatigue resistance inter individual for constructing a met model , 3 ) fatigue resistance intra muscle group : fatigue resistance differences for the same muscle group , and 4 ) fatigue resistance inter muscle groups : fatigue resistance differences for different muscle groups .those differences can be attributed to different physiological mechanisms involved in different tasks , and influencing variables are subject motivation , central command , intensity and duration of the activity , speed and type of contraction , and intermittent or sustained activities . in those met models ,all the contractions were exerted under static conditions until exhaustion of muscle groups , therefore , several task related influencing factors can be neglected in the discussion , e.g. , speed and duration of contraction .the other influencing factors might contribute to the fatigue resistance difference in met models . * systematic bias :* all the met models were regressed or reanalyzed based on experiment results .due to the experimental background , there were several sources for systematic error .one possible source of the systematic bias comes from experimental methods and model construction , especially for the methods with subjective scales to measure met .the subjective feelings significantly influenced the result .furthermore , the construction of the met model might cause system differences for met model , even in the models which were constructed from the same experiment data ( e.g. huijgens model and sjogaard s model in general models ) .the estimation error was different while using different mathematic models , and it generates systematic bias in the result analysis . 
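Under the assumed decay law of the previous sketch, the analytical MET is (1/k)(-ln f)/f, so fitting it to an empirical MET curve reduces to a one-parameter linear regression through the origin whose slope is the fatigue resistance 1/k. The sketch below illustrates that procedure; the empirical curve used here is a made-up stand-in (none of the 24 published MET models is reproduced in this text), and the goodness-of-fit reported is a plain coefficient of determination, not the intraclass correlation used in the paper.

```python
import numpy as np

def fit_fatigue_resistance(f_rel, met_empirical):
    """Least-squares fit of the fatigue resistance 1/k against an empirical MET curve.

    With MET_analytical = (1/k) * g(f), g(f) = -ln(f)/f, the best 1/k through the
    origin is sum(g * MET_emp) / sum(g * g).
    """
    g = -np.log(f_rel) / f_rel
    inv_k = np.sum(g * met_empirical) / np.sum(g * g)
    met_fit = inv_k * g
    # simple r^2 as a goodness-of-fit indicator (the paper uses intraclass correlation)
    ss_res = np.sum((met_empirical - met_fit) ** 2)
    ss_tot = np.sum((met_empirical - np.mean(met_empirical)) ** 2)
    return inv_k, 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    f = np.linspace(0.15, 0.99, 50)           # relative load, 15% to 99% MVC
    # hypothetical stand-in for one empirical MET model (minutes); NOT a published model
    met_emp = 10.0 * f ** (-1.5) - 10.0
    inv_k, r2 = fit_fatigue_resistance(f, met_emp)
    print(f"fitted fatigue resistance 1/k = {inv_k:.2f}, r^2 = {r2:.3f}")
```

Repeating such a fit for each published MET model yields one fatigue-resistance value per model, from which the mean and standard deviation discussed above can be computed.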
* fatigue resistance inter individual :* besides the systematic error , another possible source for the endurance difference is from individual characteristic . however , the individual characteristic is too complex to be analyzed , and furthermore , the individual characteristic is impossible to be separated from existing met models , since the met models already represent the overall performance of the sample participants .in addition , in ergonomic application , the overall performance of a population is often concerned .therefore , individual fatigue resistance is not discussed in this part separately , but the differences in population in fatigue resistance are going to be discussed and presented in the following part . *fatigue resistance intra muscle group :* the inter individual variability contributes to the errors in constructing met models and the errors between met models for the same muscle group .the influencing factors on the fatigue resistance can be mainly classified into sample population characteristic ( gender , age , and job ) , personal muscle fiber composition , and posture . as mentioned in section [ sec : intro ] , the influences on fatigability from gender and age were observed in the literature . in the research for gender influence, women were found with more fatigue resistance than men . based on muscle physiological principle , four families of factorswere adopted to explain the fatigability difference in gender in .they are : 1 ) muscle strength ( muscle mass ) and associated vascular occlusion , 2 ) substrate utilization , 3 ) muscle composition and 4 ) neuromuscular activation patterns .it concluded that although the muscle composition differences between men and women is relatively small , the muscle fiber type area is probably one reason for fatigability difference in gender , since the muscle fiber type i occupied significantly larger area in women than in men . in spite of muscle fiber composition , the motor unit recruitment pattern acts influences on the fatigability as well .the gender difference in neuromuscular activation pattern was found and discussed in , and it was observed significantly that females showed more alternating activity between homolateral and contralateral muscles than males .meanwhile , in older men were found with more endurance time then young men in certain fatigue test tasks charging with the same relative load .one of the most common explanations is changes in muscle fiber composition for fatigability change while aging .the shift towards a higher proportion of muscle fiber type i leads old adults having a higher fatigue resistance but smaller mvc .gender and age were also already taken into a regression model to predict shoulder flexion endurance . in ,the effects of age , gender , and task parameters on muscle fatigue during prolonged isokinetic torso exercises were studied .it constated that older men had less initial strength .it was also found that effects of age and gender on fatigue were marginal , while significant interactive effects of age and gender with effort level were found at the same time . 
besides those two reasons, the muscle fiber composition of muscle varies individually in the population , even in a same age range and in the same gender , and this could cause different performances in endurance tasks .different physical work history might change the endurance performance .for example , it appeared that athletes with different fiber composition had different advantages in different sports : more type i muscle fiber , better in prolonged endurance events .meanwhile , the physical training could also cause shift between different muscle fibers . as a result ,individual fatigue is very difficult to be determined using met measurement , and the individual variability might contribute to the differences among met models for the same muscle group due to selection of subjects . back to the existing met models ,the sample population was composed of either a single gender or mixed .at the same time , the number of the subjects was sometimes relative small .for example , only 5 female students ( age range 2133 ) were measured , while 40 ( 20 males , age range 2248 and 20 females , age range 2055 ) were tested in shoulder met model . meanwhile , the characteristics of population ( e.g. , students , experiences workers ) could cause some differences in met studies . due to different population selection method , different gender composition , and different sample number of participant , fatigue resistance for the same muscle group exists in different experiment results andfinally caused different met models under the similar postures .in hip / back models , even with the same sample participants , difference existed also in met models for different postures .the variation is possible caused by the different mu recruitment strategies and load sharing mechanism under different postures . observed that the activation of biceps brachi was significantly affected by joint angle , and furthermore confirmed that joint angle and contraction type contributed to the distinction between the activation of synergistic elbow flexor muscles .the lever of each individual muscle changes along different postures which results different intensity of load for each muscle and then causes different fatigue process for different posture .meanwhile , the contraction type of each individual muscle might be changed under different posture .both contraction type change and lever differences contribute to generate different fatigue resistance globally .in addition , the activation difference was also found in antagonist and agonist muscles as well , and it is implied that in different posture , the engagement of muscles in the action causes different muscle activation strategy , and as a result the same muscle group could have different performances . with these reasons , it is much difficult to indicate the contribution of posture in fatigue resistance because it refers to the sensory - motor mechanism of human , and how the human coordinates the muscles remains not clear enough until yet .* fatigue resistance inter muscle groups : * as stated before , the three different muscle fiber types have different fatigue resistances , and different muscle is composed of types of muscles with composition determining the function of each muscle . the different fatigue resistance can be explained by the muscle fiber composition in different human muscle groups . 
in the literature ,muscle fiber composition was used measured by two terms : muscle fiber type percentages and percentage fiber type area ( csa : cross section area ) .both terms contribute to the fatigue resistance of the muscle groups .type i fibers occupied 74% of muscle fibers in the thoracic muscles , and they amounted 63% in the deep muscles in lumbar region . on average type i muscle fibers ranged from 23 to 56% for the muscles crossing the human shoulder and 12 of the 14 muscleshad average so proportions ranging from 35 to 50% . in paper , the muscle fiber composition shows the similar composition for the muscle around elbow and vastus lateralis muscle and the type i muscle fibers have a proportion from 35 - 50% in average .although we can not determine the relationship between the muscle type composition and the fatigue resistance directly and theoretically , the composition distribution among different muscle groups can interpret the met differences between general , elbow models and back truck models .in addition , the fatigue resistance of older adults is greater than young ones could also be explained by a shift towards a higher proportion of type i fiber composition with aging .these evidences meet the physiological principle of the dynamic muscle fatigue model .another possible reason is the loading sharing mechanism of muscles .hip and back muscle group has the maximum joint moment strength among the important muscle groups .for example , the back extensors are composed of numerous muscle slips having different moment arms and show a particularly high resistance to fatigue relative to other muscle groups .this is partly attributed to favorable muscle composition , and the variable loading sharing within back muscle synergists might also contribute significantly to delay muscle fatigue .in summary , individual characteristics , population characteristics , and posture are external appearance of influencing factors for the fatigue resistance .muscle fiber composition , muscle fiber area , and sensory motor coordination mechanism are the determinant factors inside the human body deciding the fatigue resistance of muscle group .therefore , how to construct a bridge to connect the external factors and internal factors is the most important way for modeling the fatigue resistance for different muscle groups .how to combine those factors to model the fatigue resistance remains a challenging work . despite the difficulty of modeling the fatigue resistance ,it is still applicable to find the fatigue resistance for a specified population by met experiments in regression with the extended met model due to its simplicity and universal availability . 
in the previous discussion, the fatigue resistance of the existing met models were quantified using from regression .the possible reasons for the different fatigue resistance were analyzed and discussed .however , how to quantify the influence from different factors on the fatigue resistance remains unknown due to the complexity of muscle physiology and the correlation among different factors .the availability of the extended met model in the interval under 15% mvc is not validated .the fatigue resistance is only accounted from the 15% to 99% mvc due to the unavailability of some met models under 15% mvc .for the relative low load , the individual variability under 15% could be much larger than that over 15% .the recovery effect might play a much more significant role within such a range .in this paper , fatigue resistance of different muscle groups were calculated by linear regression from the new fatigue model and the existing met static models .high has been obtained by regression which proves that our fatigue model can be generalized to predict met for different muscle groups .mean and standard deviation in fatigue resistance for different muscle groups were calculated , and it is possible to use both of them together to predict the met for the overall population .the possible reasons responsible for the variability of fatigue resistance were discussed based on the muscle physiology .our fatigue model is relative simple and computation efficient . with the extendedmet model it is possible to carry out the fatigue evaluation in virtual human modeling and ergonomic application , especially for static and quasi - static cases .the fatigue effect of different muscle groups can be evaluated by fitting from several simple static experiments for certain population .this research was supported by the eads and the rgion des pays de la loire ( france ) in the context of collaboration between the cole centrale de nantes ( nantes , france ) and tsinghua university ( beijing , pr china ) .anderson , d. , madigan , m. , nussbaum , m. , 2007 .maximum voluntary joint torque as a function of joint angle and angular velocity : model development and application to the lower limb .journal of biomechanics 40 ( 14 ) , 31053113 .clark , b. , manini , t. , 2003 .the dj , doldo na , ploutz - snyder ll .gender differences in skeletal muscle fatigability are related to contraction type and emg spectral compression .journal of applied physiology 94 ( 6 ) , 22632272 .dahmane , r. , djordjevi , s. , imuni , b. , valeni , v. , 2005 . spatial fiber type distribution in normal human muscle histochemical and tensiomyographicaljournal of biomechanics 38 ( 12 ) , 24512459 .garg , a. , hegmann , k. , schwoerer , b. , kapellusch , j. , 2002 .the effect of maximum voluntary contraction on endurance times for the shoulder girdle .international journal of industrial ergonomics 30 ( 2 ) , 103113 .kim , s. , seol , h. , ikuma , l. , nussbaum , m. , 2008 .knowledge and opinions of designers of industrialized wall panels regarding incorporating ergonomics in design .international journal of industrial ergonomics 38 ( 2 ) , 150157 . larivire , c. , gravel , d. , gagnon , d. , gardiner , p. , bertrand arsenault , a. , gaudreault , n. , 2006 .gender influence on fatigability of back muscles during intermittent isometric contractions : a study of neuromuscular activation patterns .clinical biomechanics 21 ( 9 ) , 893904 .lynch , n. , metter , e. , lindle , r. , fozard , j. , tobin , j. , roy , t. , fleg , j. , hurley , b. , 1999 . 
muscle quality . i. age - associated differences between arm and leg muscle groups .journal of applied physiology 86 ( 1 ) , 188194 .niemi , j. , nieminen , h. , takala , e. , viikari - juntura , e. , 1996 . a static shoulder model based on a time - dependent criterion for load sharing between synergistic muscles. journal of biomechanics 29 ( 4 ) , 451460 .shepstone , t. , tang , j. , dallaire , s. , schuenke , m. , staron , r. , phillips , s. , 2005 .short - term high- vs. low - velocity isokinetic lengthening training results in greater hypertrophy of the elbow flexors in young men .journal of applied physiology 98 ( 5 ) , 17681776 .staron , r. , hagerman , f. , hikida , r. , murray , t. , hostler , d. , crill , m. , ragg , k. , toma , k. , 2000 .fiber type composition of the vastus lateralis muscle of young men and women .journal of histochemistry and cytochemistry 48 ( 5 ) , 623 .
|
In ergonomics and biomechanics, muscle fatigue models based on maximum endurance time (MET) models are often used to integrate fatigue effects into ergonomic and biomechanical applications. However, owing to the empirical nature of those MET models, this approach has two drawbacks: 1) the MET models do not reflect the underlying muscle physiology very well; 2) there is no general formulation of those MET models for predicting MET. In this paper, a theoretical MET model is extended from a simple muscle fatigue model that takes the external load and the maximum voluntary contraction into account in static exertion cases. The generality of the extended MET model is analyzed in comparison to 24 existing empirical MET models. Using a mathematical regression method, 21 of the 24 MET models show intraclass correlations over 0.9, which means the extended MET model could replace the existing MET models in a general and computationally efficient way. In addition, an important parameter, the fatigability (or fatigue resistance) of different muscle groups, can be calculated via the same regression approach. Its mean value and standard deviation are useful for predicting MET values of a given population during static operations. The possible factors influencing the fatigue resistance are classified and discussed, although it remains challenging to establish a quantitative relationship between the fatigue resistance and these influencing factors. *Relevance to industry:* MSD risks can be reduced by correct evaluation of static muscular work. Different muscle groups have different properties, and a generalized MET model is useful to simplify fatigue analysis and fatigue modeling, especially for digital human techniques and virtual human simulation tools. Keywords: muscle fatigue, biomechanical muscle modeling, fatigue resistance, maximum endurance time, muscle groups.
|
in the spring of 1996 i was visiting the city college of new york for a month , in order to pursue a research project with stuart samuel , who was a professor at city university of new york at the time , and to run in the 100th boston marathon . several evenings and part of weekendsi d spend with our mutual friend pascal gharemani , a tennis coach and instructor at trinity school ( a private high school on west 91st street in manhattan ) .typically we would go dining , visit places or fly kites .pascal had an iranian background but grew up in versailles near paris before moving to the us .my wife and i had come to know him during my postdoc years at city college ( 198790 ) , when we would meet weekly at various restaurants in the columbia university neighborhood for an evening of french conversation .he was important for our socialization in manhattan and had grown into a good friend .pascal was a very curious individual , with a great sense of humor and always ready to engage in discussions about savoir vivre , philosophy , and the natural sciences . regarding the latter , he regularly pondered phenomena and questions which involved physics . lacking a formal science training, he would go to great lengths and try his physicist friends for explanations .so one evening in 1996 he shared his musings about the gravitational force of a long and homogeneous rod , as it is felt by a ( say , minuscule ) creature crawling on its surface .clearly , the mass points in its neighborhood are mainly responsible for creating the force . on one hand , at the end of the rod , the nearby mass is fewer than elsewhere , but it is all pulling roughly in the same direction . on the other hand , around the middle part of the rod ,twice as much mass points are located near the creature , yet their gravitational forces point to almost opposing directions and hence tend to cancel each other out .so which location gives more weight to the mini - bug ?where along the rod is its surface gravity largest ?this was a typical ` pascal question ' , and my immediate response was : `` that s an easy one .let me just compute it . ''well , easier said then done . for the mid - rod positionthe resulting integrals were too tough to perform on the back of an envelope . to simplify my life , i persuaded pascal to modify the problem .let us vary not the position of the bug but the geometry of its planet : keep the bug sitting on the top of a cylinder , and compare a long rod with a slim disk of the same volume and mass .then it was not too hard to calculate the surface gravity as a function of the ratio of the cylinder s diameter to its length .to our surprise , in a narrow window of this parameter the weight of the bug exceeds the value for a spherical ball made from the same material .this finding inspired us to generalize the question to another level : given a bunch of homogeneous material ( fixed volume and density , hence total mass ) , for which shape is the gravitational force somewhere on its surface maximized ?thus , the idea of `` asteroid engineering '' was born . after solving the problem and comparing the result with a few other geometries , i put the calculations aside and forgot about them .four years later , when teaching mathematical methods for physics freshmen , i was looking for a good student exercise in variational calculus . 
coming across my notes from 1996 ,i realized they can be turned into an unorthodox , charming and slightly challenging homework problem .and so i did , posing the challenge in the summer of 2000 and again in 2009 , admittedly with mixed success .but let the reader decide !it is textbook material how to compute the newtonian gravitational field generated by a given three - dimensional static mass distribution . in the absence of symmetry arguments ,it involves a three - dimensional integral collecting the contributions produced by the masses at positions , with denoting the gravitational constant . for the case of a solid homogeneous body of volume and total mass ,clearly is constant , and one gets where is the unit vector pointing from the observer ( at ) to the mass point at . the surface gravity ( specific weight of a probe ) located somewhere on the surface of my solid is obtained by simply restricting to .one might think of simplifying the task by computing the gravitational potential rather than the field , since the corresponding integral is scalar and appears to be easier .however , evaluating the surface gravity then requires taking a gradient in the end and thus keeping at least an infinitesimal dependence on a coordinate normal to the surface . retaining this additional parameter until finally computing the derivative of the potential with respect to it before setting it to zero yields no calculational gain over a direct computation of .the original question of pascal concerned a cylindrical rod , whose length and radius i denote by and , respectively , so that .the integral above has dimension of length , and i shall scale out a factor of to pass to dimensionless quantities . for the remaining dimensionless parameteri choose the ratio of diameter to length of the cylinder , , see fig . 1 .i shall frequently have to express some of the four quantities , , and in terms of a pair of the others , so let me display the complete table of the relations , \ell & { \ = \ }2a / t { \ = \ } v/(\pi a^2 ) \ \,{\ = \ } \root3\of{4 v/(\pi t^2 ) } \\[4pt ] t & { \ = \ } 2a/\ell { \ = \ } 2\pi a^3/v \\;{\ = \ } \sqrt{4\,v/(\pi\ell^3 ) } \\[6pt ] v & { \ = \ } \pi a^2\ell { \ = \ } 2\pi a^3/t \ \ \ { \ = \ } \pi\ell^3 t^2/4\ . \end{aligned}\ ] ] pascal s problem was to compare for this cylinder the surface gravity at the symmetry axis point to the one at a point on the mid - circumference or equator .let me treat both cases in turn .naturally i employ cylindrical coordinates for and put the symmetry axis point in the origin . 
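For the numerical checks further below it is convenient to package the parameter relations from the table above in a small helper. The dimensionless choice V = 1 (and unit prefactors elsewhere) is purely for illustration and is not part of the original calculation.

```python
import numpy as np

def cylinder_from_t(t, V=1.0):
    """Given the aspect ratio t = 2a/ell and the volume V = pi*a^2*ell,
    return (length ell, radius a), using ell = (4V/(pi t^2))^(1/3) and a = t*ell/2."""
    ell = (4.0 * V / (np.pi * t**2)) ** (1.0 / 3.0)
    a = 0.5 * t * ell
    return ell, a

if __name__ == "__main__":
    for t in (0.1, 1.0, 10.0):
        ell, a = cylinder_from_t(t)
        # consistency checks against the other relations in the table
        assert np.isclose(np.pi * a**2 * ell, 1.0)        # V = pi * a^2 * ell
        assert np.isclose(2.0 * np.pi * a**3 / t, 1.0)    # V = 2*pi*a^3 / t
        print(f"t={t}: ell={ell:.4f}, a={a:.4f}")
```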
with the expression ( [ surfacegravity ] ) then becomes & { \ = \ } -2\pi\,\gamma\,{{\textstyle\frac{m}{v } } } \int_0^\ell\!{{\mathrm{d}}}{z}\int_0^a\!{{\mathrm{d}}}{\rho}\ \frac{\rho\,z}{(z^2+\rho^2)^{3/2}}\ \vec{e}_z \ = : \ -g_a\,\vec{e}_z\ .\end{aligned}\ ] ] the and integrals are elementary , ^a \\[4pt ] & { \ = \ } 2\pi\gamma\,{{\textstyle\frac{m}{v } } } \int_0^\ell\!{{\mathrm{d}}}{z}\ \bigl\ { 1 - \frac{z}{\sqrt{z^2+a^2 } } \bigr\ } { \ = \ } 2\pi\gamma\,{{\textstyle\frac{m}{v } } } \,\bigl [ z - \sqrt{z^2+a^2 } \bigr]_0^\ell \\[4pt ] & { \ = \ } 2\pi\gamma\,{{\textstyle\frac{m}{v } } } \,\bigl\ { \ell + a - \sqrt{\ell^2+a^2 } \bigr\ } { \ = \ } 2\pi\gamma\,{{\textstyle\frac{m}{v}}}\,\ell\,\bigl\ { 1 + { { \textstyle\frac{t}{2 } } } - \sqrt{1+{{\textstyle\frac{t^2}{4 } } } } \bigr\}\ .\end{aligned}\ ] ] it is a bit curious that the result is symmetric under the exchange of and , and so in the thin rod ( ) and thin disk ( ) limits one finds that respectively , with fixed of course .apart from the linear dependence on the gravitational constant and the mass density , the surface gravity must carry a dimensional length factor , which choose to be the cylinder length .however , , and are obviously related , and for comparing different shapes of the same mass and volume it is preferable to eliminate in favor of and .the resulting expression for the surface gravity has the universal form where the shape function depends on dimensionless parameters like only . for the case at hand ,i obtain the asymptotic behavior for a thin rod ( ) and for a thin disk ( ) takes the form t^{-2/3 } - t^{-5/3 } + t^{-11/3 } + o(t^{-17/3 } ) & \textrm{for } \quad t\to\infty \end{cases}\ .\ ] ] this is the harder case , as it lacks the cylindrical symmetry .naturally putting the origin of the cylindrical coordinate system at the cylinder s center of mass , hence , the surface gravity integral ( [ surfacegravity ] ) reads ^ 2+[\rho\sin\phi]^2+z^2\bigr)^{-3/2}\ \bigl ( \begin{smallmatrix } \rho\cos\phi - a \\ \rho\sin\phi \\z \end{smallmatrix } \bigr ) \\[4pt ] & { \ = \ } \gamma\,{{\textstyle\frac{m}{v } } } \int_{-\ell/2}^{\ell/2}\!\!\!{{\mathrm{d}}}{z}\int_0^a\!{{\mathrm{d}}}{\rho}\,\rho\int_0^{2\pi}\!\!\!{{\mathrm{d}}}\phi\ \frac{\rho\cos\phi - a}{(z^2+a^2+\rho^2 - 2\,a\rho\cos\phi)^{3/2}}\ \vec{e}_x \\[4pt ] & { \ = \ } 2\,\gamma\,{{\textstyle\frac{m}{v}}}\,\ell\ , \int_0^{1/2}\!\!{{\mathrm{d}}}{u}\int_0 ^ 1\!{{\mathrm{d}}}{v}\int_0^{2\pi}\!\!\!{{\mathrm{d}}}\phi\ \frac{v\,(v\cos\phi-1)}{u^2\ell^2/a^2 + 1+v^2 - 2\,v\cos\phi)^{3/2}}\ \vec{e}_x \ = : \-g_m\,\vec{e}_x\ , \end{aligned}\ ] ] where i employed the symmetry and substituted and for a dimensionless integral .the integration is elementary , & { \ = \ }2\,\gamma\,{{\textstyle\frac{m}{v}}}\,\ell\,\int_0 ^ 1\!{{\mathrm{d}}}{v}\int_{-1}^{1}\!\!{{\mathrm{d}}}{w}\ \frac{v\,(1-v\,w)/(1+v^2 - 2\,v\,w)}{\sqrt{(1-w^2)(t^{-2}+1+v^2 - 2\,v\,w)}}\ , \end{aligned}\ ] ] after substituting and using the definition .the remaining double integrals leads to lengthy expressions in terms of complete elliptic integrals , which i do not display here .for it diverges logarithmically .it is possible , however , to extract the limiting behavior for as which in leading order surprisingly agrees with that of . 
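Before turning to the comparison with a ball, here is a quick numerical sanity check of the axial result just derived, g_a = 2*pi*gamma*(M/V)*(ell + a - sqrt(ell^2 + a^2)), together with a scan of the cylinder-to-ball ratio that anticipates the comparison made in the next paragraph (the reference ball of equal mass and volume has the standard surface gravity gamma*M/R^2 with R = (3V/(4*pi))^(1/3)). This is an illustrative sketch in arbitrary units (gamma = M = V = 1), not code from the original calculation.

```python
import numpy as np

GAMMA = M = V = 1.0                      # arbitrary units, so M/V = 1

def g_axis_closed_form(ell, a):
    # g_a = 2*pi*gamma*(M/V) * (ell + a - sqrt(ell^2 + a^2))
    return 2.0 * np.pi * GAMMA * (M / V) * (ell + a - np.hypot(ell, a))

def g_axis_numeric(ell, a, n=1000):
    # midpoint quadrature of 2*pi*gamma*(M/V) * int dz int drho  rho*z/(z^2+rho^2)^(3/2)
    z = (np.arange(n) + 0.5) * ell / n
    rho = (np.arange(n) + 0.5) * a / n
    Z, R = np.meshgrid(z, rho, indexing="ij")
    val = (R * Z / (Z**2 + R**2) ** 1.5).sum() * (ell / n) * (a / n)
    return 2.0 * np.pi * GAMMA * (M / V) * val

def axis_to_ball_ratio(t):
    # t = 2a/ell at fixed volume V; the ball of equal mass and volume has g = gamma*M/R^2
    ell = (4.0 * V / (np.pi * t**2)) ** (1.0 / 3.0)
    a = 0.5 * t * ell
    R = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
    return g_axis_closed_form(ell, a) / (GAMMA * M / R**2)

if __name__ == "__main__":
    ell, a = (4.0 * V / np.pi) ** (1.0 / 3.0), 0.5 * (4.0 * V / np.pi) ** (1.0 / 3.0)  # t = 1
    print("closed form:", g_axis_closed_form(ell, a), " numeric:", g_axis_numeric(ell, a))
    t = np.linspace(0.5, 3.0, 100001)
    r = axis_to_ball_ratio(t)
    above = t[r > 1.0]
    print("g_a exceeds the ball for t in [%.5f, %.5f]" % (above.min(), above.max()))
    print("maximum ratio %.5f at t = %.5f" % (r.max(), t[np.argmax(r)]))
```

The scan reproduces the window quoted below in which the axial weight on the cylinder exceeds that on the reference ball, and locates the aspect ratio at which the excess is largest.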
to get a feeling for these results , it is natural to compare them with the surface gravity of a homogeneous ball of the same mass and density , thus of radius the surface gravity of the latteris well known , hence , the relation of the cylindrical to the spherical surface gravity is for the axis position , see fig . 2 .surprisingly , in the interval \ \approx\\bigl [ 0.98271\ , \ 1.50000 \bigr]\ ] ] the weight on the cyclinder s axis exceeds that on the reference ball ! indeed , its maximal value is attained at the asymptotic behavior is easily deduced to be for and , respectively . forthe equatorial position s surface gravity i do not have an analytic expression , only its limiting forms for and , respectively .numerical analysis shows that ( see fig . 2 ) attains a maximum at furthermore , for any given shape in an asymptotic regime , the equatorial position is superior to the axis one . only in the interval our mini - bug heavier on the axis .this finding suggests the question : can one do better than the cylinder with a clever choice of shape ? it turns the problem into a variational one .suppose i have by some means discovered the homogeneous body which , for fixed mass and volume , yields the maximally possible gravitational pull in some location on its surface .without loss of generality i can put this point to the origin of my coordinate system and orient the solid in such a way that its outward normal in this point aims in the positive direction , so gravity pulls downwards as is customary . expressing the surface gravity at this position for an arbitrary body as a functional of its shape , then must maximize this functional , under the constraint of fixed mass and volume .the following three features of the optimal shape are evident : * it does not have any holes , so has just a single boundary component * it is convex * it is rotationally symmetric about the normal at the origin ( 140,260)(60,65 ) ( 0,0),title="fig:",width=453 ] ( 140,180)(-50,0 ) ( 0,0),title="fig:",width=226 ] these facts imply that the surface may be parametrized as in fig . 3 , with and .the function ( which may be extended via ) completely describes the shape of the solid of revolution .it may be viewed as the boundary curve of the intersection of with the plane .its convexity implies the condition employing the symmetry under reflection on the rotational axis , the surface gravity functional ( [ surfacegravity ] ) then reads { \ = \ } \gamma\,{{\textstyle\frac{m}{v } } } \int_b\!\frac{{{\mathrm{d}}}^3{\!\vec{\,r}}}{r^2}\ { { \textstyle\frac{1}{2}}}(\vec{e}_{{\!\vec{\,r}}}+s\vec{e}_{{\!\vec{\,r } } } ) \ = : \-g[r]\,\vec{e}_z\ , \ ] ] {\ = \ } 2\pi\,\gamma\,{{\textstyle\frac{m}{v } } } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\int_0^{r({\theta})}\!\!{{\mathrm{d}}}{r}\;\cos{\theta}{\ = \ } 2\pi\,\gamma\,{{\textstyle\frac{m}{v } } } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ r({\theta } ) \cos{\theta}\ .\ ] ] it is to be maximized with the mass ( and thus the volume ) kept fixed , { \ = \ } { { \textstyle\frac{m}{v } } } \int_b\!{{\mathrm{d}}}^3{\!\vec{\,r}}{\ = \ } 2\pi\,{{\textstyle\frac{m}{v } } } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\int_0^{r({\theta})}\!\!r^2{{\mathrm{d}}}{r } { \ = \ } { { \textstyle\frac{2\pi}{3}}}\,{{\textstyle\frac{m}{v } } } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ r({\theta})^3 \! 
\over = \ m\ .\ ] ] such constrained variations are best treated by the method of lagrange multipliers , which here instructs me to combine the two functionals to { \ = \ } g[r]\ -\ { \lambda}\bigl(m[r]-m\bigr)\ , \ ] ] introducing a lagrange multiplier ( a real parameter to be fixed subsequently ) .more explicitly , { \ = \ } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ \bigl [ \gamma\,r({\theta})\cos{\theta}\ -\ { { \textstyle\frac{1}{3}}}{\lambda}\,r({\theta})^3 \bigr ] \ -\ { \lambda}\,{{\textstyle\frac{v}{2\pi}}}\ , \ ] ] so clearly fixes the volume of to be equal to . demanding that , for fixed but arbitrary , is stationary under any variation of the boundary curve , , determines : { \ = \ } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ \delta r({\theta})\ \bigl[ \gamma\,\cos{\theta}\ -\ { \lambda}\,r_{\lambda}({\theta})^2 \bigr]\ , \ ] ] so i immediately read off it remains to compute the value of the lagrange multiplier by inserting the solution into the constraint ( [ constraint ] ) , { \ = \ } { { \textstyle\frac{2\pi}{3}}}\,{{\textstyle\frac{m}{v } } } \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ \bigl ( { { \textstyle\frac{\gamma}{\bar{\lambda}}}}\,\cos{\theta}\bigr)^{3/2 } { \ = \ } { { \textstyle\frac{4\pi}{15}}}\,{{\textstyle\frac{m}{v}}}\,\bigl({{\textstyle\frac{\gamma}{\bar{\lambda}}}}\bigr)^{3/2}\ , \ ] ] yielding and hence the complete solution as displayed in fig . 4 , what does this curve look like ?let me pass to cartesian coordinates in the plane , which yields the sextic curve ( cubic in squares ) the parameter only takes care of the physical dimensions and determines the overall size of the solid. in dimensionless coordinates it may be put to unity , which fixes the vertical diameter to be equal to 2 and allows for a comparison of my optimal curve with the unit circle , with ] . since in the interval of question ,my curve lies entirely outside the reference circle , touching it only twice on the axis .( note that so the corresponding volumes differ . )other than the sphere , my curve has a critical point : due to near the origin , the curvature vanishes there . clearly , the vertical extension of is while its width is easily computed to be the shape of my optimal body vaguely resembles an apple , with the flatter side up .my final goal is to calculate the maximal possible weight , or { \ = \ } 2\pi\,\gamma\,{{\textstyle\frac{m}{v}}}\,2r_0 \int_0 ^ 1\!{{\mathrm{d}}}\cos{\theta}\ \bigl ( \cos{\theta}\bigr)^{3/2 } { \ = \ } 2\pi\,\gamma\,{{\textstyle\frac{m}{v}}}\,\root 3 \of { { { \textstyle\frac{15\,v}{4\pi}}}}\;{{\textstyle\frac{2}{5 } } } { \ = \ } \bigl ( { { \textstyle\frac{4\pi\sqrt{3}}{5 } } } \bigr)^{2/3 } \gamma\,m\,v^{-2/3}\ .\ ] ] comparing with the spherical shape , }{g_b } { \ = \ } 3\cdot 5^{-2/3 } { \ = \ } { { \textstyle\frac{3}{5}}}\root 3 \of 5 \ \approx\ 1.02599 \ .\ ] ] i conclude that by homogeneous reshaping it is possible to increase the surface gravity of a spherical ball by at most ! 
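The quoted gain of 3*5^(-2/3) ≈ 1.02599 over the sphere can be reproduced numerically from the reduced integrals above, using the stationary profile r(theta) proportional to sqrt(cos theta). The sketch below is an independent check in arbitrary units, not the author's calculation.

```python
import numpy as np

def optimal_shape_gain(n=100000):
    """Surface gravity at the origin for r(theta) = r0*sqrt(cos theta), divided by the
    surface gravity of a ball of the same mass and volume.  Uses the reduced integrals
    g ~ 2*pi*r0 * int_0^1 sqrt(c)*c dc  and  V = (2*pi/3)*r0^3 * int_0^1 c^(3/2) dc,
    with c = cos(theta)."""
    c = (np.arange(n) + 0.5) / n            # midpoint grid for c = cos(theta)
    dc = 1.0 / n
    profile = np.sqrt(c)                    # r(theta) / r0
    V = 1.0
    I3 = np.sum(profile**3) * dc            # analytically 2/5
    r0 = (3.0 * V / (2.0 * np.pi * I3)) ** (1.0 / 3.0)   # volume constraint fixes r0
    I1 = np.sum(profile * c) * dc           # analytically 2/5
    g_opt = 2.0 * np.pi * r0 * I1           # per gamma*M/V
    R = (3.0 * V / (4.0 * np.pi)) ** (1.0 / 3.0)
    g_ball = V / R**2                       # per gamma*M/V
    return g_opt / g_ball

if __name__ == "__main__":
    print(optimal_shape_gain(), "vs analytic 3*5**(-2/3) =", 3.0 * 5.0 ** (-2.0 / 3.0))
```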
( 140,180)(60,165 ) ( 0,0),title="fig:",width=453 ] ( 140,180)(-15,20 ) ( 0,0),title="fig:",width=302 ]since the cylinder shape is already superior to the spherical one for maximizing surface gravity , it is interesting to explore a few other more or less regular bodies , to see how close they can get to the optimal value of .let me discuss three cases which are fairly easy to parametrize in the cylindrical coordinates chosen .( 140,140)(0,0 ) ( 0,0 ) ( 140,140)(0,0 ) ( 0,0 ) first , i consider a conical segment of a spherical ball centered in the origin , with opening angle and radius , see fig ., one simply has thus the surface gravity ( [ functional ] ) reduces to since at the same time , one gets leading to the curve in fig . 6 , the best opening angle occurs at an angle of about , clearly , the spherical ball beats any cone . the value describes a semi - ball , which yields ( 140,240)(-20,-10 ) ( 0,0),title="fig:",width=264 ] ( 140,240)(-20,0 ) ( 0,0),title="fig:",width=264 ] second ,let me try out the radius function being an arbitrary power of , displayed in fig .7 for .this produces the special value of yields a spherical ball , which separates squashed forms ( ) from elongates ones ( ) .with i can eliminate and find which is shown in figthis is indeed maximized for as was already found in ( [ solution ] ) and ( [ maximum ] ) .it exceeds unity in the interval ., width=302 ] ( 140,150)(-20,10 ) ( 0,0),title="fig:",width=264 ] ( 140,150)(-20,0 ) ( 0,0),title="fig:",width=264 ] third , i look at an oblate ellipsoid of revolution with minor semi - axis length and eccentricity , see fig . 9 . in this case , which includes the sphere for .( the prolate case corresponds to imaginary . ) the surface gravity and mass integrals then become respectively . from thisi conclude that shown in figthis is larger than one for and is maximized numerically at hence , i can come to within less than of the optimal surface gravity by engineering an appropriate ellipsoid ., width=302 ]the main result of this short paper is a universal sixth - order planar curve , which characterizes the shape of the homogeneous body admitting the maximal possible surface gravity in a given point , for unit mass density and volume .it is amusing to speculate about its use for asteroid engineering in an advanced civilization or our own future .this curve seems not yet to have occurred in the literature , and so i choose to name it `` gharemani curve '' after my deceased friend who initiated the whole enterprise .the maximally achievable weight on bodies of various shapes is listed in the following table .it occurs at the intersection of the rotational symmetry axis with the body s surface and is normalized to the value on the spherical ball . [ cols="^,^,^,^,^,^",options="header " , ] it can pay off to get inspired by the curiosity of your non - scientist friends .the result is a lot of fun and may even lead to new science !i thank michael flohr for help with mathematica and the integral ( [ hardintegral ] ) .
|
I pose the question of maximal Newtonian surface gravity on a homogeneous body of a given mass and volume but with variable shape. In other words, given an amount of malleable material of uniform density, how should one shape it in order for a microscopic creature on its surface to experience the largest possible weight? After evaluating the weight on an arbitrary cylinder, at the axis and at the equator, and comparing it to that on a spherical ball, I solve the variational problem to obtain the shape which optimizes the surface gravity in some location. The boundary curve of the corresponding solid of revolution is given by r(θ) ∝ √(cos θ), or equivalently (x²+z²)³ ∝ z² in Cartesian coordinates, and the maximal weight (attained where the symmetry axis meets the surface) exceeds that on a solid sphere by a factor of 3·5^(-2/3) ≈ 1.026, an increment of about 2.6%. Finally, the optimal parameter values and the achievable maxima are computed for three other families of shapes. ITP-UH-19/15. On asteroid engineering. Institut für Theoretische Physik and Riemann Center for Geometry and Physics, Leibniz Universität Hannover, Appelstraße 2, 30167 Hannover, Germany. Email: olaf.lechtenfeld.uni-hannover.de. In memory of Pascal Gharemani, 03/08/1953 - 03/08/2015.
|
on 27 march 2004 , students from across the midwest united states gathered at the rose - hulman institute of technology for the 2004 mupec ( midwest undergraduate private engineering colleges ) conference .this is an annual conference sponsored by the mupec group , comprising the institutions listed in table [ table1 ] .a different institution hosts the event each year .participants presented papers or posters on projects in mathematics , computer , and engineering disciplines , and also participated in a multidisciplinary design competition .this paper will focus on the design competition developed by the authors , and especially the cryptography problem which students had to solve .the challenge for the conference organizers is to create a design problem suitable for students from a variety of science , mathematics and engineering disciplines .our goal in designing the competition was to create a day - long design problem suitable for undergraduates in engineering , mathematics and science .[ table1 ] .mupec member institutions [ cols= " < " , ]the purpose of the workshop was to familiarize the students both with the ciphers they were going to be breaking and the software they were going to be using to assist them .the workshop introduced three types of ciphers : additive ciphers ( a.k.a shift ciphers or general caesar ciphers ) , affine ciphers , and two - by - two hill ciphers ( a.k.a .matrix ciphers ) . for each of the three types of cipher we took the students through a similar routine .first , we gave a brief explanation and an example .then we had the students encipher a given message by hand using a given key . to check their answer, we showed them how to decipher the message using custom software , as described below .( the workshop was held in a computer - equipped classroom . ) after that , we talked about how to break the cipher using frequency distributions .we showed them how to use the software to determine and test a probable key for the cipher .finally , we let the students practice breaking a set of sample ciphers programmed into the software .slides from the workshop are available on the web at , under `` competition materials '' .the software was written by scott for the workshop and the competition .it was written in java and distributed as a web - based applet .there were three functions for each cipher : construct a letter frequency distribution ( or , in the case of the hill cipher , a digraph frequency distribution ) , recover a probable key based on a ( very small ) set of ciphertext - plaintext pairs , and decipher the message based on the probable key .the codebreaking functions for the additive and affine ciphers , which are letter substitution ciphers , were based on the letter frequency method . in this method, the codebreaker prepares a `` letter frequency distribution '' showing how often each ciphertext letter appears in the text .this is then compared against the known average frequency of letters in english plaintext ; `` e '' is the most common , `` t '' is next , and so on . in the case of the additive cipherthe key may be recovered from the knowledge of a single plaintext - ciphertext pair : the key consists of the numerical value of the ciphertext letter minus the value of the corresponding plaintext letter , modulo 26 . thus if the codebreaker can correctly guess the ciphertext letter corresponding to `` e '' , he or she can obtain the key and decipher the message . 
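A minimal sketch of the two letter-substitution attacks described here and in the next paragraph: the additive (shift) key is recovered from the single guess "most frequent ciphertext letter corresponds to plaintext e", and the affine key (a, b) from two guessed pairs by solving a*p + b = c (mod 26). This is illustrative Python, not the Java applet used in the workshop, and the function names are my own.

```python
from collections import Counter
from math import gcd

A = ord('A')

def shift_decrypt(text, key):
    # additive cipher: shift each letter back by the key, leave other characters alone
    return ''.join(chr((ord(ch) - A - key) % 26 + A) if ch.isalpha() else ch
                   for ch in text.upper())

def guess_shift_key(ciphertext):
    """Additive cipher: key = (most frequent ciphertext letter) - 'E'  (mod 26)."""
    freq = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    return (ord(freq.most_common(1)[0][0]) - ord('E')) % 26

def affine_key_from_pairs(p1, c1, p2, c2):
    """Affine cipher: solve a*p + b = c (mod 26) for two plaintext/ciphertext pairs.
    Returns None when the guesses admit no valid key (no solvable equations)."""
    v = lambda ch: ord(ch.upper()) - A
    try:
        inv = pow((v(p1) - v(p2)) % 26, -1, 26)     # modular inverse, Python 3.8+
    except ValueError:
        return None
    a = ((v(c1) - v(c2)) * inv) % 26
    b = (v(c1) - a * v(p1)) % 26
    return (a, b) if gcd(a, 26) == 1 else None

if __name__ == "__main__":
    ct = shift_decrypt("SEVEN THREE TWO ONE IS THE NUMBER HE NEEDS", -9)  # encrypt: shift by 9
    k = guess_shift_key(ct)
    print("shift key:", k, "->", shift_decrypt(ct, k))
    # suppose frequency analysis suggests plaintext 'e' -> 'R' and 't' -> 'W' (illustrative)
    print("affine key:", affine_key_from_pairs('e', 'R', 't', 'W'))
```

In both cases a wrong frequency guess simply produces gibberish (or no key at all), which is exactly the feedback loop the workshop software supported.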
for the affine cipher ,the numerical value of the plaintext letter is multiplied by the first key number and added to the second key number , modulo 26 , to obtain the numerical value of the ciphertext letter .thus the key can be recovered from the knowledge of two plaintext - ciphertext pairs , which sets up a system of two equations in two unknowns which the codebreaker can ( hopefully ) solve .for example , if the codebreaker can correctly guess , from the letter frequency distribution , the ciphertext letters corresponding to `` e '' and `` t '' , he or she can obtain the key . the hill cipher is slightly different because it is a block substitution cipher . in the two - by - two case used in the workshop , each pair of consecutive plaintext numbers is multiplied by the key matrix modulo 26 to obtain a pair of ciphertext numbers . therefore recovering the key requires the knowledge of two different plaintext - ciphertext correspondences , each consisting of a plaintext pair and the corresponding ciphertext pair . in this casethe codebreaker prepares a `` digraph frequency distribution '' showing how often each possible pair of ciphertext letters appears consecutively in the text .this is then compared against the average frequency of letter pairs in english plaintext .this is not as well known as for letter frequencies , but it has been found that `` th '' is the most common letter pair , followed by `` he '' , and so on . if the codebreaker now successfully guesses the ciphertext pairs corresponding to `` th '' and `` he '' , he or she can solve two matrix equations in two unknowns and once again obtain the key .the software was designed to aid this process as follows : the user entered a ciphertext or selected one out of a sample set of ciphertexts .then he or she used the software to create either a letter frequency or digraph frequency distribution .the user then determined a probable key as follows : for the case of an additive cipher , the user entered a guess for the ciphertext equivalent of plaintext `` e '' ; for an affine cipher guesses for both `` e '' and `` t '' were entered ; and for the two - by - two hill cipher guesses for the pairs `` th '' and `` he '' were entered .the `` attack '' function of the software solved the appropriate equations and returned the corresponding key , or a message that no such key was possible .( the equations were unsolvable . 
) if a key was returned , the user went on to the deciphering function and attempted to decipher the messages .the students were made aware that all of the plaintext messages used in the contest were recognizable ( if not necessarily meaningful ) english sentences , so that it was immediately apparent if the key was correct .the software , including sample ciphertexts , is available at , under `` competition and software '' , and screen shots are shown in figures [ fig : frequency ] and [ fig : decrypt ] .students were told that ciphertexts 110 were encrypted with an additive cipher , ciphertexts 1120 were encrypted with an affine cipher , and ciphertexts 2130 were encrypted with a hill cipher .[ fig : frequency ] [ fig : decrypt ] students were directed to pay special attention to the form of the decrypted sample messages , which were constructed in exactly the same manner as the plaintext of the messages used in the actual competition .each message started with a four digit pin ( spelled out in words ) , which was the only part of the message that the students needed to know in the actual competition .the rest of the message consisted of several meaningless ( but grammatically correct ) sentences which were chosen at random by a computer program from a list .the list was constructed to try to produce a large number of `` e ' 's , `` t ' 's , `` th ' 's , and `` he ' 's in order to make the frequency distribution attack feasible with a reasonably small number of guesses .however , this was not completely successful for the case of the hill cipher , and some of the sample texts required quite a few guesses .one `` round '' was conducted for each team ; for each round an `` offensive '' and a `` defensive '' team was picked such that each team got exactly one offensive and one defensive opportunity .the offensive team s rocket was loaded into the underwater missile - launching tube .the hackers from the offensive and defensive teams were each seated at a laptop computer with the workshop software installed and a set of ciphertexts loaded which they had not seen before . all ciphertexts used in the actual contest involved affine ciphers , and this was made known to the contestants at the start of the contest . also loaded onto the computers was control software written by laurence merkle which took a round number and a four digit pin and checked the pin to see if it corresponded to the ciphertext for that round .if the pin was correct , the software was programmed to send a signal to a switching module designed and built by tina hudson .the switching module was built to determine which of the two laptops had sent the signal first .the intended plan for the switching module was that if the offensive team sent the signal first then the module would produce a `` launch '' result which would connect a six - volt battery to the ignitor of the model rocket , causing the rocket to launch .if the defense succeeded first the module would produce a `` no - launch '' result and the rocket would not launch .( due to electrical issues this result was not conveyed directly to the rocket launcher during the actual competition ; rather an indicator light indicated which team had succeeded first .the electrical issues have since been resolved , and a new launch board has been built and tested for future use . 
) in the actual execution , the round was started with an announcement of the round number and the simultaneous starting of a stopwatch for each team .the hackers then proceeded to enter the round number into the workshop software and get their ciphertext , which they attempted to break .( for fairness , both teams received the same problem . ) when they had broken the text and obtained the four - digit number they entered the number into the control software , which determined if it was correct .each team s stopwatch was stopped when the correct code number was recognized .both of the measured times were used in scoring , so both hackers were directed to proceed until they had broken the cipher .the winner of the ciphertext competition was announced , and in either case the rocket was launched so that its performance could be used in the scoring .figure [ fig : rocket ] presents a schematic of the competition .[ fig : rocket ] team scores for the competition as a whole were based on the performance of the rocket ( 65% ) , the accuracy of the team s prediction of the performance of the rocket ( 12% ) , the aesthetics of the painted nose - cones of the rockets ( 3% ) , and the code - breaking times for both the offensive opportunity ( 10% ) and the defensive opportunity ( 10% ) .timing scores were calculated using the time measured as a fraction of the maximum time measured for any team in any round .another scoring possibility might take into account the `` head - to - head '' nature of the competition more directly . also, the system we used did not account for the possibility that some ciphertexts might be more difficult to decipher than others .our goal in designing the competition was to create a day - long design problem suitable for undergraduates in engineering , mathematics and science disciplines .surveys filled out by the students support this goal , both in general ( see ) and in the cryptography part of the competition .students were originally not very confident of their ability to break codes and ciphers and to use matrices to encipher messages , but they indicated that they gained confidence after attending the workshop and practicing ( or watching their teammates practice ) these skills over the course of the day .however , the students indicated that they had lost confidence in their ability to encrypt messages using a simple cipher .we hypothesize that students on average were perhaps not aware before the workshop of some of the complexities of what could be considered a `` simple '' cipher , and were thus overconfident .we think the surveys indicate that on average , some level of learning has occurred for a mixed group of students from different disciplines and different institutions .quite a bit of time and effort went in to putting this competition together .we estimate that prof .holden put in about 30 hours of work on the design of the cryptography part of the competition and preparing the cryptography workshop .hudson spent about 20 hours for the switching module .layton put in about 80 hours of work on the design and testing of the launch - tube apparatus , coordinating the overall competition design , and organizing the conference .merkle spent an estimated 50 hours on the control software .erin bender and gerald rea , mechanical engineering majors at rose - hulman , put in approximately 100 hours of work on the original analysis and simulation for the design as well as the building and testing and redesign of the physical apparatus .they were compensated for 
this as work - study employees .scott dial put in approximately 10 hours of work on the workshop software , for which he was compensated with extra credit in prof .holden s cryptography class . however , much of this would effort would not have to be duplicated by someone putting on a similar competition .complete plans and instructions for all aspects of the competition are or will be posted at and we estimate that a person or team of people with the appropriate expertise could reproduce the competition in perhaps a quarter of the time we spent .our thanks to so many without whom this design problem and competition simply would not have come together in time : our colleagues and coworkers ray bland , patsy brackin , gary burgess , pat carlson , mike fulk , and mike mcleish , and our students erin bender , scott dial and gerald rea .joshua holden is currently an assistant professor in the mathematics department of rose - hulman institute of technology , an undergraduate engineering college in indiana .he received his ph.d . from brown university in 1998 and held postdoctoral positions at the university of massachusetts at amherst and duke university .his research interests are in computational and algebraic number theory and in cryptography .his teaching interests include the use of technology in teaching and the teaching of mathematics to computer science majors , as well as the use of historically informed pedagogy .his non - mathematical interests currently include science fiction , textile arts , and choral singing .tina hudson received her ph.d . from georgia institute of technology in 2000 and is currently an assistant professor of electrical engineering at rose - hulman institute of technology .her research interests include the development of real - time neuromuscular models using integrated circuits and mems devices , linear threshold circuits , and methods to intuitively teach analog and digital integrated circuit design and mems devices .richard layton received his ph.d . from the university of washington in 1995 and is currently an associate professor of mechanical engineering at rose - hulman .his research and teaching interests are analysis , simulation and design of multidisciplinary engineering systems .prior to his academic career , he worked for twelve years in consulting engineering , culminating as a group head and a project manager .his non - engineering interests include woodworking and music composition and performance for guitar and small ensembles .larry merkle received his ph.d . from the air force institute of technology in 1996 and is currently an assistant professor of computer science and software engineering at rose - hulman institute of technology .prior to joining rose - hulman , he served almost 15 years as an active duty officer in the united states air force . during that time he served as an artificial intelligence project management officer , as chief of the plasma theory and computation center , and on the faculty of the united states air force academy .his interests include computer science education and the application of advanced evolutionary computation techniques to computational science and engineering problems .
|
for a recent student conference , the authors developed a day - long design problem and competition suitable for engineering , mathematics and science undergraduates . the competition included a cryptography problem , for which a workshop was run during the conference . this paper describes the competition , focusing on the cryptography problem and the workshop . notes from the workshop and code for the computer programs are made available via the internet . the results of a personal self - evaluation ( pse ) are described .
|
suppose that we have a large population in which only a small number of people are infected by a certain viral disease ( e.g. , one may think of a flu epidemic ) , and that we wish to identify the infected ones . by testing each member of the population individually, we can expect the cost of the testing procedure to be large .if we could instead pool a number of samples together and then test the pool collectively , the number of tests required might be reduced .this is the main conceptual idea behind the classical _ group testing _problem which was introduced by dorfman and later found applications in variety of areas .a few examples of such applications include testing for defective items ( e.g. , defective light bulbs or resistors ) as a part of industrial quality assurance , dna sequencing and dna library screening in molecular biology ( see , e.g. , and the references therein ) , multiaccess communication , data compression , pattern matching , streaming algorithms , software testing , and compressed sensing . see the books by du and hwang for a detailed account of the major developments in this area .symbols represent infected people among healthy people indicated by symbols .the dashed lines show the individuals contacted by the agents ., width=328 ] one way to acquire collective samples is by sending agents inside the population whose task is to contact people ( see fig .[ fig : agents ] ) .the agents can also be chosen as atm machines , cashiers in supermarkets , among other possibilities . once an agent has made contact with an `` infected '' person ,there is a _ chance _ that he gets infected , too . by the end of the testing procedure ,all agents are gathered and tested for the disease . here, we assume that each agent has a _ log file _ by which one can figure out with whom he has made contact .one way to implement the log in practice is to use identifiable devices ( for instance , cell phones ) that can exchange unique identifiers when in range .this way , one can for instance ask an agent to randomly meet a certain number of people in the population and at the end learn which individuals have been met from the data gathered by the device that is carried by the agent . note that , even if an agent contacts an infected person , he will not get infected with certainty . hence , it may well happen that an agent s result is negative ( meaning that he is not infected ) despite a contact with some infected person .we will assume that when an agent gets infected , the resulting infection will not be contagious , i.e. , an agent never infects other people .our ultimate goal is to identify the infected persons with the use of a simple recovery algorithm , based on the test results .we remark that this model is applicable in certain scenarios different from what we described as well .for instance , in classical group testing , `` dilution '' of a sample might make some of the items present in a pool ineffective .the effect of dilution can be captured by the notion of contamination in our model .it is important to notice the difference between this setup and the classical group testing where each contact with an infected person will infect the agent with certainty . 
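to make the cost argument above concrete , the short python sketch below compares individual testing with dorfman s classical two - stage pooling scheme ( test each pool once , then retest individually only the members of positive pools ) . this is only a back - of - the - envelope illustration of why pooling can reduce the number of tests ; it is not the agent - based scheme studied in this paper , and the population size , prevalence and pool sizes are made - up values .

```python
# Back-of-the-envelope comparison: individual testing vs. Dorfman's
# classical two-stage pooling.  Illustrative only; not the agent-based
# scheme of this paper.  All numbers below are assumptions.

def dorfman_expected_tests(population, prevalence, group_size):
    """Expected number of tests when the population is split into pools of
    `group_size`, each pool is tested once, and only members of positive
    pools are retested individually."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** group_size
    return population * (1.0 / group_size + p_pool_positive)

population = 10_000
prevalence = 0.01          # assumed fraction of infected individuals

print("individual testing:", population, "tests")
for g in (5, 10, 20, 50):
    expected = dorfman_expected_tests(population, prevalence, g)
    print(f"pool size {g:2d}: about {expected:,.0f} tests")
```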
in other words , in the classical group testing the decoder fully knows the sampling procedure , whereas in our setup , it has only uncertain knowledge .hence , in this scenario the decoder has to cope simultaneously with two sources of uncertainty , the unknown group of infected people and the partially unknown ( or stochastic ) sampling procedure .the collective sampling can be done in adaptive or non - adaptive fashions . in the former, samplings are carried out one at a time , possibly depending the outcomes of the previous agents . however , in the latter , the sampling strategy is specified and fixed before seeing the the test outcome for any of the agents . in this paperwe only focus on non - adaptive sampling methods , which is more favorable for applications . the idea behind our setup is mathematically related to compressed sensing .nevertheless , they differ in a significant way : in compressed sensing , the samples are gathered as linear observations of a sparse real signal and typically tools such as linear programming methods is applied for the reconstruction .to do so , it is assumed that the decoder knows the measurement matrix a priori . however , this is not the case in our setup .in other words , using the language of compressed sensing , in our scenario the measurement matrix might be `` noisy '' and is not precisely known to the decoder . as it turns out , by using a sufficient number of agentsthis issue can be resolved .to model the problem , we enumerate the individuals from to and the agents from to . let the non - zero entries of indicate the infected individuals within the population .moreover , we assume that is a -sparse vector , i.e. , it has at most nonzero entries ( corresponding to the infected population ) .we refer to the _ support set _ of as the the set which contains positions of the nonzero entries . as typical in the literature of group testing and compressed sensing , to model the non - adaptive samplings done by the agents , we introduce an boolean _ contact _ matrix where we set to one if and only if the agent contacts the person .as we see , the matrix only shows which agents contact which persons . in particular it does not indicate whether the agents eventually get affected by the contactlet us assume that at each contact with a sick person an agent gets infected independently with probability ( a fixed parameter that we call the _ contamination probability _ ) .therefore , the real _ sampling _ matrix can be thought of as a variation of in the following way : * each non - zero entry of is flipped to independently with probability ; * the resulting matrix is used just as in classical group testing to produce the _ outcome _ vector , where the arithmetic is boolean ( i.e. , multiplication with the logical and and addition with the logical or ) .the contact matrix , the outcome vector , the number of non - zero entries , and the contamination probability are known to the decoder , whereas the sampling matrix ( under which the collective samples are taken ) and the input vector are unknown .the task of the decoder is to identify the non - zero entries of based on the known parameters . 
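as a concrete illustration of the sampling model just described , the python sketch below draws a random sampling matrix from a given contact matrix ( each contact with an infected individual transmits independently with the contamination probability ) and computes the boolean outcome vector . the sizes , densities and probability used here are arbitrary toy values , and the names ( contact , sampling , x , y , p ) are ours rather than the paper s notation .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_outcomes(contact, x, p):
    """Given a boolean contact matrix (agents x individuals), a boolean
    infection vector x and contamination probability p, draw one random
    sampling matrix and return the boolean outcome vector.

    Each contact with an infected individual independently infects the
    agent with probability p; an agent tests positive if at least one
    such contact transmitted the infection (boolean OR of ANDs)."""
    contact = np.asarray(contact, dtype=bool)
    keep = rng.random(contact.shape) < p          # which contacts transmit
    sampling = contact & keep                     # the (hidden) sampling matrix
    y = (sampling.astype(int) @ np.asarray(x, dtype=int)) > 0
    return y.astype(int), sampling

# tiny illustration with made-up sizes
n, m, k, p = 20, 8, 2, 0.7
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1       # k infected individuals
contact = rng.random((m, n)) < 0.3                # who each agent meets
y, _ = simulate_outcomes(contact, x, p)
print("infected individuals:", np.flatnonzero(x))
print("agents' test outcomes:", y)
```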
as a toy example , consider a population with members where only two of them ( persons and ) are infected .we send three agents to the population , where the first one contacts persons , the second one contacts persons , and the third one contacts persons .therefore , the contact matrix and the input vector have the following form let us assume that only the second agent gets infected .this means that the outcome vector is as we can observe , there are many possibilities for the sampling matrix , all of the following form : where the question marks are with probability and with probability .it is the decoder s task to figure out which combinations make sense based on the outcome vector .for example , the following matrices and input vectors fit perfectly with : more formally , the goal of our scenario is two - fold : 1 . designing the contact matrix so that it allows unique reconstruction of _ any _ sparse input from outcome with overwhelming probability ( ) over the randomness of the sampling matrix .2 . proposing a recovery algorithm with low computational complexity . in this work, we present a probabilistic and a deterministic approach for designing contact matrices suitable for our problem setting along with a simple decoding algorithm for reconstruction .our approach is to first introduce a rather different setting for the problem that involves no randomness in the way the infection spreads out .namely , in the new setting an adversary can arbitrarily decide whether a certain contact with an infected individual results in a contamination or not , and the only restriction on the adversary is on the total amount of contaminations being made . in this regard , the relationship between the adversarial variation of the problem and the original ( stochastic ) problem can be thought of akin to the one between the combinatorial problem of designing block codes with large minimum distances as opposed to designing codes for stochastic communication channels .the reason for introducing the adversarial problem is its combinatorial nature that allows us to use standard tools and techniques already developed in combinatorial group testing .fortunately it turns out that solving the adversarial variation is sufficient for the original ( stochastic ) problem .we discuss this relationship and an efficient reconstruction algorithm in section [ sec : advers ] .our next task is to design contact matrices suitable for the adversarial ( and thus , stochastic ) problem .we extend two standard techniques from group testing to our setting .namely , we give a probabilistic and an explicit construction of the contact matrix in sections [ sec : prob ] and [ sec : expl ] , respectively .the probabilistic construction requires each agent to independently contact any individual with a certain well - chosen probability and ensures that the resulting data gathered at the end of the experiment can be used for correct identification of the infected population with overwhelming probability , provided that the number of agents is sufficiently large .namely , for contamination probability , we require agents , where is the estimate on the size of the infected population .the explicit construction , on the other hand , precisely determines which agent should contact which individual , and guarantees correct identification with certainty in the adversarial setting and with overwhelming probability ( over the randomness of the contaminations ) in the stochastic setting .this construction requires agents which is inferior 
than what achieved by the probabilistic construction by a factor .we point out that , very recently , atia and saligrama developed an information theoretic perspective applicable to a variety of group testing problems , including a `` dilution model '' which is closely related to what we consider in this work .contrary to our combinatorial approach , they use information theoretic techniques to obtain bounds on the number of required measurements .their bounds are with respect to random constructions and typical set decoding as the reconstruction method .specifically , in our terminology with contamination probability , they obtain an information theoretic upper bound of on the number measurements , which is comparable to what we obtain in our probabilistic construction .as is customary in the standard group testing literature , we think of the spartsity as a parameter that is noticeably smaller than the population size ; for example , one may take . indeed ,if becomes comparable to , there would be little point in using a group testing scheme and in practice , for large it is generally more favorable to perform trivial tests on the individuals .nevertheless it is easy to observe that our probabilistic scheme can in general achieve , but we ignore such refinements for the sake of clarity .the problem described in section [ sec : setting ] has a stochastic nature , in that the sampling matrix is obtained from the contact matrix through a random process . in this sectionwe introduce an adversarial variation of the problem that we find more convenient to work with . in the adversarial variation of the problem, the sampling matrix is obtained from the contact matrix by flipping up to arbitrary entries to on the support ( i.e. , the set of nonzero entries ) of each column of , for some _ error parameter _ .the goal is to be able to exactly identify the sparse vector despite the perturbation of the contact matrix and regardless of the choice of the altered entries .note that the classical group testing problem corresponds to the special case .thus the only difference between the adversarial problem and the stochastic one is that in the former problem the flipped entries of the contact matrix are chosen arbitrarily ( as long as there are not too many flips ) while in the latter they are chosen according to a specific random process .it turns out that the combinatorial tool required for solving the adversarial problem is precisely the notion of _ disjunct _ matrices that is well studied in the group testing literature .the formal definition is as follows .[ defn : disjunct ] a boolean matrix with columns is called -disjunct if , for every subset ] .take an element which is not in . by definition [ defn : disjunct] , we know that the column has more than entries on its support that are not present in the support of any .therefore , even after bit flips in , at least one entry in its support remains that is not present in the measurement outcome of , and this makes and distinguishable .for the reverse direction , suppose that is not -disjunct and take any ] with , which demonstrate a counterexample for being -disjunct .consider -sparse vectors and supported on and , respectively .an adversary can flip up to bits on the support of from to , leave the rest of unchanged , and ensure that the measurement outcomes for and coincide .thus is not suitable for the adversarial problem . 
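the disjunctness property above can be checked mechanically for small matrices . in the sketch below we write the two parameters as d ( the sparsity ) and e ( the error parameter ) ; per the proposition above , matrices passing this check are exactly the ones that allow distinguishing sparse vectors in the adversarial setting . the check enumerates all column subsets , so it is exponential in d and only meant for toy - sized examples .

```python
import numpy as np
from itertools import combinations

def is_disjunct(M, d, e):
    """Brute-force check of the disjunctness property: for every set S of at
    most d columns and every column j outside S, column j must have strictly
    more than e ones that are not covered by the union of the columns in S.
    Exponential in d, so only usable for toy-sized matrices."""
    M = np.asarray(M, dtype=bool)
    m, n = M.shape
    for size in range(d + 1):
        for S in combinations(range(n), size):
            union = np.zeros(m, dtype=bool)
            for i in S:
                union |= M[:, i]
            for j in range(n):
                if j in S:
                    continue
                uncovered = np.count_nonzero(M[:, j] & ~union)
                if uncovered <= e:
                    return False
    return True

# sanity check: the identity matrix trivially satisfies the property with e = 0
print(is_disjunct(np.eye(5, dtype=int), d=2, e=0))   # True
```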
of course, posing the adversarial problem is only interesting if it helps in solving the original stochastic problem from which it originates .below we show that this is indeed the case ; and in fact the task of solving the stochastic problem reduces to that of the adversarial problem ; and thus after this point it suffices to focus on the adversarial problem .[ prop : advers ] suppose that is an contact matrix that solves the adversarial problem for -sparse vectors with some error parameter .moreover , suppose that the weight of each column of is between and , for a parameter and a constant , and that , for a constant .then can be used for the stochastic problem with contamination probability , and achieves error probability at most , where probability is taken over the randomness of sampling ( and the constant behind depends on and ) .take any column of , and let be its weight .after the bit flips , we expect the weight of the column to reduce to . moreover , by chernoff bounds , the probability that ( for `` small '' ) the amount of bit flips exceeds is at most thus , by a union bound , the probability that the amount of bit flips at some column is not tolerable by is at most .note that , as we mentioned earlier , the adversarial problem is stronger than classical group testing , and thus , any lower bound on the number of measurements required for classical group testing applies to our problem as well .it is known that any measurement matrix that avoids confusion in standard group testing requires at least measurements .thus we must necessarily have as well , and this upper bounds the error probability given by proposition [ prop : advers ] by at most .suppose that the contact matrix is -disjunct .therefore , by proposition [ prop : disjunct ] it can combinatorially distinguish between -sparse vectors in the adversarial setting with error parameter . in this work we consider a very simple decoder that works as follows .* distance decoder : * for any column of the contact matrix , the decoder verifies the following : where is the vector consisting of the measurement outcomes .the coordinate is decided to be nonzero if and only if the inequality holds .the distance decoder correctly identifies the correct support of any -sparse vector ( with the above disjunctness assumption on ) .let be a -sparse vector and , , and denote the corresponding set of columns in the sampling matrix .obviously all the columns in satisfy ( as no column is perturbed in more than positions ) and thus the reconstruction includes the support of ( this is true regardless of the disjunctness property of ) .now let the vector be the bitwise or of the columns in so that , and assume that there is a column of outside that satisfies . thus we will have , and this violates the assumption that is -disjunct .therefore , the distance decoder outputs the exact support of .in light of propositions [ prop : disjunct ] and [ prop : advers ] , we know that in order to solve the stochastic problem with contamination probability and sparsity , it is sufficient to construct a -disjunct matrix for an appropriate choice of . 
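the distance decoder described above admits a very short implementation . as we read the description , a coordinate is declared nonzero exactly when the corresponding column of the contact matrix has at most e ones lying outside the support of the outcome vector ( that is , rows where the individual was contacted but the agent tested negative ) . the sketch below is a minimal version of that rule ; the threshold name e and the numpy vectorization are our choices , and the running time is linear in the size of the contact matrix .

```python
import numpy as np

def distance_decode(contact, y, e):
    """Distance decoder sketch: declare individual j infected iff column j of
    the contact matrix has at most e ones in rows where the outcome y is 0.
    Columns of truly infected individuals lose at most e ones under the
    adversarial perturbation, so they always pass; disjunctness rules out
    the remaining columns."""
    contact = np.asarray(contact, dtype=bool)
    y = np.asarray(y, dtype=bool)
    mismatches = (contact & ~y[:, None]).sum(axis=0)   # per-column counts
    return np.flatnonzero(mismatches <= e)
```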
in this section ,we consider a probabilistic construction for , where each entry of is set to independently with probability , for a parameter to be determined later , and with probability .we will use standard arguments to show that , if the number of measurements is sufficiently large , then the resulting matrix is suitable with all but a vanishing probability .let be an arbitrary ( and small ) constant .using chernoff bounds , we see that if ( which will be the case ) , with probability no column of will have weight greater than or less than . thus in order to be able to apply proposition [ prop : advers ] , it suffices to set as this value is larger than the error parameter required by the proposition . for the above choices of the parameters and , the probabilistic construction obtains a -disjunct matrix with probability using measurements .consider any set of columns of , and any column outside these , say the column where .first we upper bound the probability of a _ failure _ for this choice of and , i.e. , the probability that the number of the positions at the column corresponding to which all the columns in have zeros is at most . clearly if this event happens the -disjunct property is violated . on the other hand ,if for no choice of and a failure happens the matrix is indeed -disjunct .now we compute the failure probability for a fixed and .a row is _ good _ if at that row the column has a but all the columns in have zeros . for a particular row ,the probability that the row is good is .then failure corresponds to the event that the number of good rows is at most .the distribution on the number of good rows is binomial with mean . by a chernoff bound ,the failure probability is at most where the last inequality is due to the fact that is always between and .let .note that by choosing the parameters and as sufficiently small constants , can be made arbitrarily close to .now if we apply a union bound over all possible choices of and , the probability of coming up with a bad choice of would be at most .this probability vanishes so long as .along with propositions [ prop : disjunct ] and [ prop : advers ] , the result above immediately gives the following : the probabilistic design for construction of an contact matrix achieves measurements and error probability at most for the stochastic problem using distance decoder as the reconstruction method .the probabilistic construction results in a rather sparse matrix , namely , one with density that decays with the sparsity parameter . 
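the monte carlo sketch below ties the pieces together for the probabilistic construction : it draws bernoulli contact matrices , simulates the contaminations , applies the distance decoder , and reports how often the support is recovered exactly . the column density q , the number of agents m and the decoding threshold e used here are ad hoc values chosen by hand for the demonstration , not the quantities prescribed by the analysis above , so occasional decoding errors are consistent with the theory .

```python
import numpy as np

rng = np.random.default_rng(7)

def trial(n, k, m, q, p, e):
    """One random trial: Bernoulli(q) contact matrix, k infected individuals,
    each contact with an infected individual transmits with probability p,
    then distance decoding with threshold e."""
    contact = rng.random((m, n)) < q                 # who contacts whom
    x = np.zeros(n, dtype=bool)
    x[rng.choice(n, size=k, replace=False)] = True   # infected individuals
    sampling = contact & (rng.random((m, n)) < p)    # contacts that transmit
    y = sampling[:, x].any(axis=1)                   # agents' test outcomes
    mism = (contact & ~y[:, None]).sum(axis=0)       # ones outside supp(y)
    return np.array_equal(mism <= e, x)

# toy parameters; q and e are hand-tuned for this demo, not from the analysis
n, k, m, q, p = 2000, 3, 1000, 0.12, 0.8
e = int(0.42 * m * q)                                # between the two regimes
wins = sum(trial(n, k, m, q, p, e) for _ in range(20))
print(f"exact support recovery in {wins}/20 trials")
```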
below we show that sparsity is necessary condition for the construction to work : let be an boolean random matrix , where for an integer , which is constructed by setting each entry independently to with probability .then either or otherwise the probability that is -disjunct ( for any ) approaches to zero as grows .suppose that is an matrix that is -disjunct .observe that , for any integer , if we remove any columns of and all the rows on the support of those columns , the matrix must remain -disjunct .this is because any counterexample for the modified matrix being -disjunct can be extended to a counterexample for being -disjunct by adding the removed columns to its support .now consider any columns of , and denote by the number of rows of at which the entries corresponding to the chosen columns are all zeros .the expected value of is .moreover , for every we have \leq \exp ( -\delta^2 ( 1-q)^t m/4 ) \ ] ] by a chernoff bound .let be the largest integer for which .if , we let above , and this makes the right hand side of upper bounded by .so with probability , the chosen columns of will keep at most , and removing those columns and rows on their union leaves the matrix -disjunct , which obviously requires at least rows ( as even a -disjunct matrix needs so many rows ) .therefore , we must have or otherwise ( with overwhelming probability ) will not be -disjunct .but the latter inequality is not satisfied by the assumption on .so if , little chance remains for to be -disjunct .now consider the case . by a similar argument as above, we must have or otherwise the matrix will not be -disjunct with overwhelming probability .the above inequality implies that we must have which , for gives .in the previous section we showed how a random construction of the contact matrix achieves the desired properties for the adversarial ( and thus , stochastic ) model that we consider in this work .however , in principle an unfortunate choice of the contact matrix might fail to be of use ( for example , it is possible though very unlikely that the contact matrix turns out to be all zeros ) and thus it is of interest to have an explicit and deterministic construction of the contact matrix that is guaranteed to work . in this section ,we demonstrate how a classical construction of superimposed codes due to kautz and singleton can be extended to our setting by a careful choice of the parameters .this is given by the following theorem .there is an explicit construction for an contact matrix that is guaranteed to be suitable for the stochastic problem with contamination probability and sparsity parameter , and achieves let be an even power of a prime , and . consider a reed - solomon code of length and dimension over an alphabet of size .the contact matrix is designed to have columns , one for each codeword .consider a mapping that maps each element of to a unique canonical basis vector of length ; e.g. 
, , , etc .the column corresponding to a codeword is set to the binary vector of length that is obtained by replacing each entry of by , blowing up the length of from to .note that the number of columns of is , and each column has weight exactly .moreover , the support of any two distinct columns intersect at less than entries , because of the fact that the underlying reed - solomon code is an mds code and has minimum distance .thus in order to ensure that is -disjunct , it suffices to have ( so that no set of columns of can cover too many entries of any column outside the set ) , or equivalently , by proposition [ prop : advers ] , we need to set for an arbitrary constant . thus in order to satisfy , it suffices to have , which gives .as can be chosen arbitrarily small , the denominator can be made arbitrarily close to and thus we conclude that this construction achieves measurements , which is essentially larger than the amount achieved by the probabilistic construction by a factor .observe that , unlike the probabilistic construction of the previous section , the explicit construction above guarantees a correct reconstruction in the adversarial setting ( where up to a fraction of the entries on the support of each column of the contact matrix might be flipped to zero ) .moreover , in the original stochastic setting with contamination probability , a single matrix given by the explicit construction guarantees correct reconstruction with overwhelming probability , where the probability is only over the randomness of the testing procedure .this is in contrast with the probabilistic construction where the failure probability is small , but originates from two sources ; namely , unfortunate outcome of the testing procedure as well as unfortunate choice of the contact matrix .h. ngo and d. du , `` a survey on combinatorial group testing algorithms with applications to dna library screening , '' _ dimacs series on discrete math . and theoretical computer science _ , vol . 55 , pp .171182 , 2000 .e. candes , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .inform . theory _52 , no . 2 , pp . 489509 ,
|
detection of defective members of large populations has been widely studied in the statistics community under the name `` group testing '' , a problem which dates back to world war ii when it was suggested for syphilis screening . there , the main interest is to identify a small number of infected people among a large population using _ collective samples_. in viral epidemics , one way to acquire collective samples is by sending agents inside the population . while in classical group testing , it is assumed that the sampling procedure is fully known to the reconstruction algorithm , in this work we assume that the decoder possesses only _ partial _ knowledge about the sampling process . this assumption is justified by observing the fact that in a viral sickness , there is a chance that an agent remains healthy despite having contact with an infected person . therefore , the reconstruction method has to cope with two different types of uncertainty ; namely , identification of the infected population and the partially unknown sampling procedure . in this work , by using a natural probabilistic model for `` viral infections '' , we design non - adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability . we propose both probabilistic and explicit design procedures that require a `` small '' number of agents to single out the infected individuals . more precisely , for a contamination probability , the number of agents required by the probabilistic and explicit designs for identification of up to infected members is bounded by and , respectively . in both cases , a simple decoder is able to successfully identify the infected population in time .
|