lower dimensional models of shells are preferred in numerical computations to three - dimensional models when the thickness of the shells is ` very small ' .a lot of work has been done on the lower dimensional approximation of boundary value and eigenvalue problem for elastic plates and shells ( cf .recently some work has been done on the lower dimensional approximation of boundary value problem for piezoelectric shells ( cf . ) . in this paper, we would like to study the limiting behaviour of the eigenvalue problems for thin piezoelectric shallow shells .we begin with a brief description of the problem and describe the results obtained .let with and the mapping is given by for all , where is an injective mapping of class and is a unit normal vector to the middle surface of the shell .let with meas( and meas( .let and let .the shell is clamped along the portion of the lateral surface .then the variational form of the eigenvalue problem consists of finding the displacement vector , the electric potential and satisfying eq .( [ eq : a9 ] ) .we then show that the component of the eigenvector involving the electric potential can be uniquely determined in terms of the displacement vector and the problem thus reduces to finding satisfying equations ( [ eq : a31 ] ) and ( [ eq : a32 ] ) . after making appropriate scalings on the data and the unknowns , we transfer the problem to a domain which is independent of . then we show that the scaled eigensolutions converge to the solutions of a two - dimensional eigenvalue problem ( [ eq : e55 ] ) .throughout this paper , latin indices vary over the set and greek indices over the set for the components of vectors and tensors . the summation over repeated indices will be used .let be a bounded domain with a lipschitz continuous boundary and let lie locally on one side of .let with meas and meas( .let and .for each , we define the sets \gamma^\epsilon_1 & = \gamma_1\times(-\epsilon , \epsilon),\quad \gamma^\epsilon_e=\gamma_e\times(-\epsilon , \epsilon),\quad \gamma^\epsilon_s=\gamma_s\times(-\epsilon , \epsilon).\end{aligned}\ ] ] let be a generic point on and let and .we assume that for each , we are given a function of class .we then define the map by at each point of the surface , we define the normal vector for each , we define the mapping by it can be shown that there exists an such that the mappings are diffeomorphisms for all .the set is the reference configuration of the shell . for , we define the sets & \hat{\gamma}^\epsilon_e = \phi(\gamma^\epsilon_e),\quad \hat{\gamma}^\epsilon_s=\phi(\gamma^\epsilon_s),\quad \hat{\gamma}^\epsilon_{ed}=\hat{\gamma}^\epsilon_e\cup \hat{\gamma}^{\pm\epsilon}\end{aligned}\ ] ] and we define vectors and by the relations which form the covariant and contravariant basis respectively of the tangent plane of at .the covariant and contravariant metric tensors are given respectively by the christoffel symbols are defined by note however that when the set is of the special form and the mapping is of the form ( [ eq : a2 ] ) , the following relations hold : the volume element is given by where it can be shown that there exist constants and such that for .let and be the elastic , piezoelectric and dielectric tensors respectively .we assume that the material of the shell is _ homogeneous and isotropic_. 
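for such a material, the elasticity tensor referred to in the next sentence has a standard form; the following display is a reconstruction of that standard isotropic expression in cartesian components (with kronecker deltas), written here for reference since the original formula did not survive extraction, and the paper's curvilinear version may instead carry the contravariant metric tensors:

\hat{a}^{ijkl,\epsilon} \;=\; \lambda^\epsilon\,\delta^{ij}\delta^{kl} \;+\; \mu^\epsilon\left(\delta^{ik}\delta^{jl}+\delta^{il}\delta^{jk}\right),

where \lambda^\epsilon and \mu^\epsilon are the lamé constants mentioned below.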
then the elasticity tensor is given by where and are the lam constants of the material .these tensors satisfy the following coercive relations .there exists a constant such that for all symmetric tensors and for any vector , & \hat{\cal e}^{kl,\epsilon}t_kt_l \geq c{\sum_{j=1}^3t^2_j}.\label{eq : a5}\end{aligned}\ ] ] moreover we have the symmetries then the eigenvalue problem consists of finding such that \hat{\sigma}^\epsilon(\hat{u}^\epsilon , \hat{\varphi}^\epsilon)\nu = 0 \mbox { on } \hat{\gamma}^\epsilon_n\\[.1pc ] \hat{u}^\epsilon=0 \mbox { on } \hat{\gamma}^\epsilon_0 \end{array } \right\},\label{eq : aa1}\\[.2pc ] \left .\begin{array}{lcl } { \rm div}\hat{d}^\epsilon(\hat{u}^\epsilon , \hat{\varphi}^\epsilon)=0 \mbox { in } \hat{\omega}^\epsilon\\[.1pc ] \hat{d}^\epsilon(\hat{u}^\epsilon , \hat{\varphi}^\epsilon)\nu=0 \mbox { on } \hat{\gamma}_{s}^\epsilon\\[.1pc ] \hat{\varphi}^\epsilon=0 \mbox { on } \hat{\gamma}^\epsilon_{ed}. \end{array } \right\},\label{eq : aa2}\end{aligned}\ ] ] where \hat{d}_k & = \hat{p}^{kij,\epsilon}\hat{e}_{ij}^\epsilon + \hat{\cal e}^{kl,\epsilon}\hat{e}_l,\label{eq : aa4}\end{aligned}\ ] ] where and .we define the spaces \hat{\psi}^\epsilon & = \{\hat{\psi}\in h^1(\hat{\omega}^\epsilon ) , \hat{\psi}|_{\hat{\gamma}_{ed}^\epsilon}=0\}.\label{eq : aa6}\end{aligned}\ ] ] then the variational form of systems ( [ eq : aa1 ] ) and ( [ eq : aa2 ] ) is to find such that where & \quad\ + \int_{\hat{\omega}^\epsilon}\hat{\cal e}^{ij,\epsilon}\hat{\partial}_i^\epsilon { \hat{\varphi}}^\epsilon\hat{\partial}_j^\epsilon\hat{\psi}^{\epsilon } { \rm d}\hat{x}^\epsilon\nonumber\\[.2pc ] & \quad\ + \int_{\hat{\omega}^\epsilon } \hat{p}^{mij,\epsilon}(\hat{\partial}^\epsilon_m { \hat{\varphi}}^\epsilon \hat{e}_{ij}^\epsilon(\hat{v}^\epsilon)- \hat{\partial}_m^\epsilon\hat{\psi}^\epsilon \hat{e}_{ij}^\epsilon(\hat{u}^\epsilon)){\rm d}\hat{x}^\epsilon,\label{eq : aa8}\end{aligned}\ ] ] since the mappings are assumed to be diffeomorphisms , the correspondences that associate with every element , the vector and with every element , the function induce bijections between the spaces and , and the spaces and respectively , where \psi^\epsilon & = \{\psi^\epsilon\in h^1(\omega^\epsilon)|\psi^{\epsilon } = 0 \ { \rm on } \ \gamma^\epsilon_{ed}\}.\label{eq : a7}\end{aligned}\ ] ] then we have & \hat{e}_{ij}(\hat{v})(\hat{x}^\epsilon ) = e^\epsilon_{k\|l}(v^\epsilon ) ( g^{k,\epsilon})_i(g^{l,\epsilon})_j,\label{eq : aa10}\end{aligned}\ ] ] where then the variational form ( [ eq : aa7 ] ) posed on the domain is to find such that where & \quad\ + \int_{\omega^\epsilon } { \cal e}^{ij,\epsilon}\partial_i^\epsilon { \varphi}^\epsilon\partial_j^\epsilon\psi^\epsilon\sqrt{g^\epsilon } { \rm d}x^\epsilon\nonumber\\[.2pc ] & \quad\ + \int_{\omega^\epsilon } p^{mij,\epsilon}(\partial^\epsilon_m{\varphi}^\epsilon e_{i\|j}^\epsilon(v^\epsilon)\nonumber\\[.2pc ] & \quad\ -\partial_m^\epsilon\psi^\epsilon e_{i\|j}^\epsilon(u^\epsilon))\sqrt{g^\epsilon}{\rm d}x^\epsilon,\label{eq : a10}\end{aligned}\ ] ] & a^{pqrs , \epsilon } = \hat{a}^{ijkl , \epsilon}(g^{p,\epsilon})_i \cdot ( g^{q,\epsilon})_j \cdot ( g^{r,\epsilon})_k \cdot ( g^{s,\epsilon})_l,\label{eq : aa12}\\[.2pc ] & { \cal e}^{pq,\epsilon } = \hat{\cal e}^{ij,\epsilon}(g^{p,\epsilon})_i \cdot ( g^{q,\epsilon})_j,\label{eq : aa13}\\[.2pc ] & p^{pqr,\epsilon } = \hat{p}^{ijk,\epsilon}(g^{p,\epsilon})_i \cdot ( g^{q,\epsilon})_j \cdot ( g^{r,\epsilon})_k .\label{eq : aa14}\end{aligned}\ ] ] using the relations ( [ eq:1a2 ] ) , ( [ 
eq : a4 ] ) and ( [ eq : a5 ] ) , it can be shown that there exists a constant such that for all symmetric tensor and for any vector , & { \cal e}^{ij,\epsilon}t_it_j \geq c \sum_{i=1}^3t_i^2.\label{eq : aa16}\end{aligned}\ ] ] clearly the bilinear form associated with the left - hand side of ( [ eq : a9 ] ) is elliptic . hence by lax milgram theorem , given and , there exists a unique such that in particular , for each , there exists a unique solution such that this is equivalent to the following equations . and from relation ( 2.28 ) , it follows that the bilinear form associated with the left - hand side of ( [ eq : a15 ] ) is -elliptic .also for each , the mapping defines a linear functional on .hence for each , there exists a unique such that and that is continuous .in particular , it follows from ( [ eq : a15 ] ) and the above equation that and eqs ( [ eq : a14 ] ) and ( [ eq : a15 ] ) become & = \int_{\omega^\epsilon}\ ! f^\epsilon v^\epsilon \sqrt{g^\epsilon}{\rm d}x^\epsilon\ \ \ \forall v^\epsilon\in v^\epsilon,\label{eq : a22}\\[.2pc ] \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(u^\epsilon)){\partial}_j^\epsilon{\psi}^{\epsilon } \sqrt{g^\epsilon}{\rm d}x^\epsilon & = \int_{{\omega}^\epsilon}{p}^{mij,\epsilon } { \partial}_m^\epsilon{\psi}^\epsilon { e}_{i\|j}^\epsilon({u}^\epsilon)\sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & \quad\ \forall \psi^\epsilon\in \psi^\epsilon.\label{eq : a23}\end{aligned}\ ] ] for each , there exists a unique such that \hskip -3.5pc & = \int_{\omega^\epsilon } h^\epsilon v^\epsilon \sqrt{g^\epsilon}{\rm d}x^\epsilon\quad \forall v^\epsilon\in v^\epsilon\label{eq : a24}\end{aligned}\ ] ] and that is continuous .let denotes the bilinear form associated with the left - hand side of eq .( [ eq : a22 ] ) . using ( [ eq : a23 ] ), we have & \quad\ + \int_{{\omega}^\epsilon } { p}^{mij,\epsilon}{\partial}^\epsilon_m ( t^\epsilon(u^\epsilon ) ) { e}_{i\|j}^\epsilon({v}^\epsilon)\sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \int_{{\omega}^\epsilon}{a}^{ijkl,\epsilon}{e}_{k\|l}^\epsilon({u}^\epsilon ) { e}_{i\|j}^\epsilon({v}^\epsilon)\sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & \quad\ + \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(u^\epsilon)){\partial}_j^\epsilon(t^\epsilon(v^\epsilon ) ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\end{aligned}\ ] ] & \quad\ + \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(v^\epsilon)){\partial}_j^\epsilon(t^\epsilon(u^\epsilon ) ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = b^\epsilon(v^\epsilon , u^\epsilon).\label{eq : a25}\end{aligned}\ ] ] also , using ( [ eq : a23 ] ) and the relations ( [ eq : aa15 ] ) and ( [ eq : aa16 ] ) , we have & \quad\ + \int_{{\omega}^\epsilon } { p}^{mij,\epsilon}{\partial}^\epsilon_m ( t^\epsilon(u^\epsilon ) ) { e}_{i\|j}^\epsilon({u}^\epsilon)\sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \int_{{\omega}^\epsilon}{a}^{ijkl,\epsilon}{e}_{k\|l}^\epsilon({u}^\epsilon ) { e}_{i\|j}^\epsilon({u}^\epsilon)\sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & \quad\ + \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(u^\epsilon)){\partial}_j^\epsilon(t^\epsilon(u^\epsilon ) ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & \geq c\|u^\epsilon\|^2_{v^\epsilon}.\label{eq : a26}\end{aligned}\ ] ] hence is symmetric and -elliptic . 
hence by lax milgram theorem , there exists a unique satisfying ( [ eq : a24 ] ) . letting in ( [ eq : a24 ] ) , we get & \quad\ + \int_{{\omega}^\epsilon } { p}^{mij,\epsilon}{\partial}^\epsilon_m ( t^\epsilon(g^\epsilon(h^\epsilon))){e}_{i\|j}^\epsilon(g^\epsilon(h^\epsilon ) ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \int_{\omega^\epsilon } h^\epsilon g^\epsilon(h^\epsilon ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\label{eq : a27 } .\end{aligned}\ ] ] using ( [ eq : a23 ] ) , it becomes & \quad\ + \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(g^\epsilon(h^\epsilon))){\partial}_j^\epsilon ( t^\epsilon(g^\epsilon(h^\epsilon ) ) ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \int_{\omega^\epsilon } h^\epsilon g^\epsilon(h^\epsilon ) \sqrt{g^\epsilon}{\rm d}x^\epsilon.\label{eq : a28}\end{aligned}\ ] ] using the relations ( [ eq : aa15 ] ) and ( [ eq : aa16 ] ) , we have hence which implies that is continuous .it follows from ( [ eq : a22 ] ) and the above lemma that .since the inclusion is compact , it follows that is compact . also since the bilinear form is symmetric, it follows that is self - adjoint .hence from the spectral theory of compact , self - adjoint operators , it follows that there exists a sequence of eigenpairs such that & \quad\ + \int_{{\omega}^\epsilon } { p}^{mij,\epsilon}{\partial}^\epsilon_m ( t^\epsilon(u^{m,\epsilon } ) ) { e}_{i\|j}^\epsilon({v}^\epsilon ) \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \xi^{m,\epsilon}\int_{\omega^\epsilon } u^{m,\epsilon } v^\epsilon \sqrt{g^\epsilon}{\rm d}x^\epsilon\quad \forall v^\epsilon\in v^\epsilon,\label{eq : a31}\\[.2pc ] & \int_{{\omega}^\epsilon}{\cal e}^{ij,\epsilon}{\partial}_i^\epsilon ( t^\epsilon(u^{m,\epsilon})){\partial}_j^\epsilon{\psi}^{\epsilon } \sqrt{g^\epsilon}{\rm d}x^\epsilon\nonumber\\[.2pc ] & = \int_{{\omega}^\epsilon}{p}^{mij,\epsilon } { \partial}_m^\epsilon{\psi}^\epsilon { e}_{i\|j}^\epsilon({u}^{m,\epsilon})\sqrt{g^\epsilon}{\rm d}x^\epsilon\quad \forall \psi^\epsilon\in \psi^\epsilon,\label{eq : a32}\\[.2pc ] & 0<\xi^{1,\epsilon } \leq\xi^{2,\epsilon}\leq \cdots \leq\xi^{m,\epsilon}\leq\cdots \rightarrow \infty,\label{eq : a33}\\[.2pc ] & \int_{\omega^\epsilon}u^{m,\epsilon}_i u^{n,\epsilon}_i\sqrt{g^\epsilon}{\rm d}x^\epsilon = \epsilon^3 \delta_{mn}.\label{eq : a34}\end{aligned}\ ] ] the sequence forms a complete orthonormal basis for .define the rayleigh quotient for by then where denotes the collection of all -dimensional subspaces of .we now perform a change of variable so that the domain no longer depends on . with , we associate .let \gamma_e & = \gamma_e\times(-1 , 1),\quad \gamma_s = \gamma_s\times(-1 , 1),\\[.2pc ] \gamma_n & = \gamma_1\cup\gamma^{+}\cup\gamma^{-},\quad \gamma_{ed}=\gamma^{+}\cup\gamma^{-}\cup\gamma_e.\end{aligned}\ ] ] with the functions , we associate the functions defined by we assume that the shell is a shallow shell , i.e. there exists a function such that i.e. 
, the curvature of the shell is of the order of the thickness of the shell .with the tensors , we associate the tensors through the relation we define the spaces \psi(\omega ) & = \{\psi\in h^1(\omega ) , \psi|_{\gamma_{ed}}=0\}.\label{eq : b12}\end{aligned}\ ] ] we denote .then the variational equations ( eqs ( [ eq : a31])([eq : a34 ] ) ) become & \quad\ + \int_\omega p^{3kl}\partial_3{\varphi}^m ( \epsilon)e_{k\|l}(\epsilon , v)\sqrt{g(\epsilon)}{\rm d}x\nonumber\\[.2pc ] & \quad\ + \epsilon\int_\omega p^{\alpha kl}(\epsilon ) \partial_\alpha{\varphi}^m(\epsilon ) e_{k\|l}(\epsilon , v)\sqrt{g(\epsilon)}{\rm d}x\nonumber\\[.2pc ] & = \xi^m(\epsilon)\int_\omega [ \epsilon^2 u^m_\alpha(\epsilon)v_\alpha + u^m_3(\epsilon)v_3]\sqrt{g(\epsilon)}{\rm d}x \quad\mbox{for all } v\in v(\omega ) .\label{eq : b13}\\[.2pc ] & \int_\omega { \cal e}^{33}(\epsilon)\partial_3{\varphi}^m(\epsilon ) \partial_3\psi \sqrt{g(\epsilon)}{\rm d}x\nonumber\\[.2pc ] & \quad\ + \epsilon\int_\omega[{\cal e}^{3\alpha}(\epsilon ) ( \partial_\alpha{\varphi}^m(\epsilon ) \partial_3\psi+\partial_3{\varphi}^m(\epsilon)\partial_\alpha\psi ) ] \sqrt{g(\epsilon)}{\rm d}x\nonumber\\[.2pc ] & \quad\ + \epsilon^2\int_\omega { \cal e}^{\alpha\beta}(\epsilon ) \partial_\alpha{\varphi}^m(\epsilon ) \partial_\beta\psi\sqrt{g(\epsilon ) } { \rm d}x\nonumber\\[.2pc ] & = \int_\omega p^{3kl}(\epsilon)\partial_3\psi e_{k\|l}(\epsilon , u^m(\epsilon))\sqrt{g(\epsilon)}{\rm d}x \nonumber\\[.2pc ] & \quad\ + \epsilon\int_\omega [ p^{\alpha kl}(\epsilon ) \partial_\alpha\psi e_{k\|l}(\epsilon , u^m(\epsilon ) ) ] \sqrt{g(\epsilon)}{\rm d}x \quad\mbox{for all } \psi\in \psi(\omega),\label{eq : b14}\\[.2pc ] & \int_\omega [ \epsilon^2u^m_\alpha(\epsilon ) u^n_\alpha(\epsilon ) + u^m_3(\epsilon)u^n_3(\epsilon)]\sqrt{g(\epsilon)}{\rm d}x = \delta_{mn}. \label{eq : b15}\end{aligned}\ ] ]the following two lemmas are crucial ; they play an important role in the proof of the convergence of the scaled unknowns as . in the sequel, we denote by various constants whose values do not depend on but may depend on .the functions defined in ( [ eq : b10 ] ) are of the form e_{\alpha\|3}(\epsilon ; v ) & = \frac{1}{\epsilon}\{{\tilde{e}}_{\alpha 3}(v ) + \epsilon^2 e^{\sharp}_{\alpha\|3}(\epsilon ; v)\},\label{eq : c2}\\[.2pc ] e_{3\|3}(\epsilon ; v ) & = \frac{1}{\epsilon^2}{\tilde{e}}_{33}(v),\label{eq : c3}\end{aligned}\ ] ] where { \tilde{e}}_{\alpha 3}(v ) & = { \frac{1}{2}}(\partial_\alpha v_3+\partial_3 v_\alpha),\label{eq : c5}\\[.2pc ] { \tilde{e}}_{33}(v ) & = \partial_3 v_3\label{eq : c6}\end{aligned}\ ] ] and there exists constant such that also there exist constants and such that & \sup_{0<\epsilon\leq \epsilon_0}\max_{x\in\omega}|a^{ijkl}(\epsilon)-a^{ijkl}| \leq c_3\epsilon^2,\label{eq : c9}\end{aligned}\ ] ] where and for and for all symmetric tensors the proof is based on lemma 4.1 of . from relation ( [ eq : a5 ] ) and definition ( [ eq : b2 ] ) , it follows that there exists a constant such that for any vector , we assume that there exists functions and such that & \sup_{0<\epsilon\leq \epsilon_0}\max_{x\in\omega}|{\cal e}^{ij } ( \epsilon)- { \cal e}^{ij}| \leq c_7\epsilon.\label{eq : b5}\end{aligned}\ ] ] let be a given function and let the functions be defined as in ( [ eq : c4])([eq : c6 ] ) .then there exists a constant such that the following generalised korn s inequality holds : for all where is the space defined in ( [ eq : b11 ] ) . 
the proof is based on lemma 4.2 of .in this section , we show that for each positive integer , the scaled eigenvalues are bounded uniformly with respect to . for each , there exists a unique solution such that \sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & \quad\ + \epsilon^2\int_\omega { \cal e}^{\alpha\beta}(\epsilon)\partial_\alpha t(\epsilon)(h ) \partial_\beta\psi \sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & = \int_\omega p^{3kl}(\epsilon ) \partial_3\psi e_{k\|l}(\epsilon , h)\sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & \quad\ + \epsilon\int_\omega p^{\alpha kl}(\epsilon ) \partial_\alpha\psi e_{k\|l}(\epsilon , h)\sqrt{g(\epsilon)}{\rmd}x\quad \forall \psi\in\psi.\end{aligned}\ ] ] taking and in the above equation , we have \sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & \quad\ + \epsilon^2\int_\omega { \cal e}^{\alpha\beta}(\epsilon)\partial_\alpha t(\epsilon)(v_\varphi ) \partial_\beta t(\epsilon)(v_\varphi ) \sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & = \int_\omega p^{3kl}(\epsilon ) \partial_3 t(\epsilon)(v_\varphi ) e_{k\|l}(\epsilon , v_\varphi)\sqrt{g(\epsilon)}{\rm d}x\nonumber\\ & \quad\ + \epsilon\int_\omega p^{\alpha kl}(\epsilon ) \partial_\alpha t(\epsilon)(v_\varphi)e_{k\|l}(\epsilon , v_\varphi ) \sqrt{g(\epsilon)}{\rm d}x.\end{aligned}\ ] ] it follows from lemma 6.2 in that the bilinear form defined by is -elliptic and symmetric . hence it is clear that is also -elliptic and symmetric .let , be the eigensolutions of problem ( [ eq : e56 ] ) found as limits of the subsequence of eigensolutions of the problem ( [ eq : b13 ] ) .then the sequence comprises all the eigenvalues , counting multiplicities , of problem ( [ eq : e56 ] ) and the associated sequence of eigenfunctions forms a complete orthonormal set in the space .99 bernadou m and haenel c , modelization and numerical approximation of piezoelectric thin shells , parts i , ii , iii : rapport de recherche , der - cs ( ecole suprieure dingnierie lonard de vinci , france ) ( 2002 ) no rr-6 , 7 , 8
in this paper we consider the eigenvalue problem for piezoelectric shallow shells and we show that, as the thickness of the shell goes to zero, the eigensolutions of the three-dimensional piezoelectric shells converge to the eigensolutions of a two-dimensional eigenvalue problem.
probably the most significant result of the gromov analysis of classical networks is the existence of a congestion core. under a network protocol that sends packets along least-cost paths, the _core_ can be qualitatively defined as a point where most of the geodesics (least-cost paths) converge, creating packet drops, high retransmission rates, and other nuisances under the tcp/ip protocol. existence of the core has been experimentally observed and mathematically proved when the network is gromov hyperbolic, subject to some highly technical conditions related to the gromov boundary. a gromov hyperbolic network can intuitively be defined as a network that ``looks like'' a negatively curved riemannian manifold (e.g., a saddle) when viewed from a distance; see, e.g., for a precise definition. next to classical networks, one can envision quantum networks: the nodes are spins that can be up (not excited) or down (excited) and the links are quantum mechanical couplings of the xx or heisenberg type. given some random source-destination pair, a valid question is whether some spin could act as a ``core,'' that is, a spin that could be excited no matter what the source and the destination are. for a linear chain, one would expect such a congestion core at the center, since classically any excitation in one half of the chain would have to transit the center to reach the other half. in this work, we demonstrate that quantum mechanically the transmission of excitations need not occur this way, and in fact the center of an odd-length spin chain can act as an ``anti-core,'' excitation of which is avoided. this ``anti-core,'' or ``anti-gravity'' center as it was originally called, was first observed in . the anti-core was defined as a point of high inertia, as opposed to the classical congestion core that has minimum inertia owing to the negative curvature of the underlying space (theorem 3.2.1 of ). the inertia quantifies how difficult communication to and from the anti-core is. as has been done along this line of work, a pre-metric based on the _information transfer capacity (itc)_ (see sec. [s:homogeneous]) is employed. unlike standard quantum mechanical distances, this ``distance'' measure aims to quantify not how distant two fixed quantum states are, but how close to a desired target state a quantum state can get under the evolution of a particular hamiltonian. the initial and target states are typically orthogonal.
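to make the itc concrete for the xx chains studied below, the following minimal python sketch (an illustration under stated assumptions, not code from the cited works) builds the single-excitation hamiltonian described in the next section, a tridiagonal toeplitz matrix with zeros on the diagonal and ones on the sub- and superdiagonals, and evaluates the standard upper bound sum_k |v_k(i) v_k(j)| on the transfer amplitude between spins i and j; treating this attainable bound as the maximum transfer probability is an assumption consistent with the itc usage described here.

```python
import numpy as np

# single-excitation hamiltonian of a uniform xx chain with n spins: a tridiagonal
# toeplitz matrix with zeros on the diagonal and ones on the sub/superdiagonals
# (coupling constant set to 1; this normalization is an illustrative assumption).
def xx_single_excitation_hamiltonian(n):
    h = np.zeros((n, n))
    idx = np.arange(n - 1)
    h[idx, idx + 1] = 1.0
    h[idx + 1, idx] = 1.0
    return h

# upper bound on the excitation transfer probability |<j| exp(-i h t) |i>|^2,
# obtained by replacing every phase factor by its modulus:
#   max_t |<j| exp(-i h t) |i>|  <=  sum_k |v_k(i) v_k(j)|,
# with v_k the (real) eigenvectors of h.  this bound is used here as a stand-in
# for the maximum transfer probability p_max(i, j) discussed in the text.
def transfer_probability_bound(h, i, j):
    _, v = np.linalg.eigh(h)            # columns of v are eigenvectors
    return np.sum(np.abs(v[i, :] * v[j, :])) ** 2

n = 11                                   # odd-length chain
h = xx_single_excitation_hamiltonian(n)
end, center = 0, n // 2
print(transfer_probability_bound(h, end, center))   # end spin -> central spin
print(transfer_probability_bound(h, end, n - 1))    # end spin -> opposite end
```

for this chain the bound is markedly smaller for transfer from an end spin to the central spin than to the mirror end spin, in line with the anti-core behaviour discussed in the text.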
in this paper , we provide an _ analytical _ justification of the numerically observed anti - core phenomenon in spin chains with xx coupling , starting with finite - length chains , extending the itc concept to semi- and bi - infinite cases ( sec .[ s : infinite_chains ] ) , and finally _ proving _ that , in sec .[ s : anti_core ] .we further show that by adding a bias on the central spin its _ `` anti - core '' _ property can be made stronger in the sense that the probability of transmission of the excitation to and from it is infinitesimally small ( sec .[ s : engineered ] ) .the remaining nagging question is why was it observed in that spin chains appear gromov - hyperbolic and have an anti - core , while classical networks are gromov hyperbolic with the opposite core ?this will be clarified in sec .[ s : engineered ] by means of a spin chain example , showing that its gromov boundary has _ only one _ point , while classical networks need to have _ at least two points _ in their gromov boundary for the core to emerge .we consider a linear array of two - level systems ( spin particles ) with uniform coupling between adjacent spins ( homogeneous spin chain ) made up of an odd number of physically equally spaced spins with coupling hamiltonian here we shall be primarily interested in the case of xx coupling , for which .the factor is the pauli matrix along the or direction of spin in the array , i.e. , where the factor occupies the position among the factors and is one of the single spin pauli operators it is easily seen that is _ real _ and symmetric .the hamiltonian commutes with the operator which counts the total number of excitations .the hilbert space can therefore be decomposed into subspaces corresponding to the number of excitations .define to be the quantum state in which the excitation is on spin .the single excitation subspace is spanned by .restricted to this subspace , the hamiltonian in this natural basis takes the form for xx coupling ( ) , becomes the toeplitz matrix made up of zeros on the diagonal , ones on the super- and subdiagonal and zeros everywhere else .table [ t : eigenstructure ] shows the eigenvalues and eigenvectors of . in anticipation of letting ,define and the preceding sums can be rewritten as }_{\mathrm{max}}(i',j ' ) } & = 2\sum_{k'=-n,\mathrm{even}}^{+n } \left|f\left ( 2 \pi x_{k'}i'\right ) g\left ( 2\pi x_{k ' } j'\right)\right|\left(x_{k'+2}-x_{k ' } \right)\\ & \quad + 2 \sum_{k'=-n,\mathrm{odd}}^{+n } \left|\bar{f}\left ( 2 \pi x_{k'}i ' \right ) \bar{g}\left ( 2\pi x_{k ' } j ' \right)\right| \left(x_{k'+2}-x_{k ' } \right ) .\end{aligned}\ ] ] letting yields }_{\mathrm{max}}(i',j')}\\ & = 2\int_{-1/4}^{+1/4 } |f(2\pi xi')g(2\pi x j')| dx + 2\int_{-1/4}^{+1/4}|\bar{f}(2\pi xi')\bar{g}(2\pi x j')| dx.\end{aligned}\ ] ] in order to make the integrations more straightforward and to follow a procedure that parallels the one of appendix [ s : semi_infinite_chains ] , it is convenient to change the integration limits by making use of the periodicity of the integrands as functions of .observe that both and have decompositions in terms of sines or cosines of arguments .write the generic term as . if iseven , observe that . if is odd , . in either case, and have period . 
with this property ,the previous integrals can be rewritten as observe that , as easily seen from a cauchy - schwarz argument .also observe that ; indeed , if , table [ t : in_terms_of ] reveals that the integrands are of the form or , from which the assertion is trivial .the next step is to make , , , more manageable by expressing them as , , , if , , , are sines and by , , , if they are cosines . in the preceding , and are odd and even , resp ., square waves of unit amplitude and of period , with fourier decompositions if and are even , and if we let along the even number subsequence of , we need to take and , as seen from table [ t : in_terms_of ] , together with and .if on the other hand , we let along the odd number subsequence of , we need to take and , together with and .however , because of the symmetry of formula , both subsequences yield the same result : from table [ t : in_terms_of ] , we could let along the even number subsequence of , in which case we need to take and .if we let along the odd number subsequence of , we need to take and . in either case , because of the symmetry of , the result is the same and is given by from the above , it follows that all cases share a few quadrature integrals : the right - hand side of the last equality involves expressions like . to simplify the notation, we wrote such expressions as . here we proceed from the general formula for , utilize the quadrature integrals of the preceding section , and derive , first , an infinite series representation of the asymptotic maximum transfer probability and , finally , a representation in terms of special functions .since the general formula is in terms of function , , , that depend on whether and are even or odd ( see section [ s : consistency ] ) , it is necessary to examine each case in particular . from section [ s : consistency ], it follows that the case where both and are even and the case where both and are odd are the same . from the same section [ s : consistency ] , the case where iseven and odd is the same as the case where odd and even , but is different from the preceding one .so , there are essentially two cases to be distinguished .[ [ both - i - and - j - even - or - both - i - and - j - odd ] ] both and even or both and odd ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ where and are functions taking value or , and complementary in the sense that .next , we find that where observe that , are relatively prime ; hence they could not be both even .the developments follow closely the semi - infinite chain case , except that , here , and are not restricted to be positive .hence we have to consider several cases : here we have to make a distinction between the two cases : both and odd and both and even .we start with the easy case where in this case indeed , both is even .this along with is even yields putting everything together , we find , using partial fraction decompositions , the case where is more complicated . 
if and have the same power of in their prime number factorization, then is even and the preceding formula holds. if the powers of are different, then is odd and it is easily verified that with the above, we get despite the extra difficulties created by the and functions and the various signs, the pattern remains the same as before: the indicators are nonvanishing only if for some even. since is odd and is even, does not contain any positive power of in its prime divisors; therefore, remains odd and remains even; in other words, is odd. from this observation, tedious but elementary manipulations lead to the following: e. jonckheere, f. c. langbein, and s. g. schirmer. curvature of quantum rings. in _proceedings of the 5th international symposium on communications, control and signal processing (isccsp 2012)_, rome, italy, may 2-4, 2012. e. jonckheere, s. schirmer, and f. langbein. geometry and curvature of spin networks. in _ieee multi-conference on systems and control_, pages 786-791, denver, co, september 2011. available at arxiv:1102.3208v1 [quant-ph].
the purpose of this paper is to exhibit a quantum network phenomenon, the anti-core, that goes against the classical network concept of a congestion core. classical networks idealized as infinite, gromov hyperbolic spaces with least-cost path routing (and subject to a technical condition on the gromov boundary) have a congestion core, defined as a subnetwork that routing paths have a high probability of visiting. here, we consider quantum networks, more specifically spin chains, define the so-called maximum excitation transfer probability between pairs of spins, and show that the central spin has, among all other spins, the lowest probability of being excited or of transmitting its excitation. the anti-core is singled out by analytical formulas for this probability, revealing the number-theoretic properties of quantum chains. by engineering the chain, we further show that this probability can be made vanishingly small.
to introduce and place the ideas of the present contribution in context , consider the paradigmatic problem of estimating the volume of a given region as shown in figure [ fig : illustration ] . in its most rudimentary form ,mc estimates of proceed using the `` hit - or - miss '' idea , whereby the user designs a reference region of known volume that fully overlaps with , and draws random points uniformly distributed in ( cf .[ fig : illustration ] , top panel ) .an estimate of can then be obtained from , where is the fraction of such points that fall inside . as is immediately apparent, the efficiency of this simple method hinges upon the amount of overlap between the volumes and : the tighter bounds , the more the hits and hence the better the quality of the estimate .conversely , bounding volumes that have little overlap with give rise to more misses than hits and hence to large errors in the estimate .this poses a major obstacle to the implementation of such methods , especially in cases where it is difficult to guess the precise location of the volume and its boundaries , often leading to the design of an unnecessarily conservative ( large ) reference region , and hence to very inefficient estimates of . despite its simplicity , the above example captures the central issues pertinent to integral estimation problems in general , for which more sophisticated methods exist .notably , in most efficient monte carlo techniques for normalizing constant or free energy estimation , a suitable family of intermediate integrands is required to interpolate between the two desired integrands ( or between the available reference system and the integrand of interest , in absolute integral estimation ) , the extent of overlap between the integrands dictating the efficiency of the methods much like in the example above .such requirements become particularly daunting when the integrands in question are highly concentrated in unknown and disparate regions of configuration space , as is typically the case in most interesting physical problems , thereby demanding substantial user insight prior to the applicability of such methods . in the present contribution, a new family of mc integration strategies that can greatly alleviate the demand for such types of insights will be introduced . as expressed by the central results underpinning these methods , eqs .( [ eq : z_1st ] ) and ( [ eq : z_2nd ] ) , the integrals of interest ( , eq . ( [ eq : z ] ) ) are computed individually , thus fundamentally departing from the aforementioned alternatives that rely on ratios of similar integrals . at the core of these new strategiesis the use of integration spaces of enlarged dimensionality ( `` replicas '' ) , a concept already widely invoked to speed up markov chain simulations ( parallel tempering , evolutionary monte carlo , etc . 
) , combined with the replacement of reference integrals by normalized transition matrices ( also known as transition functions or kernels in the markov chain literature ) .two main variants will be presented : the first , more physically intuitive , requires a fluctuating number of replicas ( fig .[ fig : illustration ] , bottom panel ) ; the second , more abstract but also more easily adaptable to existing replica simulations , uses a fixed number of replicas along with `` virtual '' insertions / deletions of replicas .these two variants are complementary to each other much like grand - canonical monte carlo ( gcmc ) and widom s test particle insertion method in simulations involving chemical potentials .to see how in principle the simultaneous use of multiple configurations ( replicas ) allows for the absolute computation of integrals , let us go back to the volume estimation problem of fig .[ fig : illustration ] . as illustrated in the bottom panel of that figure , the volume of interest can be estimated by equilibrating it with an infinite reservoir of ideal gas particles ( replicas ) of density , and measuring the average number of particles inside , to obtain .this physical idea can be implemented computationally by means of a generalized form of the usual grand - canonical monte carlo method , where the particle reservoir is at density ( or , equivalently , at the corresponding chemical potential ) , and attempted particle insertions / deletions take place in the neighborhood of an existing particle rather than inside a fixed volume that bounds , that neighborhood being defined by a transition matrix .these are the central ideas that motivated the development of the two versions described below . in general, we would like to estimate integrals of the type where is the support of the integral , and is a positive - definite function ( see below , however ) of the -dimensional vector ; e.g. for most physical problems with energy function .partition functions of discrete systems , i.e. , can be dealt with in an analogous fashion .we are given a ( normalized ) transition matrix , such as those routinely used in the trial part of the metropolis algorithm ; for example , could be a uniform probability distribution up to some distance from ( see dashed circle in fig .[ fig : illustration ] ) , or a gaussian function centered about . as with metropolis and other markov chain simulations ,the width of these distributions can be chosen during a preliminary run of the simulation ( see below ) . 
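as a point of reference for what follows, here is a minimal sketch of the rudimentary hit-or-miss estimate described at the beginning of this section; the unit disk and its bounding square are illustrative choices, not the region of fig. [fig:illustration].

```python
import numpy as np

# hit-or-miss estimate of the area of the unit disk v = {x^2 + y^2 <= 1}, using
# the bounding square v0 = [-1, 1]^2 of known area 4 as the reference region.
rng = np.random.default_rng(0)
n_samples = 100_000
points = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
hits = np.sum(points[:, 0] ** 2 + points[:, 1] ** 2 <= 1.0)

vol_reference = 4.0
vol_estimate = vol_reference * hits / n_samples      # fraction of hits times |v0|
print(vol_estimate, np.pi)                           # estimate vs. exact area
```

the quality of this estimate degrades as the overlap between the reference region and the target region shrinks, which is precisely the difficulty the replica gas methods below are designed to avoid.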
for non - positive definite integrands, one can invoke the identity where is defined by eq .( [ eq : z ] ) , , and the average of the sign function of is with respect to points sampled from .provided they exist , both quantities on the right hand side of this identity are immediately available from the methods below .for the first version of the method , we would like to simulate a system of replicas in contact with a reservoir of ideal ( non - interacting ) replicas of specified density , such that each replica in independently samples the distribution .the corresponding grand - canonical partition function is thus note that , unlike the traditional case where , here the sum starts at , as in the present method ( see below ) replica insertions / deletions take place in the neighborhood of at least one existing replica .provided we can simulate according to this partition function , the desired integral can be found from the equality which follows straightforwardly from eq .( [ eq : q ] ) by computing each moment of separately .( alternatively , one can use or any other relationship between moments of and , and numerically solve the transcendental equation for ; the question of which estimator is more efficient will be left for future studies ) . an algorithm that samples according tothe above grand - canonical partition function goes as follows . at the beginning of the algorithmwe are given at least one point that belongs to ; let us assume the general case where we already have replicas in , and let us denote the vector of coordinates of the replicas by .we then decide whether to insert or remove a replica , typically but not necessarily ( see methods section)with equal probability . if the decision was an insertion , we sample a new replica coordinate from , where is a randomly chosen coordinate from the existing , and accept its insertion with probability except when lies outside , in which case an immediate rejection takes place . similarly ,if the decision was to try a deletion , we randomly pick a replica from the existing , and accept its deletion with probability where is excluding , and the sum over excludes .an exception is the case where only one replica remains , which is always rejected .a proof that this algorithm samples according to eq .( [ eq : q ] ) follows by detailed balance ( see methods section ) .when the replicas correspond to particles inserted uniformly in a fixed volume , i.e. , the above acceptance probabilities reduce to those of ref . ( with the due mappings between and chemical potential , and between and the boltzmann factor ) . of course , it is also possible to perform ordinary -preserving monte carlo moves on each replica before attempted insertion / deletions ( fig .[ fig : sinx ] uses this idea ) . by repeating the above procedure a number of times, the simulation will eventually equilibrate , and the number of replicas will fluctuate about its mean value . if the equilibration is too slow , i.e. too few replica insertions / deletions are accepted , the width of the distribution about can be adjusted accordingly during a preliminary run , in a fashion analogous to what is done in metropolis monte carlo to keep the rate of accepted trial moves within a reasonable range . likewise ,if the number of replicas starts to grow beyond one s computational capabilities , or diminish until it hardly departs from unity , can be adjusted so that is a reasonable number consistent with one s computing power . 
in practice , for many - particle systems with extensive free energies ( i.e. ) , it is best to write and adjust instead .note that the present algorithm generalizes standard grand - canonical simulation in two crucial ways .first , it inserts and removes entire replicas of the system of interest as opposed to individual particles of a many - body system .this conceptual difference is essentially what allows one to relate moments of to the partition function of the system of interest ( eq .( [ eq : z_1st ] ) ) .second , the replicas are introduced in the neighborhood of an existing replica instead of inside a fixed region , as prescribed by the transition matrix .this allows for efficient simulation when the integrand of interest is sharply peaked about unknown regions of configuration space ( cf .[ fig : illustration2 ] ) .although the use of arbitrary transition matrices in grand - canonical simulations is known in the mathematical literature , to our knowledge the use of such ideas for integral / partition function estimation is new .the second version of the replica gas method introduces two important advantages .first , the number of replicas is constant as opposed to fluctuating , making it more convenient for parallel computing architectures , and second the replicas can be simulated at different temperatures .these features also make the method easily implemented in existing parallel tempering ( replica exchange ) simulations , thereby benefiting from the greatly enhanced equilibration rates of these simulations .the integrals of interest can then be estimated by computing two separate averages involving both the integrand and the transition matrix , as described below . to introduce the method in its simplest form ,let us first assume that only two replicas exist ( ) , each of them independently sampling the distributions and , via e.g. metropolis .the integral of interest is , as in eq .( [ eq : z ] ) , and is an auxiliary integral ; the auxiliary distribution is arbitrary ( for example , it could be itself ) , but in typical applications it corresponds to at a higher temperature , i.e. , where .then the following identity holds : \int_\omega dx ' \ , t(x'|x ) \cdot \pi(x ' ) } { \int_\omega dx [ \tilde{\pi}(x)/\tilde{z } ] \int_\omega dx ' [ \pi(x')/z ] \cdot t(x'|x ) } \nonumber \\ & \equiv\frac{\left\langle \pi(x ' ) \right\rangle_{\tilde{\pi},t } } { \left\langle t(x'|x ) \right\rangle_{\tilde{\pi},\pi } } , \label{eq : z_2nd}\end{aligned}\ ] ] where the average of an observable means that configurations ( ) are sampled from the distribution ( ) .thus , the numerator in the above result requires to be sampled from while is sampled from ( `` virtual replica insertion , '' in analogy with widom s method ) , and for each such pair of configurations one computes the value of .likewise , for the average in the denominator , is sampled from while is sampled from , and for each pair one evaluates ( `` virtual replica deletion '' ) . in the limit of infinite samples, the ratio of these two averages converges to as expressed by eq .( [ eq : z_2nd ] ) . for simulations with multiple replicas at different temperatures ( ) , one can simply combine ( i.e. 
sum ) the above result for each pair of replicas .thus , for the ising model results in fig .[ fig : ising2d ] , the equation was used .the sums run over each replica at temperature , except .the energy function for a spin configuration is the usual ising model function with periodic boundary conditions .the transition matrix adopted generates a new spin configuration by flipping each spin of with probability .thus , , where is the hamming distance ( number of spin mismatches ) between the configurations and , and is the total number of spins .for illustrative purposes , the results of a volume estimation problem in two spatial dimensions using the version with varying number of replicas are reported in figure [ fig : vol2d ] .this example was chosen due to its infinite support , a property that would render the use of importance sampling methods difficult , as they would require the design of a non - trivial reference volume with similar support and known quadrature ( recall that in principle we do not know where the integrand is concentrated , or where its boundaries are ) .the present method performs well in such circumstances without any prior information concerning the support of the integrand , by using a simple uniform transition matrix ( fig .[ fig : vol2d ] , dashed square ) . as an application to integrals more general than simple volumes , in fig .[ fig : sinx ] a representative estimate of is shown .note that this integrand is non - positive definite , so eq . ( [ eq : trick ] ) was used .the quantities on the right hand side of that equation were estimated using the varying number of replicas version of the method , with the positive - definite integrand . in figure [ fig : ising2d ] the partition function of the two - dimensional ising model is computed to demonstrate the version with fixed number of replicas ( similar results are obtained with the non - fixed version ) . at low temperatures , i.e. ordered states , the replicas are densely localized about the spin - up and spin - down states , and hence a local transition matrix is sufficient to ensure efficient convergence of the averages .conversely , for higher temperatures close to the disordered state and above , the relevant configuration space and hence the spread of the replicas grows beyond the reach of the local transition matrix adopted , thus causing the averages to converge more slowly ( cf .right side of fig .[ fig : illustration2 ] ) . to understand such convergence issues in more detail ,consider for simplicity eq .( [ eq : z_2nd ] ) when ( see below for the version with varying number of replicas ) . in order for the averages in eq .( [ eq : z_2nd ] ) to converge efficiently , the transition matrix has to be such that : * most configurations sampled from fall in typical regions of for any typical configuration sampled from ( so that the numerator is not dominated by those rare configurations with high values of ) ; * most configurations , sampled from fall in typical regions of ( so that the denominator is not dominated by those rare events that cause to be of appreciable value ) . in the ising model example of fig .[ fig : ising2d ] , where typically flips only a few spins of , the low temperature estimates converge faster as typical spin configurations only differ by a few spins , thereby satisfying both requirements , while at higher temperatures close to and above , any two typical spin configurations differ by a substantial number of spins , and hence the requirement ( b ) is violated . 
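for concreteness, here is a minimal sketch of the ising energy with periodic boundary conditions and of the spin-flip transition matrix just described (each spin flipped independently with probability gamma); the lattice size and the value of gamma are illustrative, not the values used for fig. [fig:ising2d].

```python
import numpy as np

# energy of a spin configuration s (entries +1/-1) on an l x l square lattice
# with periodic boundary conditions: e = -sum over nearest-neighbour pairs of
# s_i s_j (unit coupling assumed for illustration).
def ising_energy(s):
    return -np.sum(s * np.roll(s, 1, axis=0)) - np.sum(s * np.roll(s, 1, axis=1))

# transition matrix described in the text: a trial configuration is generated by
# flipping each spin independently with probability gamma, so that
#   t(s'|s) = gamma**d * (1 - gamma)**(m - d),
# where d is the hamming distance between s and s' and m the total number of spins.
def propose(s, gamma, rng):
    flips = rng.random(s.shape) < gamma
    return np.where(flips, -s, s)

def transition_probability(s_new, s_old, gamma):
    d = np.sum(s_new != s_old)           # hamming distance
    m = s_old.size
    return gamma ** d * (1.0 - gamma) ** (m - d)

rng = np.random.default_rng(2)
l, gamma = 8, 0.05                        # illustrative values
s = rng.choice([-1, 1], size=(l, l))
s_new = propose(s, gamma, rng)
print(ising_energy(s), transition_probability(s_new, s, gamma))
```

the local character of this proposal (small gamma) is what underlies the convergence behaviour discussed above: it works well when typical configurations differ by only a few spins, and poorly otherwise.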
adding more replicas , increasing the value of for higher temperatures , or using non - local transition matrices ( such as those of cluster algorithms )can alleviate the problem , but such issues will be left for future studies . notethat analogous convergence issues arise in the version with varying number of replicas .indeed , as can be seen by inspection of eqs .( [ pacc1 ] ) and ( [ pacc2 ] ) , the acceptance probabilities for insertion and deletion are affected by the choice of much like the averages in eq .( [ eq : z_2nd ] ) are affected by criteria ( a ) and ( b ) above .( a separate issue is how well the replicas explore the energy landscape . in the fixed number of replicas version ,different temperatures are used to overcome energy barriers .although replica exchange operations can be combined with the varying number of replicas method , as a proof of concept for the convergence issues above , it suffices to start a population of replicas that populate spin - up and spin - down states equally ) .as illustrated by the above example , the most attractive use of the present methods lies in problems where the integrand is sufficiently localized , so that a general - purpose , local transition matrix can be used to yield efficient results with moderate numbers of replicas .these are precisely the problems for which existing importance sampling - based methods are most demanding , and thus these methods can be seen as complementary to each other ( see fig .[ fig : illustration2 ] ) .it should be noted that the so - called `` flat histogram '' monte carlo methods are also able to bypass some of the difficulties with importance sampling strategies in some cases , especially for discrete systems .however , the required human input and scope of such integration methods are rather different : they require the existence and knowledge of suitable order parameters and their ranges ( this being particularly difficult for entropic problems , such as that of fig .[ fig : vol2d ] ) , knowledge of ground state degeneracies , and for continuum systems suffer from systematic errors due to discretization schemes , although attempts to alleviate some of these problems have been put forward . in summary ,the present contribution has introduced two variants of a novel monte carlo strategy for estimating integrals and partition functions .both versions can be seen as complementary to existing importance sampling or free energy methods , in that their utility is generally best when the integrands are concentrated in relatively small and unknown regions of configuration space . both continuum and discrete systems are equally amenable to their use . 
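to make eq. ([eq:z_2nd]) concrete in the continuum setting, the following minimal sketch estimates the normalizing constant of a one-dimensional gaussian integrand; the integrand, the choice of the auxiliary distribution equal to the integrand itself, and the gaussian transition matrix are all illustrative assumptions, not choices made in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy integrand pi(x) = exp(-x^2/2), whose exact normalizing constant is
# z = sqrt(2*pi); the auxiliary distribution is taken to be pi itself, and the
# transition matrix t(x'|x) is a gaussian of width sigma centred at x.
def pi_fn(x):
    return np.exp(-0.5 * x ** 2)

sigma = 0.5
def t_fn(x_new, x_old):
    return np.exp(-0.5 * ((x_new - x_old) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n = 200_000
x_tilde = rng.standard_normal(n)       # samples from the auxiliary distribution
x_pi = rng.standard_normal(n)          # independent samples from pi / z

# numerator: x from the auxiliary distribution, x' from t(.|x); average pi(x')
# ("virtual insertion")
x_from_t = x_tilde + sigma * rng.standard_normal(n)
numerator = np.mean(pi_fn(x_from_t))

# denominator: x from the auxiliary distribution, x' from pi; average t(x'|x)
# ("virtual deletion")
denominator = np.mean(t_fn(x_pi, x_tilde))

print(numerator / denominator, np.sqrt(2.0 * np.pi))   # estimate vs. exact z
```

in this toy case both averages can be sampled exactly; in an actual application the configurations would come from metropolis or replica exchange runs, as described above.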
by shifting focus from importance sampling functions to transition matrices ,it is expected that these methods will encourage a change of paradigm in monte carlo integration .in this section it will be shown that eqs .( [ pacc1 ] ) and ( [ pacc2 ] ) satisfy the detailed balance condition according to the grand - canonical partition function eq .( [ eq : q ] ) , the probability of observing the microstate is given by where the proportionality constant does not depend on the replica coordinates or .note carefully the difference between the distribution of the labeled microstate , corresponding to replica with label `` 1 '' being at , `` 2 '' at , etc , and that of the unordered set of coordinates , corresponding any replica being at , another arbitrary replica at , etc .this probability is given by where is the sum over all possible permutations of .since the replicas are indistinguishable , we have , as in above . of course , it is possible to use either description ( labeled or unlabeled ) , provided the correct probability distributions are used ( eq . ( [ labeled ] ) or eq .( [ unlabeled ] ) , respectively ) . in this section ,following , we will only show the proof using labeled states , i.e. eq . ( [ labeled ] ) .it is a simple exercise to modify the development below for the unlabeled case ; the acceptance probabilities , of course , are unchanged .our acceptance probabilities are of the metropolis - hastings type , which by construction satisfy detailed balance . in the present notation ,the formulas are and analogously for . according to the insertion / deletion algorithm described before eq .( [ pacc2 ] ) , the trial probabilities for going between the states and are given by {q \cdot \frac{1}{n+1 } \cdot \sum_{i=1}^n \frac{1}{n } t(\xi|x_i ) } ( x_1 , \ldots , x_n , \xi ) , \ ] ] where is given by the expression above the arrows , and by the one below them . in the insertion trial probability, is the probability to try an insertion as opposed to a deletion , is the probability that the new coordinate is inserted in a given slot of the vector ( in the above case , the last slot ) , is the probability to pick coordinate as reference , and is the probability to sample the candidate position given the chosen reference . in the deletion trial probability , is the probability to try a deletion , and is the probability that the replica at will be chosen for attempted removal among the existing ones . plugging these trial probabilities in eq .( [ methastings ] ) , we obtain eq .( [ pacc1 ] ) ( an analogous procedure gives eq .( [ pacc2 ] ) ) .the case where can be easily taken care of by modifying the acceptance probabilities accordingly .alternatively , detailed balance can be directly proven by plugging the trial probabilities in eq .( [ trial ] ) and the acceptance probabilities given by eqs .( [ pacc1 ] ) and ( [ pacc2 ] ) into eq .( [ detbalance ] ) .monte carlo estimation of a volume ( shaded region ) by means of sampling from a reference volume ( top ) , and equilibration with a hypothetical , infinite reservoir of ideal gas particles at density ( bottom ) . in the former ,one draws random points uniformly from and counts the fraction that lands in , to obtain . 
in the latter , one performs a grand - canonical monte carlo simulation at reservoir density , and monitors the average number of ideal gas particles in ; upon equilibration , the density of particles in equals that of the reservoir , and thus .( in practice , due to the constraint , this formula for needs to be modified slightly ; see eq .( [ eq : z_1st ] ) ) .each particle corresponds to a point ( `` replica '' ) residing in , and attempted replica insertions / removals take place in the neighborhood ( dashed circle ) of an existing replica , defined by the transition matrix of the method .a version of the algorithm with fixed number of replicas possibly at different temperatures is also described in the text . ]estimation of a two - dimensional volume ( yellow region ) using the replica gas method with varying number of replicas , eq .( [ eq : z_1st ] ) . the volume is defined by the region for , and for .the points correspond to the replica configurations at the end of one simulation , and the dashed square defines the boundaries of the adopted transition matrix ( uniform distribution , each side of length unity ) for the particular configuration shown ._ inset : _ histogram of independent estimates of using the replica gas method with attempted insertions / deletions , and .the exact value of , obtained by analytic quadrature , is . ]an illustrative non - positive definite integrand , .the scale in the center of the graph corresponds to the parameter in the uniform transition matrix for ( zero otherwise ) adopted for the results shown in the inset ._ inset : _ running estimate of using eq .( [ eq : trick ] ) , where is estimated via the replica gas method with varying number of replicas , eq .( [ eq : z_1st ] ) .the mean sign function of required by this last equality , , is also obtained from this run , by averaging over all replicas during the simulation .the dashed red line is the exact result .for this example , , and ordinary monte carlo moves per replica are performed between every attempted insertion / deletion ( gcmc step ) .the ensuing number of replicas fluctuated about . ]natural logarithm of the partition function of the two - dimensional ising model with spins , according to the replica gas method ( circles ) , and the exact kaufman formula ( dashed curve ) .the replica gas results were obtained using the fixed number of replicas version , eq .( [ eq : z_2nd_manybeta ] ) , with replicas at the temperatures corresponding to the data points shown ( similar results are obtained with varying number of replicas , see text ) , with error bars indicating the standard deviation of 8 independent runs .importance sampling results were obtained using the ideal ( non - interacting ) spin partition function and , with independent configurations sampled from the ideal reference system ( increasing this number to does not lead to appreciable changes in the results ) . for disordered states ( i.e. close to or higher than the critical temperature ) ,the partition function is no longer dominated by a small fraction of the configuration space , and the replica gas method converges more slowly with the adopted ( local ) transition matrix ( see also fig .[ fig : illustration2 ] ) .the replica exchange simulation took mc steps , with an attempted exchange every steps , where each mc step is a simple spin flip .for the transition matrix sampling , ( cf .discussion after eq .( [ eq : z_2nd_manybeta ] ) ) . 
]comparison of the merits of importance sampling ( left ) and replica gas ( right ) methods for sparse ( top ) and localized ( bottom ) integrands .integrands are represented by their densest regions ( shaded curvy shapes ) . for physical systems ,sparse integrands correspond to boltzmann factors at high temperatures , while at low temperatures the integrands tend to be localized in a small fraction of configuration space ( e.g. magnetized spin systems , crystals , proteins in their native state , etc ) . in importance sampling ,one typically has at their disposal a general - purpose sparse reference system ( polygon on the left ) such as an ideal gas , which is generally sufficient to ensure proper sampling at high temperatures , but not at lower ones .conversely , in replica gas methods one typically has at their disposal a local transition matrix ( small boxes on the right ) that is generally sufficient to sparingly `` cover '' the integrand of interest ( cf .efficiency criteria ( a ) and ( b ) discussed in the text ) at low temperatures , but not at higher temperatures . ]
owing to their favorable scaling with dimensionality, monte carlo (mc) methods have become the tool of choice for numerical integration across the quantitative sciences. almost invariably, efficient mc integration schemes are strictly designed to compute ratios of integrals, their efficiency being intimately tied to the degree of overlap between the given integrands. consequently, substantial user insight is required prior to the use of such methods, either to mitigate the oft-encountered lack of overlap in ratio computations, or to find closely related integrands of known quadrature in absolute integral estimation. here a simple physical idea (measuring the volume of a container by filling it up with an ideal gas) is exploited to design a new class of mc integration schemes that can yield efficient, absolute integral estimates for a broad class of integrands with simple transition matrices as input. the methods are particularly useful in cases where existing (importance sampling) strategies are most demanding, namely when the integrands are concentrated in relatively small and unknown regions of configuration space (e.g. physical systems in ordered/low-temperature phases). examples ranging from a volume with infinite support to the partition function of the 2d ising model are provided to illustrate the application and scope of the methods.
genome - wide association studies have identified hundreds of dna variants associated to complex traits including disease in human alone . to understand how these variants affect disease risk , genotype and organismal phenotype data are integrated with intermediate molecular phenotypes to reconstruct disease networks .a first step in this procedure is to identify dna variants that underpin variations in expression levels ( eqtls ) of transcripts , proteins or metabolites . as modern technologies routinely produce genotype and expression data for a million or more single - nucleotide polymorphisms ( snps ) and ten - thousands of molecular abundance traits in a single experiment , often repeated across multiple cell or tissue types ,the number of statistical tests to be performed when testing each snp for association to each trait is huge .furthermore , multiple testing correction requires all tests to be repeated several times on permuted data to generate an empirical null distribution . despite being trivially parallelisable ,the computational burden of testing snp - trait associations one - by - one quickly becomes prohibitive .recently a new approach ( `` matrix - eqtl '' ) was developed which uses the fact that the test statistics for the additive linear regression and anova models can be expressed as multiplications between rescaled genotype and expression data matrices , thereby realising a dramatic speed - up compared to traditional qtl - mapping algorithms .a limitation of these models is their assumption that the expression data is always normally distributed within each genotype group .for this reason , qtl and eqtl studies have frequently used non - parametric methods which are more robust against variations in the underlying genetic model and trait distribution . in particular , the non - parametric kruskal - wallis one - way analysis of variance does not assume normal distributions and reports small -values if the median of at least one genotype group is significantly different from the others .here we report a matrix - based algorithm ( `` krux '' ) , implemented in matlab , python and r , to simultaneously calculate the kruskal - wallis test statistics for several millions of snp - trait pairs at once that is more than ten thousand times faster than calculating them one - by - one on a human test dataset with more than 500,000 snps and 20,000 expression traits .additional benefits of krux include the explicit handling of missing values in both genotype and expression data and the support of genetic markers with any number of alleles , including variable allele numbers within a single dataset .krux takes as input genotype values of genetic markers and expression levels of transcripts , proteins or metabolites in individuals , organised in an genotype matrix and expression data matrix .genetic markers take values , where is the maximum number of alleles ( for biallelic markers ) , while molecular traits take continuous values .we use built - in functions of matlab , python and r to convert the expression data matrix to a matrix of data ranks , ranked independently over each row ( i.e. 
molecular trait). krux assumes that the input expression data has been adjusted for covariates if it is necessary to do so and that all data quality control has been performed. the genotype matrix is first converted to sparse logical index matrices of the same size, where if and otherwise ( ). next observe that the vector with entries and the matrices with entries are respectively the number of individuals and the sum of ranks for the trait in the genotype group of the marker. we can then calculate an matrix with entries using efficient vectorised operations. if none of the rows in contain ties, then each entry equals the kruskal-wallis test statistic for testing trait against marker. for markers with fewer than the maximum of genotype values, division will result in nan columns in the intermediate matrices with entries for the empty genotype groups. by replacing all nans by zeros before making the sum in eq. ([eq:2]), the corresponding entries in will be the correct statistics for a test with fewer than degrees of freedom. thus we need matrix multiplications and the associated element-wise operations to calculate the test statistic values for all marker-trait combinations. krux takes as input a -value threshold, calculates the corresponding test statistic thresholds for degrees of freedom ( ), and identifies the entries in which exceed the appropriate threshold value. for these entries only, a -value is calculated using the distribution. empirical false-discovery rate (fdr) values are computed by repeating the -value calculation (with the same ) multiple times on data where the columns of the expression data ranks are randomly permuted. the fdr value for any value is defined as the ratio of the average number of associations with in the randomised data to the number of associations with in the real data. when data values are missing for some marker or trait, all test statistics for that marker or trait need to be adjusted for a smaller number of observations. for the expression data, missing values are easily handled since the ranking algorithms will give nans the highest rank. by setting the entries corresponding to missing values in to zero in , eq. ([eq:4]) still produces the correct sums of ranks, while the matrix multiplication , where is the matrix with whenever and otherwise, produces the corrected number of individuals in the group of the marker for the trait. replacing the constant in eq. ([eq:2]) by a matrix , where is the number of non-missing samples for trait , and performing element-wise division and subtraction operations then gives the correct test statistic for all pairs. handling missing genotype data is less easy because the expression ranks that need to be adjusted are specific to each marker-trait combination (e.g. if a marker has a missing value where a trait has rank , then all samples with ranks need to be lowered by ). krux uses the fact that missing genotype values are generally due to sample quality and that patterns of missing values are therefore often repeated among markers.
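the matrix formulation just described can be sketched in python/numpy as follows. the variable names (g for genotypes, e for expression) and the toy data are ours, and the sketch deliberately omits the tie and missing-data corrections discussed in the text, so it is an illustration of the vectorised kruskal-wallis idea rather than the krux implementation itself.

```python
import numpy as np
from scipy.stats import rankdata, chi2

def kruskal_wallis_matrix(G, E, n_groups=3):
    """vectorised kruskal-wallis statistics for all marker-trait pairs.

    G: (m, n) genotype matrix with values 0 .. n_groups-1
    E: (k, n) expression matrix (k traits, n samples)
    returns S: (k, m) matrix of test statistics (no tie / missing-data correction).
    """
    m, n = G.shape
    R = np.apply_along_axis(rankdata, 1, E)      # rank each trait across samples
    S = np.zeros((E.shape[0], m))
    for j in range(n_groups):
        I = (G == j).astype(float)               # logical index matrix for group j
        n_j = I.sum(axis=1)                      # samples per marker in group j
        T = R @ I.T                              # (k, m) sums of ranks per group
        with np.errstate(divide="ignore", invalid="ignore"):
            S += np.nan_to_num(T ** 2 / n_j)     # empty groups contribute zero
    S = 12.0 / (n * (n + 1)) * S - 3.0 * (n + 1)
    return S

# toy usage: 5 markers, 4 traits, 50 samples
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(5, 50))
E = rng.normal(size=(4, 50))
S = kruskal_wallis_matrix(G, E)
# df should be reduced for markers with fewer observed genotype groups (cf. text)
p = chi2.sf(S, df=2)
print(S.shape, p.min())
```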
for each unique missing value pattern, a new genotype matrix for all markers with that pattern and a new expression data matrix with the corresponding samples removed are constructed to calculate the test statistics for all affected marker-gene combinations. missing genotype data increases the computational cost of the algorithm considerably, and it is recommended to limit the number of missing values by only considering markers with a sufficiently high call rate. in the presence of tied observations, the statistic in eq. needs to be divided by a factor , where the summation is over all groups of ties and, for each group of ties, , with the number of tied data in the group. the factor is automatically computed for each trait during the ranking step and the matrix is therefore easily corrected using element-wise matrix operations (matlab version only). whereas ties are usually rare in standard gene expression datasets, the ability to handle tied data expands the scope of krux to count-based, discretised or qualitative data types. since krux needs to create intermediate matrices of size , where is the number of traits and the number of markers, which do not usually fit into memory for large datasets, krux supports the use of data 'slices' to divide the complete data into manageable chunks. in typical applications, the number of markers is one or two orders of magnitude larger than the number of traits. therefore the default behaviour of krux is to keep the expression data as a single matrix and simultaneously test all traits against subsets of markers. the user can provide either a slice size, in which case krux will process marker blocks of this size serially, or a slice size and an initial marker, in which case krux will process a single slice starting from that marker. the latter option allows trivial parallelisation across multiple processors. to test krux we provide example analysis scripts and a small anonymised dataset of 2,000 randomly selected genes and markers from 100 randomly selected yeast segregants. here we describe an application of krux on a human dataset of 19,610 genes and 530,222 snp markers measured in 102 whole blood samples from the stockholm atherosclerosis gene expression (stage) study. all snps in the dataset had minor allele frequency greater than , no missing values and probability to be in hardy-weinberg equilibrium greater than . we first confirmed that krux produces the same results as testing marker-trait combinations one-by-one using the built-in kruskal-wallis functions, to verify the correctness of our implementations. to test the performance of krux we divided the genotype data into slices of variable size and extrapolated the total run time from running a single genotype data slice against all expression traits and multiplying by the number of slices needed to cover the entire set of 530,222 snps. the total run time rapidly decreases until a genotype slice contains about 1,000 snps and stays almost constant thereafter. on a laptop with 8 gb ram, the limit is reached at around 3,000 snps per slice, after which run time sharply increases again due to memory limitations (fig. [fig:cpu]). we therefore recommend a genotype slice size of around 2,000 markers, resulting for this dataset in around 250 separate jobs, which will take around 2,500 seconds (42 minutes) when run serially on a single processor.
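the slicing strategy described above can be sketched as a thin wrapper around the previous snippet; the slice size, the serial loop and the single-slice option are illustrative choices and do not reproduce krux's actual interface.

```python
import numpy as np

def kw_sliced(G, E, slice_size=2000, first_marker=None):
    """process genotype markers in blocks against the full expression matrix.

    if first_marker is given, only the single slice starting there is computed,
    which allows trivial parallelisation across processors.
    """
    m = G.shape[0]
    starts = [first_marker] if first_marker is not None else range(0, m, slice_size)
    results = {}
    for s in starts:
        block = G[s:s + slice_size]
        results[s] = kruskal_wallis_matrix(block, E)   # function from the sketch above
    return results
```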
by comparison, the total extrapolated run times when computing all 19,610 530,222 associations one-by-one using the built-in kruskal-wallis function on the same hardware as in fig. [fig:cpu] are respectively (256 gb, 2.20 ghz server) and (8 gb, 2.70 ghz laptop) seconds, such that krux is respectively 17,000 and 11,000 times faster on this particular dataset. on the same dataset and hardware, the comparatively simpler matrix operations for the parametric tests in matrix-eqtl took respectively 5 minutes (linear model) and 7.4 minutes (anova model). next we compared the output of krux and matrix-eqtl's parametric anova and linear model (henceforth called ``anova'' and ``linear'') methods. the kruskal-wallis test is more conservative than the anova and linear methods, i.e. it has a higher nominal -value for almost all marker-trait combinations (fig. [fig:pval]). since random data will be subjected to the same biases, nominal -values cannot be directly compared to assess significance. we therefore performed empirical fdr correction for multiple testing using three randomised datasets (cf. implementation). surprisingly, after fdr correction only a limited number of associations remained for anova, even at an fdr threshold of , whereas the number of associations detected by krux and the linear method was comparable (fig. [fig:fdr](a)). detailed analysis showed that this is due to the pairing of snps with rare homozygous minor alleles (one or two samples) with genes with outlier expression levels, resulting in extremely low -values for the anova method in real as well as randomised data (see also below). to reduce the incidence of chance associations between singleton genotype groups and outlying expression values in the anova method, we repeated the empirical fdr correction, this time keeping only marker-trait combinations within 1 mbp of each other (``cis-eqtls''). at an fdr threshold of , the number of significant _cis_-eqtl-gene pairs is indeed comparable between the three methods, with a large proportion of pairs detected by all three of them (fig. [fig:fdr](b)). -values calculated by krux vs. parametric anova *(a)* and linear models *(b)*, showing all _cis_-acting eqtl-gene pairs with detected by both methods (blue dots) and by only one of the methods (red crosses). the black line indicates the line with slope . ] we classified eqtl-gene pairs as ``skewed group sizes'' (smallest genotype group with fewer than 5 elements), non-skewed ``non-linear'' [median of heterozygous and homozygous samples significantly different (wilcoxon rank sum)] and non-skewed ``other'' (all others). _cis_-associations identified exclusively by the kruskal-wallis test are more often non-linear, and the overall distribution of eqtl types is more similar to the associations identified by all three methods, compared to the anova and linear methods (fig. [fig:groups] and fig. [fig:box](a-b)). of the 701 associations exclusively identified using the parametric anova method, 657 ( ) had skewed group sizes, including 426 ( ) with a singleton genotype group (the aforementioned 'outliers', cf. [fig:box](c)). the associations exclusively identified by the linear method also contained a much higher proportion of snps with skewed group sizes than the corresponding krux associations ( vs.
) and, as expected, a reduced number of non-linear associations (fig. [fig:groups] and fig. [fig:box](d)). we have developed krux, a software tool that uses matrix multiplications to simultaneously calculate the kruskal-wallis test statistics for millions of marker-trait combinations in a single operation, thereby realising a dramatic speed-up compared to calculating the test statistics one-by-one. the availability of a fast method to identify eqtl associations using a non-parametric test allowed us to assess in more detail how differences in model assumptions compared to parametric methods lead to differences in the identified eqtls. our results on a typical human dataset indicate that the parametric anova method is highly sensitive to the presence of outlying gene expression values and snps with singleton genotype groups. we caution against its use without prior filtering of such outliers. linear models reported the highest number of eqtl associations after empirical fdr correction. these are understandably biased towards additive linear associations and were also sensitive to the presence of skewed genotype group sizes, albeit to a much lesser extent than the parametric anova method. the kruskal-wallis test, on the other hand, is robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but it is more conservative for calling additive linear associations than linear models, even after fdr correction.
the kruskal - wallis test is a popular non - parametric statistical test for identifying expression quantitative trait loci ( eqtls ) from genome - wide data due to its robustness against variations in the underlying genetic model and expression trait distribution , but testing billions of marker - trait combinations one - by - one can become computationally prohibitive . we developed krux , an algorithm implemented in matlab , python and r that uses matrix multiplications to simultaneously calculate the kruskal - wallis test statistic for several millions of marker - trait combinations at once . krux is more than ten thousand times faster than computing associations one - by - one on a typical human dataset . we used krux and a dataset of more than 500k snps and 20k expression traits measured in 102 human blood samples to compare eqtls detected by the kruskal - wallis test to eqtls detected by the parametric anova and linear model methods . we found that the kruskal - wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non - linear associations , but is more conservative for calling additive linear associations . in summary , krux enables the use of robust non - parametric methods for massive eqtl mapping without the need for a high - performance computing infrastructure and is freely available from http://krux.googlecode.com .
the objectives of quantum control are to find ways to manipulate the time evolution of a quantum system such as to drive an initial given state to a pre - determined final state , the target state ; or optimize the expectation value of a target observable . among a wealth of applicationsare those to quantum computing , where it is clearly essential to be able to start off a quantum procedure with a given initial state , and to problems involving the population levels in atomic systems , such as the laser cooling of atomic or molecular systems .the mathematical tools necessary for the theoretical investigation of these control problems are diverse , involving algebraic , group theoretic and topological methods . the questions that one may ask include : when is a given quantum system completely controllable ?if a system is not completely controllable , how does this affect optimization of a given operator ?how near can you get to a target state for a not completely controllable system ?the answer to the first question depends on a knowledge of the lie algebra generated by the system s quantum hamiltonian , that to the second arises from properties of the lie group structure , while the last clearly involves ideas of topology . especially in the area of lie group theory, there is a large corpus of classical mathematics which can supply answers to questions arising in quantum control .in particular , for the type of controllability known as _ pure state controllability _ classical lie group theory has already given the basic results .the quantum control system we shall consider is typically of the form where is the internal hamiltonian of the unperturbed system and are interaction terms governing the interaction of the system with an external field . the dynamical evolution of the system is governed by the unitary evolution operator , which satisfies the schrodinger equation with initial condition , where is the identity operator . by use of the magnus expansion , it can be shown that the solution involves all the commutators of the .the operators , , in ( [ eq : h ] ) are hermitian .their skew - hermitian counterparts generate a lie algebra known as the dynamical lie algebra of the control system which is always a subalgebra of , or for trace - zero hamiltonians , .the degree of controllability is determined by the dynamical lie algebra generated by the control system hamiltonian . if then _ all _the unitary operators are generated and we call such a system _ completely controllable_. a large variety of common quantum systems can be shown to be completely controllable .the interesting cases arise when is a proper subalgebra of .such systems may still exhibit _ pure state controllability _ , in that starting with any initial pure state any target pure state may be obtained , as distinct from the completely controllable case , when all ( kinematically admissible ) states pure or mixed may be achieved .we shall restrict our attention here to finite - level quantum systems with discrete energy levels .the pure quantum states of the system are represented by normalized wavefunctions , which form a hilbert space .however , the state of a quantum system need not be represented by a pure state .for instance , we may consider a system consisting of a large number of identical , non - interacting particles , which can be in different internal quantum states , i.e. 
, a certain fraction of the particles may be in quantum state , another fraction may be in another state and so forth .hence , the state of the system as a whole is described by a discrete ensemble of quantum states with non - negative weights that sum up to one .such an ensemble of quantum states is called a _ mixed - state _ , and it can be represented by a density operator on with the spectral representation where is an orthonormal set of vectors in that forms an ensemble of independent pure quantum states .the evolution of is governed by with as above .clearly if _ all _ the unitary operators can be generated we have the optimal situation , complete controllability .however , classical lie group theory tells us that even when we only obtain a subalgebra of we can obtain pure state controllability .the results arise from consideration of the transitive action of lie groups on the sphere .the classical `` orthogonal '' groups where the field is either the reals , the complexes or the quaternions , are defined to be those that keep invariant the length of the vector ; the squared length is given by , where refers to the appropriate conjugation .these compact groups are , essentially , the only ones which give transitive actions on the appropriate spheres , as follows : transitive on transitive on transitive on .since we may regard our pure state as a normalized vector in and thus as a point on , we obtain pure state controllability only for ( or if we are not too fussy about phases ) and , the latter for even only .( note that we can not get as a subalgebra of . )complete controllability is clearly a stronger condition than pure state controllability . to illustrate our theme of the limitations on quantum control , we now give two examples based on a truncated oscillator with nearest - level interactions for which the algebras generated are and these examples are generic .consider a three - level system with energy levels , , and assume the interaction with an external field is of dipole form with nearest neighbor interactions only .then we have , where the matrix representations of and are , \quad { h}_1 = \left[\begin{array}{ccc } 0 & d_1 & 0 \\ d_1 & 0 & d_2 \\ 0 & d_2 & 0 \end{array}\right].\ ] ] if the energy levels are equally spaced , i.e. , and the transition dipole moments are equal , i.e. 
, then we have , \quad { h}_1 = d \left[\begin{array}{ccc } 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right]\ ] ] where is the traceless part of .both and satisfy where \ ] ] which is a defining relation for .the dynamical lie algebra in this case is in fact .it is easy to show that the matrix is a real anti - symmetric representation of if .explicitly , a suitable unitary matrix is given by .\ ] ] since the dynamical algebra and group in the basis determined by consists of _ real _ matrices , real states can only be transformed to real states ; this means that for any initial state there is a large class of unreachable states .this example is generic as it applies to -level systems , although for even we need other than dipole interactions to generate .the analogous dipole interaction generates in the even case , as we now illustrate .consider a four - level system with hamiltonian , and note that and satisfy for where is unitarily equivalent to which is a defining relation for .consider an initial state of the form where satisfies ( [ eq : sp ] ) , it can only evolve into states where satisfies ( [ eq : sp ] ) , under the action of a unitary evolution operator in exponential image of .hence , any target state that is not of the form ( [ eq : rho_1 ] ) is not accessible from the initial state ( [ eq : rho_0 ] ) .note that the initial state is of the form and that satisfies ( [ eq : sp ] ) . consider the target state which is clearly kinematically admissible since but note that and does not satisfy ( [ eq : sp ] ) .hence , is not dynamically accessible from for this system . given a target state that is not dynamically accessible from an initial state , we can easily construct observables whose kinematical upper bound for its expectation value can not be reached dynamically .simply consider .the expectation value of assumes its kinematical maximum only when the system is in state .since is not reachable , the kinematical upper bound is not dynamically realizable .in this short introduction to quantum control theory , we have described the goals of the subject briefly , and then illustrated the limitations by generic examples where complete control is not possible .the tools we have used are , in the main , those of classical lie group theory .theoretical problems that remain to be tackled include to what extent these non - controllable systems can , in fact , be controlled ; and , of course , the paramount problem of implementing these controls in practice .it gives us great pleasure to acknowledge many conversations with colleagues , including drs .hongchen fu and gabriel turinici .a.i.s would also like to thank the laboratoire de physique thorique des liquides , of the university of paris vi , for hospitality during the writing of this note .
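as a numerical aside to the three-level example above, the following python sketch builds the traceless drift hamiltonian and the dipole interaction for equal level spacing and equal dipole moments (with units omega = d = 1 chosen by us) and computes the dimension of the real lie algebra generated by their skew-hermitian counterparts. the printed value 3, rather than 8 for the full su(3), reflects the so(3) dynamical algebra discussed in the text; the closure routine itself is a generic sketch, not part of any quoted reference.

```python
import numpy as np
from itertools import combinations

def _vec(m):
    # real vector representation, so linear independence is tested over the reals
    return np.concatenate([m.real.ravel(), m.imag.ravel()])

def lie_closure_dim(generators, tol=1e-9, max_depth=10):
    """dimension of the real lie algebra generated by skew-hermitian matrices."""
    basis = []

    def try_add(m):
        stacked = np.array([_vec(b) for b in basis] + [_vec(m)])
        if np.linalg.matrix_rank(stacked, tol=tol) > len(basis):
            basis.append(m)
            return True
        return False

    for g in generators:
        try_add(g)
    for _ in range(max_depth):
        added = False
        for a, b in combinations(list(basis), 2):
            added |= try_add(a @ b - b @ a)
        if not added:
            break
    return len(basis)

# equally spaced three-level system with equal dipole moments (omega = d = 1)
H0 = np.diag([-1.0, 0.0, 1.0])                # traceless part of the drift hamiltonian
H1 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
print(lie_closure_dim([-1j * H0, -1j * H1]))  # prints 3: so(3), not the full su(3) (dim 8)
```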
in this note we give an introduction to the topic of quantum control , explaining what its objectives are , and describing some of its limitations .
in the sir model, the fixed population is divided into three distinct groups: susceptible, infected and removed. those at risk of the disease are susceptible, those that have it are infected, and those that are either quarantined, dead, or have acquired immunity are removed. the following flow chart shows the basic progression of the sir model [daley1]. and are the infection coefficient and removal rate, respectively. a discrete model was adapted by gani from the original sir model through a coarse-graining process and was applied to successfully predict outbreaks of influenza epidemics in england and wales [gani1, spicer1]:
$$\begin{aligned}
s_{i+1} &= s_i - b\,s_i i_i \notag\\
i_{i+1} &= i_i + b\,s_i i_i - a\,i_i \label{sir2}\\
r_{i+1} &= r_i + a\,i_i \notag
\end{aligned}$$
during the epidemic process, is fixed. initially, we examine the data for the first 15 days to estimate the parameters and of sars for hk. the only data are new cases (removals) announced every day by the hk dept. of health from march 12, 2003, followed by a revised version later. to avoid inadvertently using future information we do not use the revised data at this stage. population ; since it is reasonable to let in the right-hand side of ([sir2]). and is set as the initial condition. is replaced with , whereas is uncertain. this assumption implies the incubation period is only one day, in spite of the fact that the true incubation period of the coronavirus is 2-7 days. the parameters and are scanned for the best fit for the stage. for every , a sequence of is obtained by numerical simulations. a euclidean norm of , which indicates a distance between the true and simulated data, is applied to measure the fit. in fig. [graph1] the lowest point gives the best-fit parameters for this stage, and this value is used for the following prediction. we get and . the method is applied to get the parameters of and in figs. [graph2] and [graph3]. vs. and plotted as dots for the first 15 days of sars data for hk. the natural logarithm is applied. and are the original and simulated data, respectively. the thick red curve shows the bottom of the sharp valley clearly. the lowest point of the valley corresponds to the best fit: and . ] a prediction of the trend is available based on the parameters and of this stage; the middle day (march 20, 2003) of the stage is taken as the first day and is assumed as . a curve of squares is plotted in fig. [graph2] for the first stage prediction. in the same way the best fit and prediction are applied for the next two stages and plotted in fig. [graph2]. the best fit is also processed weekly for detailed discussion later. and ), circles ( and ) and triangles ( and ) are the predictions for the 3 stages described in the text. each stage has 15 days. the curve of black dots is for the revised data. ] the curves for the predictions clearly show the 3 stages in fig. [graph2].
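a minimal python sketch of the discrete sir iteration written above and of the brute-force (b, a) scan measured by a euclidean norm. the population size, the search ranges and the synthetic 'observed' removal counts are illustrative placeholders, not the hk data.

```python
import numpy as np

def discrete_sir(b, a, N, I0, days):
    """iterate the discrete sir model; returns the daily new removals a * I_i."""
    S, I = N - I0, I0
    new_removals = []
    for _ in range(days):
        new_inf, new_rem = b * S * I, a * I
        S, I = S - new_inf, I + new_inf - new_rem
        new_removals.append(new_rem)
    return np.array(new_removals)

def fit_stage(observed, N, I0):
    """scan (b, a) and keep the pair minimising the euclidean distance to the data."""
    best = (None, np.inf)
    for b in np.linspace(2e-8, 2e-7, 40):      # illustrative search ranges
        for a in np.linspace(0.2, 1.0, 40):
            d = np.linalg.norm(discrete_sir(b, a, N, I0, len(observed)) - observed)
            if d < best[1]:
                best = ((b, a), d)
    return best

# synthetic example standing in for the first 15 days of daily removal counts
N, I0 = 6.8e6, 40
true_curve = discrete_sir(1.0e-7, 0.6, N, I0, 15)
observed = true_curve + np.random.default_rng(2).normal(0.0, 3.0, size=15)
(b_fit, a_fit), dist = fit_stage(observed, N, I0)
print(b_fit, a_fit, dist)
```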
the first stage (march 12-27, 2003) exhibits dangerous exponential growth. it shows that the extremely infectious sars coronavirus spread quickly in the public with few protections during this early stage. more seriously, it leads to a higher infection peak, although in this stage the averaged number of new cases is below 30. this prediction gets confirmation in the second stage (march 28-april 11, 2003). the peak, characterized by the amoy gardens outbreak, comes earlier and higher. it indicates the appearance of a new transmission mode that differs from the intimate-contact route observed in the first stage. we name the new transmission mode growth, in contrast to transmission in the first stage which we refer to as growth. outbreaks at the prince of wales hospital (pwh) (where sars patients received treatment) and amoy gardens (a high-density housing estate in hong kong) represent infections of the and modes in the full epidemic, respectively. the revised data provided by the hk dept. of health in fig. [graph2] present a clearer indication of these two transmission modes. the outbreak at pwh sent the clear message that intimate contact with sars patients led to infection. detailed investigations of the propagation at amoy gardens suggested that faulty sewerage pipes allowed droplets containing the coronavirus to enter neighbouring units vertically in the building. furthermore, poor ventilation of lifts and rat infestation were also suggested as possible modes of contamination. however, control measures brought the spread of the disease under control in the second stage. contrary to the increasing trend in the first stage, the prediction curve of the 2nd stage (circles) declines, and it anticipates that new cases drop below 10 before the 60th day (may 10, 2003). also, on april 12, 2003 we predicted that the number of whole-epidemic infection cases would reach 1700. up to april 11, 2003 there were 32 deaths and 169 recoveries. we calculated the mortality of sars as the ratio of deaths to the sum of deaths and recoveries, and it was 15.9%. therefore we predicted approximately 270 fatalities in total. in the third stage (april 12-27, 2003), the triangle curve in fig. [graph2] refines these results and gives a more accurate prediction. this stage predicts that new cases per day drop to 5 before the 62nd day, may 12, 2003. the travel warning for hk was cancelled by the who because hk had kept new cases below 5 for 10 days since may 15, 2003. and, finally, we predict that the total number of cases reaches 1730 with nearly 287 deaths (up to april 27, 2003 there were 668 recovered and 133 deaths; the mortality increased to 16.6%). these numbers are very close to the true data. the method drawn from the sir model has thus been verified for prediction over the full epidemic. however, this accuracy is only possible by first dividing the epidemic into separate ``stages''. the problem of determining to what extent an epidemic is under control is of greater strategic significance. information on the efficacy of epidemic control will help determine whether to apply more control policies or not, and to balance their cost and benefit. for each individual the same question will also inform the degree to which precautions are taken: i.e. wearing a surgical mask to prevent the spread (or acquisition) of sars. obviously a way to estimate the control level is required. actually this is a difficult problem because of significant statistical fluctuations in the data.
a quite simple method to evaluate control efficiency is discussed below. in the sir model ([sir2]), if , the disease is regarded as being controlled, as new cases will decrease. this inequality leads to a control criterion for some diseases in epidemiology research. applying the approximation of , we get a threshold from ([sir2]). we rescale ( is called the infection rate in place of the infection coefficient now) and then get the threshold , which is free from the population. indicates that the removal rate exceeds the infection rate. in fig. [graph3] the circles show the evaluations for the 3 stages, with the dashed line of . the parameters and of the sars data for hk are also estimated weekly, shown as squares in fig. [graph3]. the line of is regarded as the critical line, since the number of infected cases increases when passes through it from below. it is possible to apply the diagram of and to compare control levels for different countries and areas, even for different diseases. this provides organizations like the who with a simple and standard method to monitor the infection level of any disease. the limitation of the method comes from the assumptions of the sir model. more accurate models may provide better estimates of the epidemic state and future behaviours. in this panel the line of dots is regarded as the alert or critical line. parameters and below it indicate that the epidemic is controlled; above the line indicates uncontrolled growth. the parameter is rescaled by . ] in summary, a discrete sir model gives good predictions for the sars epidemic in hk. two distinct modes are described for the disease propagation dynamics; in particular, the mode is much more hazardous than the mode. we have introduced a simple method to evaluate control levels. the method is generic and can be widely applied to various epidemiological data. contrary to the long-established sir model, epidemiology research using small-world (sw) network models is a young and growing area. the concept of sw was imported from the study of social networks into the natural sciences in 1998. however, it provides novel insight into networks and has stimulated many explorations of the brain, social networks and the internet. some sw networks also exhibit a scale-free (power-law) degree distribution: a node in such a network has probability of connecting to nodes, and is one basic characteristic of the system. most research on virus spreading with the sw model concerns the propagation of computer viruses on the internet; however, a few studies have been published relating to epidemiology [ebel1, liljeros1, strogatz1, kuperman1, huerta1]. the dynamics of sw models enriches our understanding of epidemics and possibly provides better control policies. from the first sars patient to the last one, an epidemic chain is embedded in the scale-free sw network of social contacts. it is of great importance to discover the underlying structure of the epidemic network, because a successful quarantine of all possible candidates for infection will lead to a rapid termination of the epidemic. in hk, e-sars, an electronic database capturing on-line and in real time the clinical and administrative details of all sars patients, provided invaluable quarantine information by tracing contacts. unfortunately a full epidemic network of sars for hk is still unavailable.
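assuming the discrete sir form used above, the control criterion reduces to comparing the rescaled infection rate k = b n with the removal rate a. the helper below makes that comparison explicit; the weekly (b, a) values fed to it are made-up examples, not the estimates obtained from the hk data.

```python
def control_level(b, a, N):
    """rescaled infection rate k = b * N compared with the removal rate a.

    k < a  -> epidemic under control (new cases decrease)
    k > a  -> uncontrolled growth
    """
    k = b * N
    return k, a, "controlled" if k < a else "uncontrolled"

# made-up weekly estimates, not the hk values
for week, (b, a) in enumerate([(2.2e-7, 0.4), (1.1e-7, 0.6), (5.0e-8, 0.7)], start=1):
    print(week, control_level(b, a, 6.8e6))
```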
because data representing the underlying network structure are currently not available, we have no choice other than numerical simulations. therefore, our analysis of the sw model is largely theoretical. the only confirmation of our model we can offer is that the data appear to be realistic and exhibit the same features as the true epidemic data. to simulate an epidemic chain, a simple model of social contacts is proposed. the model is established on a grid network woven from horizontal and vertical lines. every node in the network represents a person. we set with population . all nodes are initiated with a value of 0 (named nodes). every node has 2, 3 or 4 nearest neighbours as short range contacts, for corner, edge and centre nodes, respectively. for every node there are two long range contacts with 2 other nodes randomly selected from the whole system every day. these linkages model the social contact between individuals (i.e. social contacts that are sufficiently intimate to put individuals at risk of spreading the disease). one random node of the system is set to 1 (called the node); through its short and long range contacts, the value of the nodes linked with it turns into 1 with probability and , respectively. an infection happens if a node changes its value from 0 to 1. this change is irreversible. during the full simulation process, an infected node is not removed from the system. we make this assumption because the number of deaths is small in comparison with the population, and there is no absolute quarantine: even a sars patient in hospital can infect medical workers. moreover, the treatment period for sars is relatively lengthy, and during this time infected individuals are highly infectious. to reflect the true variation in control strategy and individual behaviour, the control parameters ( ) vary with time. nodes with fixed infection probability , which eventually leads to a full infection. a) the full epidemic curve of the simulated epidemic. b) at time 45, the infected cases scatter in clusters across the map. the infection spreads over the full map because of random connections. the clusters show the effect of the nearest-neighbour connections. ] in the same model it would be interesting to compare with the complete infection epidemic. a small system with nodes is chosen. a fixed probability leads to eventual infection of the entire population, since all nodes are linked and the infected are not removed. the epidemic curve for this process is plotted in fig. [graph4] a) and is typical of many plagues. in fig. [graph4] b) various sizes of clusters with infected nodes (black dots) scatter over the geographical map. it contains all infection events in the first 45 days. compared with the true sars infection distribution in hk, only slight similarity is observed. the short and long range linkages give good infection dynamics, as we expected. the epidemic chain is easily drawn by recording each infection event from the seed to the last patient. however, during simulations there is a problem: for a node linked by more than one node, which node infects it? the widely accepted preferential attachment of _rich-get-richer_ is a good answer. a linear preference function is applied here.
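the lattice model just described can be sketched compactly in python: an l x l grid with nearest-neighbour short range contacts and two random long range contacts per infected node per day, with infection probabilities p_short and p_long, a single seed, irreversible 0 -> 1 changes and no removal. the grid size, probabilities and time horizon are illustrative choices, and the bookkeeping of the epidemic chain and preferential attachment is omitted.

```python
import numpy as np

def sw_epidemic(L=60, p_short=0.05, p_long=0.02, days=40, seed=3):
    """simulate daily spread on an L x L grid; returns the new infections per day."""
    rng = np.random.default_rng(seed)
    infected = np.zeros((L, L), dtype=bool)
    infected[rng.integers(L), rng.integers(L)] = True      # single seed node
    daily_new = []
    for _ in range(days):
        newly = np.zeros_like(infected)
        xs, ys = np.nonzero(infected)
        for x, y in zip(xs, ys):
            # short-range contacts: 2-4 nearest neighbours on the grid
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L and rng.random() < p_short:
                    newly[nx, ny] = True
            # two long-range contacts chosen anywhere on the grid
            for _ in range(2):
                rx, ry = rng.integers(L), rng.integers(L)
                if rng.random() < p_long:
                    newly[rx, ry] = True
        newly &= ~infected           # an infection is the irreversible 0 -> 1 change
        daily_new.append(int(newly.sum()))
        infected |= newly            # infected nodes are never removed
    return daily_new

print(sw_epidemic()[:10])
```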
with the presence of both growth and preferential attachment, it is natural to ask whether the chain is a scale-invariant sw network. we plot the distribution in fig. [graph5] a) and b) in log-log and log-linear coordinates, respectively. the hollow circles are for the system with nodes. its scaling behaviour looks more like piecewise linear (i.e. bilinear) in b) rather than a power law in a). to confirm this we tried a larger system with nodes and the same fixed in the remaining curves in figs. [graph5]. the solid square curve is the distribution of the whole epidemic network. the linear fit for the solid squares gives a correlation coefficient of in fig. [graph5] a). in b) piecewise linear fits have of and . these provide positive support for the original model. the cases from the first 30% and 1% of the whole process for this bigger system are plotted as the hollow-triangle and solid-circle curves in figs. [graph5], respectively. the 1% case, the early stage of the full infection epidemic, suggests a better scale-free fit than the other cases, since it has a correlation coefficient of and ratio for the linear fit in a). for the solid-circle curve, piecewise linear fits give of and in fig. [graph5] b). so, what scaling behaviour is true during the full infection process? in the absence of a rigorous proof this problem is hard to answer correctly from simulations. we may only draw conclusions based on which simulation most closely matches the qualitative features of the observed data. the curves are for one with nodes. the curves of dots, triangles and squares are for the epidemic networks of 1%, 30% and 100% of the full infection process. the two dashed lines are piecewise linear fits for the full infection process with nodes. the ratio is and with and , respectively. it clearly shows that the curves of the full epidemic have two piecewise linear parts in the log-linear graph. in the early stage of a full infection, the scaling behaviour of the 1% infected (dots) has a part which might be regarded as log-log linear. its ratio is with . in other words, it has a scale-free part with . ] let us return to the system with nodes for modelling sars for hk. the probability is fitted to the true epidemic data. however, it is fruitless to seek an exact coincidence between the simulated results and the true data, as the model evolution is highly random (moreover, this would result in overfitting). the control parameters (dotted and dashed curves) and a simulated epidemic curve (black dots) with the column diagram of sars for hk are plotted in figs. [graph6] a) and b), respectively. for the simulated data, the total number of cases is 1830, which deviates by 4.3% from the true total of 1755. contrary to the above full infection with fixed parameters, are believed to drop exponentially and to lead to an epidemic infecting only a small part of the population, even without quarantine or removal. in any case, the high probability of infection in the early stage is indicative of the critical ability of the sars coronavirus to attack an individual without protection. if sars returns, the same high initial infection level is likely to occur. the only hope to avoid a repeat of the sars crisis of 2003 is to shorten the high-infection stage by quick identification, wide protection and sufficient quarantining.
in other words, the best time to eliminate a possible epidemic is the moment that the first patient surfaces. any delay may lead to a worsening crisis. the long range infections in the model and in the world also imply that an efficient mechanism to respond rapidly to any infectious disease is required to establish global control. a) the short and long range linkage infection probability generally declines exponentially. b) the simulated epidemic curve (black dots) in the model is plotted with the original sars data (grey columns) for hk. ] data on the geographical distribution of sars cases in hk are much easier to collect than the full epidemic chain. numerical simulations readily provide both. the full geographical map marked with all infected nodes (black dots) and an amplified window of a cluster are plotted in figs. [graph7] a) and b), respectively. similarity is expected, and is verified in the cluster patterns of figs. [graph4] b). the scaling behaviour of the epidemic chain is plotted in figs. [graph8]. the curve in the log-log diagram exhibits a power-law coefficient of and gives a linear-fit correlation coefficient of . the piecewise linear fits for the log-linear case give correlation coefficients of and , respectively. for a sw network, often a few vertices play more important roles than others. the sars super-spreaders found in hk, singapore and china are consistent with this. data for the early spreading of sars in singapore show a definite sw structure, with a small number of nodes having a large number of links. the average number of links per node also shows a scale-free structure, but the available data are extremely limited (the linear scaling can only be estimated from three observations). this characteristic is also verified in our model. the first few nodes have a high chance of infecting a large number of individuals. in fig. [graph8], a single node has 40 links. clearly the index node has many long range linkages. it has been suggested that travelling in crowded public places (train, hospital, even an elevator) without suitable precautions can cause an ordinary sars patient to infect a significant number of others. again, this is an indication that increasing an individual's (especially a probable sars patient's) personal protection is key to rapidly controlling an epidemic. actually, in our model, if the duration of the early high-probability stage is reduced to less than 10 days, the infection scale decreases sharply. an intriguing phenomenon is the points of inflection in the curves in the log-linear diagrams of figs. [graph8] and [graph5] b). all are located near a linkage number of 6-7. on average a node has about 6 contacts (2-4 short plus 2 long range) every day, although over the whole process there is no limit on the number of linkages. for a growing random network, a general problem always exists relating its scaling behaviour, preferential attachment and dynamics, even when embedded in a geographical map [cohen1, warren1, krapivsky1, dorogovtsev1, moore1]. more work is required to address this issue. in conclusion, a sw epidemic network is simulated to model the spreading of sars in hk. a comparison of the simulations with the full infection data is presented. our discussion of the infection probability and the occurrence of super-spreaders leads to the obvious conclusion: rapid response by individuals and government is key to eliminating an epidemic with limited impact and at minimal cost.
a simplified susceptible - infected - recovered ( sir ) epidemic model and a small - world model are applied to analyse the spread and control of severe acute respiratory syndrome ( sars ) for hong kong in early 2003 . from data available in mid april 2003 , we predict that sars would be controlled by june and nearly 1700 persons would be infected based on the sir model . this is consistent with the known data . a simple way to evaluate the development and efficacy of control is described and shown to provide a useful measure for the future evolution of an epidemic . this may contribute to improve strategic response from the government . the evaluation process here is universal and therefore applicable to many similar homogeneous epidemic diseases within a fixed population . a novel model consisting of map systems involving the small - world network principle is also described . we find that this model reproduces qualitative features of the random disease propagation observed in the true data . unlike traditional deterministic models , scale - free phenomena are observed in the epidemic network . the numerical simulations provide theoretical support for current strategies and achieve more efficient control of some epidemic diseases , including sars . during 2003 sars killed 916 and infected 8422 globally . in hong kong ( hk ) , one of the most severely affected regions , 1755 individuals were infected and 299 died . sars is caused by a coronavirus , which is more dangerous and tenacious than the aids virus because of its strong ability to survive in moist air and considerable potential to infect through close personal contact . unlike other well - known epidemic diseases , such as aids , sars spreads quickly . although significant , its mortality rate is , fortunately , relatively low ( approximately 11%) . researchers have decoded the genome of sars coronavirus and developed prompt diagnostic tests and some medicines , a vaccine is still far from being developed and widely usedmarra1,rota1,stadler1 . the danger of a recurrence of sars remains . irrespective of pharmacological research , the epidemiology study of sars will help to prevent possible spreading of similar future contagions . generally , current epidemiological models are of two types . first , the well - known susceptible - infected - recovered ( sir ) model proposed in 1927kermack1,daley1 . second , the concept of small - world ( sw ) networkswatts1 . arousing a new wave of epidemiological research , the sw model has made some progress recently . our work aims to model sars data for hk . practical advice for a better control are drawn from both the sir and sw models . in particular , a generalized method to evaluate control of an epidemic is promoted here based on the sir model with fixed population . using this method , measuring the spread and control of various epidemics among different countries becomes simple . quick action in the early stage is highlighted for both government and individuals to prevent rapid propagation .
in proton beam therapy treatment planning, analytical descriptions of the treatment beams are commonly used to determine the dose deposited in clinical patient models. although monte carlo based methods have become faster during the last few years, there is still a distinct advantage in using more efficient, analytical models when multiple calculations have to be performed, such as in the process of 4d robust optimization and adaptive therapy. this advantage has greater significance in the case of pencil beam based proton therapy, where multiple small beams need to be tracked and calculated. despite all these advantages, analytical algorithms have been shown to be less reliable in clinically more complex treatments like lung and breast treatments, which prompted this effort to provide a more accurate description of the dose deposition by a scanned proton beam. in addition, a more analytical description provides greater insight into the macroscopic process of how a pencil beam behaves physically in a medium as issues such as energy and medium vary. a pencil beam entering a medium will generate secondary particles such as scattered neutrons, generated photons, and large-angle scattered protons that produce a _nuclear halo_ of dose around the central beam axis. although the contribution from a single pencil beam in the region away from the central axis is small, a complex treatment plan is made of many pencil beams and the summation of the lateral contributions could be significant. the lateral extent of a beam at different depths is illustrated in figure [depth_change]. monte carlo based calculation showing changes in lateral dose deposition for a pencil beam of nominal energy of 230 mev. a logarithmic scale is used to better illustrate the difference in contributions from the nuclear halo and the primary particles. ] the description and treatment of the nuclear halo has been the focus of research by a number of groups, who have proposed various methodologies describing the effects in an analytical way. gottschalk et al. provided an in-depth analysis of all the physical processes contributing to the nuclear halo, subdividing a pencil beam into a combination of four distinct regions: core, halo, aura and (possibly) spray. this approach requires up to 25 different physical parameters to characterise the beam. in most implementations, the contribution from the nuclear halo is handled by adding different distributions to a central gaussian distribution describing the core of the pencil beam. the first proposed solution for the nuclear halo, by pedroni et al., added another, broader gaussian to the core, a methodology that is implemented in the varian eclipse (varian medical systems, palo alto, ca) calculation algorithm. the simplicity of this calculation also allowed for faster gpu implementation. in further refinements of this approach, other groups attempted combinations of gaussian, lorentz (also known as cauchy) and lévy distributions, increasing the complexity of the fitting procedure and necessitating look-up tables for the various parameters. a key insight that enables our novel approach is that each of these methods combines two or more stable distributions in their analytical representation. a further clinically used algorithm for pencil beam calculation is used in the raystation tps (raysearch laboratories, stockholm, sweden).
in this system, each spot is modelled as a superposition of 19 gaussian distributions (19 sub-spots: 1 at the centre, and 6 and 12 positioned on two concentric circles around the centre). in this paper we review the concept of stable distributions and show that they can be used to represent the evolution of a proton pencil beam in a medium. we demonstrate that this approach provides a more accurate description of the pencil beam and is more efficient than the use of normal distributions, or a sum thereof. furthermore, we show that this parametrization allows interpolation of non-measured energies from measured (or calculated using monte carlo) depth profiles. finally, we implement this algorithm in an open source treatment planning toolkit, matrad. stable distributions are a class of distributions which generalize a property of the normal distribution. namely, they extend the central limit theorem: if the number of samples drawn from random variables, _with or without_ finite variance, tends to infinity, then the distribution of their (suitably normalised) sum tends to a stable distribution. if the variance is finite, the resulting distribution tends towards the normal distribution, itself a member of the class of stable distributions. other than for specific cases, these distributions do not possess an analytical representation. it is therefore necessary to describe them in terms of their characteristic function, which always exists for a given stable distribution. more generally, the characteristic function , of a distribution is the fourier transform of the probability function , of that distribution, i.e.: it can be shown that all stable distributions can be characterised as having the same characteristic function , barring a change in the parameters: with except for , in which case . the parameter is a measure for symmetry, . this parametrization implies that to fit the behaviour of a proton pencil beam at a given depth we need to determine three parameters: , , and . because the majority of stable distributions do not possess an analytic form, it is difficult to use the classical approach to fit the data. indeed, the fitting of stable distributions is a subject of scientific research in its own right. we opted to use a maximum likelihood estimation based on pre-computed spline approximations. in essence it selects the distribution that best matches the pre-computed ones. once the parameters are determined, the characteristic function is calculated in complex space and the actual stable distribution is generated using an inverse fourier transform, a straightforward methodology also proposed by mittnik et al. the resulting curve could then be compared with the monte carlo simulation. to fit the normal distribution based algorithms, a classical methodology using a least-squares fit of the analytical function based on a levenberg-marquardt algorithm was used. both the stable and gaussian fits were compared to the monte carlo simulation using pearson's measure. the value for degrees of freedom then yields the probability that the fitted distribution is different from the simulated one, with being the number of parameters in the fit: for the stable parametrization ( and ), for a double gaussian ( , , and ). we denote the values for the normal and stable distributions as and , respectively.
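a sketch of the profile-fitting step, using scipy's levy_stable density and a plain least-squares objective in place of the spline-based maximum-likelihood estimator used here; the synthetic 'monte carlo' profile, the parameter bounds and the pearson-type comparison are illustrative assumptions.

```python
import numpy as np
from scipy.stats import levy_stable, norm, chi2
from scipy.optimize import curve_fit

# synthetic lateral profile standing in for a monte carlo scored one
r = np.linspace(-60.0, 60.0, 241)                           # lateral position in mm
true = levy_stable.pdf(r, 1.6, 0.0, loc=0.0, scale=4.0)     # symmetric stable, alpha = 1.6
rng = np.random.default_rng(4)
profile = true * (1.0 + 0.02 * rng.normal(size=r.size))

def stable_profile(x, alpha, scale, amp):
    return amp * levy_stable.pdf(x, alpha, 0.0, loc=0.0, scale=scale)

def gauss_profile(x, sigma, amp):
    return amp * norm.pdf(x, loc=0.0, scale=sigma)

ps, _ = curve_fit(stable_profile, r, profile, p0=(1.5, 5.0, 1.0),
                  bounds=([0.5, 0.1, 0.0], [2.0, 50.0, 10.0]))
pg, _ = curve_fit(gauss_profile, r, profile, p0=(5.0, 1.0))

def pearson_chi2(model, n_par):
    # loose pearson-style comparison of fitted and "simulated" profiles
    stat = np.sum((profile - model) ** 2 / np.maximum(model, 1e-12))
    return stat, chi2.sf(stat, df=r.size - n_par)

print("stable :", ps, pearson_chi2(stable_profile(r, *ps), 3))
print("gauss  :", pg, pearson_chi2(gauss_profile(r, *pg), 2))
```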
for all pencil beams with nominal energy ( ) the centrally located transversal distribution is extracted at all available depths ( ). subsequently, the normalised stable distribution parameters and are determined using the above-mentioned fitting procedure. in a first approximation we consider the pencil beams to be circularly symmetric. finally, the total integral dose at each depth is also calculated. this procedure yields three parameters which vary as a function of depth and nominal beam energy, allowing us to calculate the dose distribution at any depth in a homogeneous medium. we propose a methodology to determine the beam characteristics of intermediate energies from two provided beam characterisations. let , , and be the parameters fully describing a proton pencil beam with nominal energy . then it is possible to calculate the parametrization of an intermediate energy by interpolation of the parametrization of energies and , disregarding a scaling in the depth parameter which depends on the range of the given energy. let , : where: and with being the range of a proton in the medium under consideration. using the methodology of intermediate morphing, we determine the minimal number of beams we need to fully characterize in order to obtain a full set of data across all nominal energies. we choose a threshold of 1% error for determining the width of the beam ( ) and 3% for the shape (tailedness) ( ). the total deposited energy needs to be correct to 1% in dose and 1 mm in position. to allow testing of our parametrised beam model with clinical patient plans, we implemented the stable distribution dose calculation algorithm in an open source treatment planning system, matrad (dkfz, heidelberg, de). matrad is written in matlab (mathworks, natick, ma) and provides functionality for importing patient data, ray tracing, inverse planning and treatment plan visualisation. the proton dose calculation component was extended to support a beam model described by a stable distribution, in addition to single and double gaussian models. to provide radial symmetry of the beam, a 2d normalisation is required when computing the lateral profile in a plane at distance into the medium. if is the value of the stable distribution that describes the 1d beam profile at a distance from the central axis, the 2d beam profile is described by: where is the distance from the pencil beam central axis and and are the parametrization at depth . is the normalisation required such that the volume under the 2d distribution is unity, and is calculated using the shell formula as: as there is no analytical representation in real space for stable distributions except when equals specific values, numerical computation of this integral is required to provide the normalisation. to increase efficiency, the integral is pre-calculated for each combination of and in the discrete beam parametrization and is interpolated as required within the dose calculation engine. figure [v_variation] shows that varies smoothly within the relevant range of parameter space. the 2d normalisation parameter varies smoothly over the relevant range of the parameter space defined by and , and is therefore interpolated as required within the dose calculation engine. ]
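a sketch of pre-computing the shell-formula normalisation v(alpha, sigma) = 2*pi * integral of r f(r) dr on a small (alpha, sigma) grid and interpolating it, mirroring the strategy described above. the grid ranges, the truncation radius and the use of a symmetric (beta = 0) stable profile are example choices, not the commissioning values.

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.integrate import trapezoid
from scipy.interpolate import RegularGridInterpolator

def shell_norm(alpha, scale, r_max=200.0, n=2000):
    """2d normalisation v = 2*pi * integral of r * f(r) dr, truncated at r_max (mm)."""
    r = np.linspace(0.0, r_max, n)
    f = levy_stable.pdf(r, alpha, 0.0, loc=0.0, scale=scale)
    return 2.0 * np.pi * trapezoid(r * f, r)

alphas = np.linspace(1.2, 2.0, 5)        # example grid, not the commissioning grid
scales = np.linspace(1.0, 10.0, 5)       # mm
table = np.array([[shell_norm(a, s) for s in scales] for a in alphas])
v_interp = RegularGridInterpolator((alphas, scales), table)

def lateral_dose_2d(r, alpha, scale):
    """normalised 2d profile f(r)/v, so that the volume under it is unity."""
    return levy_stable.pdf(r, alpha, 0.0, loc=0.0, scale=scale) / v_interp([[alpha, scale]])[0]

print(lateral_dose_2d(5.0, 1.6, 4.0))
```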
]the parameters required to fully characterise the knoxville beam at all energies were implemented in : namely , , and the integrated dose at a distance along the beam path .this implementation allows complex treatment plans to be generated .figure [ fit1]a ) shows the lateral dose deposition at a depth of 20 cm for a proton pencil beam with nominal energy of 226.08 mev with two parameterizations : the appropriate stable distribution and a double gaussian . from visual inspection of the graph ,it is clear that the stable distribution provides a better fit to the beam profile at this energy and depth .the fit of each parameterization is quantified by calculating the -values , yielding for a single gaussian distribution ( not plotted ) , for the stable distribution , and for a double gaussian .this corresponds to a probability of 0 , 1 , and 1 , for the lateral dose distributions to be represented by the respective fits .figure [ fit1]b ) plots the at each calculated lateral point in the beam profile where a value of 0 is a perfect fit at that point .this graph shows that each of the three distributions provides an adequate representation of the dose close to the centre of the beam . for the gaussian distribution ,the for each point quickly deviates substantially , showing that this distribution does not represent the range behaviour adequately beyond mm .the double - gaussian fit provides a good estimation of the beam profile to a distance of mm , however it becomes clear that there is a systematic underestimation of the dose faraway which increases the .furthermore , this parameterization requires the most variables to describe the system .the stable distribution provides the best fit of the profile of a proton pencil beam at this energy and depth .the effectiveness of these systems increases the number of variables needed to describe the system and depends on the region of interest ( i.e. the size of the region taken into account to measure the tail contributions ) .the behaviour of a proton pencil beam , as commissioned at the provision facility can be parametrised at a given depth and for a specific nominal energy using two parameters from the stable distribution fit : , describing the tail of the distribution and providing the width .these parameters provide a normalised distribution .a final parameter is the integral dose deposited at that depth ( fig .[ parametrization]c ) ) .the parameter reflects the increased contribution of interactions with longer range , most likely from scattered protons .the contribution diminishes due to two factors : 1 ) the decrease of generated secondary protons due to the lower energy of the primary protons , and 2 ) the decrease in energy of the secondary protons .figure [ interpol ] shows the methodology interpolating the data from two energies to generate data for a third beamlet . in the remaining figures we calculate the maximal error of the parametric representation . due to the non linearity of the parameters behavior as a function of energywe expect that linear interpolation is useful only in a limited energy range .indeed , figures [ interpol]b ) and c ) , show that the parameter is most sensitive and increases deviates to more than 1% if the interpolated energies are more than 20mev apart in nominal energy .the is not very sensitive to interpolating distance as the curve is relatively noisy .top figure : predicting the next using two outer source curves . 
Left bottom: the variation of $\gamma$ is the limiting parameter; a 1% error equates to an error of 0.01 mm in the estimation of the width. Right bottom: the $\alpha$ parameter does not change greatly, as it is dominated by the curve noise.

The alpha-stable parametrization has been successfully incorporated into the matRad open source treatment planning system. Calculation of the alpha-stable distribution is performed using either a fast, parallel C/C++ library, _libstable_, if available, or a native MATLAB implementation otherwise. Using the C/C++ library, calculation of a complete pencil beam on a 200x200x200 3 mm cube takes 20-45 seconds, depending on beam energy, on an Intel Xeon E5-2670 based workstation with ten 2.5 GHz cores.

The dose distribution from a single proton beam spot of 120 MeV was calculated on a homogeneous water phantom in matRad, RayStation and FLUKA. The depth-dose curve and beam profile across the Bragg peak are shown in figure [raystation_comparison] (caption: comparison of matRad and RayStation calculations of a 120 MeV proton pencil beam dose distribution). Figure [matrad_pdd] shows the central axis depth-dose depositions for selected energies from 100.32 to 226.08 MeV from the FLUKA simulations compared to the distributions re-calculated using matRad (caption: original FLUKA simulated central axis beam dose depositions for selected energies, blue, and re-calculated in matRad, red). The matRad calculated dose distributions use a different grid size and spacing to the FLUKA data, specifically 200x200x200 3 mm voxels, demonstrating that appropriate lookups as well as interpolation of data are being performed. The small differences seen are due to the fitting of the alpha-stable distributions to the FLUKA data.

The use of stable distributions provides a way of calculating the dose in a medium for a scanned pencil beam proton therapy machine that lends itself to implementation on GPU-type architectures. The calculation in Fourier space can be done quickly, and libraries are available to do this on such processors. Alternatively, it is possible to directly estimate the integral provided by the inverse transform, yielding for the symmetric, zero-centred case

$$ f(r;\alpha,\gamma) = \frac{1}{\pi}\int_0^{\infty} e^{-(\gamma k)^{\alpha}} \cos(kr)\, \mathrm{d}k . $$

This can be numerically evaluated using a Gaussian quadrature method, which is computationally faster than a fast Fourier transform; a sketch of this evaluation is given below. Although Monte Carlo type dose calculators are becoming increasingly available, the use of an analytical alternative is interesting if exhaustive searches over treatment plans are being performed.
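A minimal sketch of the quadrature evaluation mentioned above, for the symmetric, zero-centred case (beta = 0), written in Python/NumPy rather than in the C/C++ libstable or MATLAB implementations; the truncation point of the k-integral is a heuristic choice.

```python
import numpy as np

def stable_pdf_symmetric(x, alpha, gamma, n_nodes=96):
    """f(x) = (1/pi) * int_0^inf exp(-(gamma*k)^alpha) * cos(k*x) dk,
    evaluated with Gauss-Legendre quadrature on a truncated k-range."""
    k_max = 12.0 / gamma                      # exp(-(gamma*k)^alpha) ~ 0 beyond this
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    k = 0.5 * k_max * (nodes + 1.0)           # map [-1, 1] -> [0, k_max]
    w = 0.5 * k_max * weights
    x = np.atleast_1d(x)[:, None]
    integrand = np.exp(-(gamma * k) ** alpha) * np.cos(k * x)
    return (integrand @ w) / np.pi

# Sanity check: alpha = 2 reproduces a Gaussian with variance 2*gamma^2.
xs = np.linspace(-10.0, 10.0, 5)
print(stable_pdf_symmetric(xs, alpha=2.0, gamma=3.0))
print(np.exp(-xs**2 / (4.0 * 3.0**2)) / np.sqrt(4.0 * np.pi * 3.0**2))
```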
providing a parametrization of this type reduces the number of parameters to a more manageable level , allowing a better insight in the physics of proton therapy planning using scanned pencil beams .it becomes clear , for instance , that the scattering properties of combined scanned beams are different depending on the depth of the treated volume , and therefore different dose characteristics and maybe even biological effects can be expected .this because there might be variations of let depending on the contributions of the halo at various depths .it also provides a method to describe issues like changes in medium in terms of the used parameters . in a forthcoming studywe have already established that not all parameters behave in the same manner as a function of depth combined with changes in material ( data not shown ) . in this current studywe considered the pencil beam to be isotropic .in practise it is possible that that is not the case depending on the geometric properties of the machine used to generate the pencil beams .for instance , many proton therapy facilities will use spatially sequential magnets to bend the beams in the directions perpendicular to the beam axis in two perpendicular directions to each other .this results in an ellipsoid spot size due to a different virtual source positions .the implication is that we have to find a way to combine different stable distributions . in the case of the normal distributionthis is well understood , i.e. combining the variations depending on the mixing angle .combining generalized stable distributions is less straightforward , but still fairly trivial in the case where the parameter is constant . investigating eq .[ cf_short ] shows that the combination of two distributions with the same and scale parameters , and yields a new stable distribution with scale parameter : fortunately , we have seen that the parameter depends only on the amount of material that has been passed . as a resultthe value of is the same in every direction of the plane .combining stable distributions with different is not straightforward because as far as we know the resulting distribution is not stable and is still an area of mathematical research .we have also limited this study to that of symmetric zero centered pencil beams . while the zero centering is easily resolved by a well chosen coordinate transformation ,the asymmetry of a pencil beam is not resolvable in an easy way . indeed , in some cases the treatment beams are not symmetric , specifically if collimation is used and pencil beams near the collimator jaws need to be considered . in that casethe parameter is not zero and the full expression as outlined in eq .[ cf_long ] needs to be evaluated .this is subject of further research by our group . 
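The scale-combination rule quoted above — two independent symmetric stable variables with the same alpha and scales gamma_1, gamma_2 sum to a symmetric stable variable with scale gamma_c = (gamma_1^alpha + gamma_2^alpha)^(1/alpha), because their characteristic functions multiply — can be checked numerically. The sketch below assumes Python with SciPy and compares quantiles of summed samples against the predicted law.

```python
import numpy as np
from scipy import stats

alpha, g1, g2 = 1.6, 3.0, 4.0
g_comb = (g1 ** alpha + g2 ** alpha) ** (1.0 / alpha)   # predicted combined scale

rng = np.random.default_rng(1)
n = 200_000
z = (stats.levy_stable.rvs(alpha, 0.0, scale=g1, size=n, random_state=rng)
     + stats.levy_stable.rvs(alpha, 0.0, scale=g2, size=n, random_state=rng))

# Central quantiles of the summed sample vs. the predicted stable law.
qs = [0.25, 0.50, 0.75, 0.90]
print("empirical :", np.quantile(z, qs))
print("predicted :", stats.levy_stable.ppf(qs, alpha, 0.0, scale=g_comb))
```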
In theory, the methodology we have shown here could be extended to other charged particles and to photons. This is an area of further research by our group.

We have demonstrated that alpha-stable distributions are suited to describe charged particle pencil beams in a medium because they provide an accurate and efficient parameterization. We have shown how this parameterization of the pencil beam allows dose distributions from intermediate energies to be interpolated through intermediate morphing. Furthermore, we have implemented the alpha-stable parameterization in a treatment planning system.

Frank Van den Heuvel, Francesca Fiorini and Ben George gratefully acknowledge core support from Cancer Research UK (CRUK) and the Medical Research Council (MRC). Author contributions: Frank Van den Heuvel: concept, fitting of the stable distributions and editing of the article; Francesca Fiorini: Monte Carlo simulations and co-writing of the article; Niek Schreuder: measurements to verify the pencil beam Monte Carlo calculations and co-writing of the article; Ben George: implementation of the algorithm in matRad and co-writing of the article.

References: http://stacks.iop.org/0031-9155/60/i=14/a=5627 ; http://stacks.iop.org/0031-9155/50/i=3/a=011 ; doi:10.3389/fonc.2015.00281 ; doi:10.1016/j.ejmp.2015.05.004 ; doi:10.1038/nature09116 ; doi:10.1016/S0895-7177(99)00110-7 ; doi:10.1007/s11222-016-9691-9.

In this appendix we outline the notion of stable distributions, provide some definitions and show that the characteristic representation parameterizing the quantities $\alpha$ and $\gamma$ indeed represents all stable distributions. The text is extensively based on the treatise by Uchaikin and Zolotarev and is provided as a synthesis and guideline rather than an original scientific contribution; the original work is much more extensive and dense.

We start out by quoting the law of large numbers, which states that the estimated mean of a sample from a random variable tends to the mean of the distribution when enough samples are taken. It is best known in the form proposed by Bernoulli in the 18th century. Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed random variables with mean $a$; then

$$ \frac{1}{n}\sum_{i=1}^{n} X_i \longrightarrow a \qquad (n \to \infty), $$

since

$$ P\!\left( \left|\frac{1}{n}\sum_{i=1}^{n} X_i - a\right| > \varepsilon \right) \longrightarrow 0 \quad \text{for every } \varepsilon > 0, $$

i.e. the difference converges to zero in probability as $n \to \infty$, which provides the reformulation of Bernoulli's law of large numbers. A more sophisticated approach considers the random variables as functions $X_i(\omega)$ on the interval $[0,1]$, where $\omega$ is an instantiation of the experiment yielding 0 or 1. In that case the strong law of large numbers can be replaced by a weaker version, which becomes a degenerate function if infinitely many samples are taken; but, more importantly, before reaching the degenerate condition the sum tends to the normal distribution, which is the classical form of the central limit theorem.

We also provide the notion of _equivalent_ distributions, $X \stackrel{d}{=} Y$ when both have the same distribution function, as well as the notion of _similar_ distributions, $Y \stackrel{d}{=} aX + b$, which provides the possibility of introducing a linear transformation of the given distribution.
using expression [ equival ] and [ similar ] it is clear that for similar distributions and on an infinitesimal interval : therefore , the same applies to the distribution functions ( cumulative of the density function ) : for example the normal distribution provides the following expression : an interesting property arises when , instead of looking at the variables themselves , we now investigate how sums of these variables ( summands ) behave .if we have two normal distributions and with variances and then it is easy to see that , using expression [ scaling ] , we get : by setting and with an arbitrary summands , , we obtain a well known result : or more interestingly expressed as : it is the generalization of this property that leads to the notion of stable distributions by allowing arbitrary values of and .let , , be independent , identically distributed random variables , and let there exist constants and such that for some function which is not degenerate is not a step function ]. then follows the stable law . in this section ,we show how we move from the expression in [ clt_stable ] to the parameterization that we have been using denoting the type of stable distribution based on the single parameter .before we move on , we narrow the definition of stable distributions to that of _ strictly _ stable distribution by setting , thus : defining the distribution of the sum . by calculating the variance of the distribution [ strictly ]we obtain : if and then there is only one possibility which reverts to the result obtained for the normal distribution . with the notion of summands, we can now use them to further extend the properties to the general case of summing strictly stable random variables . we now choose to limit ourselves to summands with terms keeping in mind that , which we can generalize to any pair , we can rewrite the expression in ( [ 2ksum ] ) to read : or in much shorter notation and using the expression in defining strictly stable random variables[strictly ] : , we obtain : transforming into \ln b_2 = \ln n^{(\ln b_2)/\ln 2}\ ] ] or we can now repeat this process with , and using induction , yielding , , and , bunching respectively 3 , 4 and summands . in general the following expression is valid : for arbitrary values of . from theselast two formula we conclude that . by inductionwe see that there is a single for all these stable distributions , and that the scaling factors follow the following law : [ [ characteristic - function - of - symmetrical - zerocentered - stable - distribution ] ] characteristic function of symmetrical zero centered stable distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ the characteristic function , c.f ., of a distribution can be defined as the fourier transform of the probability density function , p.d.f ., of that distribution .let be a p.d.f .of a set of random variables , then the c.f . , , is defined as it s expectation value , : from this definition some properties follow immediately : 1 . 2 . 3 . , where denotes the complex conjugate .4 . if is symmetric about zero , 5 . if then the continuous derivative of the c.f . exists and : 6 . if is the sum of independent random variables , , , , then : 7 .any is a uniformly continuous function .( inversion theorem ) any distribution function is uniquely defined by its c.f .if and are some continuity points if , then the inversion formula states that the principle advantage of using the c.f .is that the c.f . 
of a sum of independent random variablesis equal to the product of the c.f.s of the summands : if we take the logarithm ( obtaining the second characteristic : ) , we find that : this is important as it allows us to assess the summation of a large number of independent random variables without evaluating multiple integrals .for this reason we cite the continuity theorem : ( continuity theorem ) let be a sequence of c.f.s and let be a sequence of corresponding distribution functions . if as , for all and is continuous at , then is the c.f . of a cumulative distribution function , and the sequence weakly converges to , .the inverse is also true : if and is a distribution function , then , where is the c.f .of the distribution function . to gain some insight in how to perform this, we can look at two well known stable distributions to find a way forward .the distributions under consideration are the normal distribution and the cauchy distribution .the calculation of the characteristic function for these distributions is well known and they also form part of the stable distribution , in the form representing the stable distribution density : note that the traditional form of the density function for the normal distribution is slightly different : this observation allows us to generalize for any symmetric distributions .let be such a distribution with an arbitrary parameter .we remind that keeping in mind the property as elucidated in equation [ conv_prop ] using the expression for the second characteristic the results obtained earlier when investigating the normal- and the cauchy distribution lead us to propose a solution of the form this satisfies the previous expression with and an arbitrary complex valued , which we choose to be ,\quad \lambda , c_1 , \text { real numbers}\ ] ] to find we use property ( 3 ) of the chacaracteristic function , the link between conjugate characteristic function and negative estimates : , \quad -\infty < k<\infty , 0<\alpha\leq 2\ ] ] with which is a trick to rewrite the equation as , in such a way that we are explicitly splitting the expression in a real and an imaginary part by a good choice of the constant .we also have not specified what form the function takes , we do know that it will depend on the parameter as well as provide a measure for the asymmetry of the distribution , should that be present .later we will attribute that to a parameter .the constant is an arbitrary real number and can serve as a scaling factor , which can be renormalised to , without loss of generality .this implies that the full expression of a stable distributions characteristic function is of the form : taking into account that the characteristic function of a symmetric stable function is real valued due to property ( 3 ) of the characteristic functions : and the characteristic function for any stable function becomes : with as a scaling factor .
* purpose : * to introduce and evaluate the use of stable distributions as a means of describing the behavior of charged particle pencil beams in a medium , with specific emphasis on proton beam scanning ( pbs ) . * methods : * the proton pencil beams of a clinically commissioned proton treatment facility are replicated in a monte carlo simulation system ( fluka ) . for each available energy the beam deposition in water medium is characterized by the dose deposition . using an alpha stable distribution methodology each beam with a nominal energy is characterized by the lateral spread at depth : and a total energy deposition . the beams are then described as a function of the variation of the parameters at depth . finally , an implementation in a freely available open source dose calculation suite ( matrad , dkfz , heidelberg , germany ) is proposed . * results : * quantitatively , the fit of the stable distributions , compared to those implemented in standard treatment planning systems , are equivalent . the efficiency of the representation is better ( 2 compared to 3 and more parameters needed ) . the meta parametrization ( i.e. the description of the dose deposition by only providing the fitted parameters ) allows for interpolation of non measured data . in the case of the clinical data used in this paper , it was possible to only commission 1 out of 5 nominal energies to obtain a viable data set . * conclusions : * alpha stable distributions are intrinsically suited to describe charged particle pencil beams in a medium and can be easily implemented in existing treatment planning systems . the use of alpha - distributions can easily be extended to other particles .
management of the environment and inventory of natural resources often requires appropriate remote sensing data acquired at specific times on earth locations .but quite often , good resolution optical images have large damaged areas with lost information due to clouds or shadows produced by clouds . the general statistical approaches for imputation needs to consider the data loss in one of three categories : missing at random data , completely missing at random data , ( meaning that the missing data is independent of its value ) , and non - ignorable missingness , allison(2000 ) .the last case is the most problematic form , existing when missing values are not randomly distributed across observations , but the probability of missingness can not be predicted from the variables in the model .one approach to non - ignorable missingness is to impute values based on data otherwise external to the research design , as older images , little et al ( 2002 ) .merging information acquired at the same time from two or more sensors ( with possible different resolution ) is the core of data fusion techniques .their historical goals are to increase image resolution , sharpening or enhancement of the output , tsuda et al ( 2003 ) ; ling ( 2007 ) and pohl ( 1998 ) , and their major problem to overcome is the co - registration of the different sources to merge , blum et al ( 2005 ) . in the last decade several adaptations of data fusion techniques for information recovery have been devised to mitigate the effect of clouds on optical images , a classical problem , since 50% of the sky is usually covered by light clouds . in le hegarat - mascle et al .( 1998 ) contextual modeling of information was introduced in order to merge data from sar ( synthetic aperture radar ) images into optical images , to correct dark pixels due to clouds .arellano ( 2003 ) used wavelet transforms first for clouds detection , and then to correct the information of the located clouds pixels by merging older image information with a wavelet multiresolution fusion technique .rossi et al ( 2003 ) , introduced a spatial technique , krigging interpolation , for correction of shadows due to clouds .shadows and light clouds do not destroy all information , only distort it , but dark clouds or rain clouds produce non - ignorable missingness , a harder problem for interpolation techniques .another interesting example of incomplete data is the damaged imagery provided by the landsat 7 etm+ satellite after its failure on may 2003 .a failure of the scan line corrector , a mirror that compensates for the forward motion of the satellite during data acquisition , resulted on overlaps of some parts of the images acquired thereafter , leaving large gaps , as large as 14 pixels , in others .about 22% of the total image is missing data in each scene . in figure [ zcent ]we can see two parts of the same landsat 7 image , one with missing information and another almost without loss .[ zcent ] the landsat 7 science team ( 2003 ) , developed and tested composite products made with the damaged landsat 7 image and a database of older landsat images , using local linear histogram matching , a method that merges pieces of older images that best matches the surroundings of the missing information region , scaramuzza et al 2004 , usgs / nasa 2004 , ( united states geological survey/ national aeronautics and space administration ) . 
commonwealth of australia ( 2006 ) reported that the composite products appear similar in quality to non damaged landsat 7 images , but masked environment changes , an usual problem in data fusion when one of the sources is temporally inaccurate .zhang and travis ( 2006 ) developed a spatial interpolation method based on krigging , the krigging geostatistical technique , for filling the data gaps in landsat 7 etm+ imagery , without need of extra information .they compare their method with the standard local linear histogram matching technique chosen by usgs / nasa .they show in their case study that krigging is better than histogram matching in targets with radical differences in radiance , since it produces an imputation value closer to the values of the actual neighbors .a drawback of spatial techniques is that they rely on a neighborhood that could be completely empty of real information .the algorithms start imputation in the gap contour , where there are many good pixels , and use imputed and good values to create the next pixel value . in center pixels of large gaps , algorithms based only on damaged images will use previously imputed values to generate the next imputation , degrading visual quality and increasing interpolation error . in this paper , we propose three methods based on data fusion techniques for imputation of missing values in images with non - ignorable missingness .we suppose there is available temporally accurate extra information for the gap scenes , produced by a lower resolution sensor .this is not a very restrictive hypothesis , since there are many satellite constellations that can provide temporally accurate data with different cameras at different resolutions .this study proposes three different alternatives for filling the data gaps .the first method involve merging information from a co - registered older image and a lower resolution image acquired at the same time , in the fourier domain ( method a ) .the second used a linear regression model with the lower resolution image as extra data ( method b ) . the third method consider segmentation as the main target of processing , andpropose a method to fill the gaps in the map of classes , avoiding direct imputation for the segmentation task ( method c ) .radiometric imputation is later made assigning a random value from the convex hull made by the neighbor pixels within its class .all the methods were compared by means of a large simulation study , evaluating performance with a multivariate response vector with four measures : q , rmse , kappa and overall accuracy coefficients .two of these performance measures ( kappa and overall accuracy ) are designed for the assessment of segmentation accuracy .the other two , q and rmse , measure radiometric interpolation accuracy .difference in performance were tested with a manova mixed model design with two main effects , imputation method and type of lower resolution extra data , and a blocking third factor with a nested sub - factor , introduced by the real landsat image and the sub - images that were used .method b proved to be the best for all criteria .we are considering images acquired from the same geographic target , with fixed number of bands , and 256 levels of gray per band . 
damaged and older images have the same support , lower resolution images have different supports , and we suppose that there are pixels in the damaged image for each pixel in the lower resolution image .let be the damaged image , a lower spatial resolution image , and , an older image acquired with the same sensor that .let be the gap to be filled , i.e. the set of pixels of with missing values .the goal is to input values on using available good data from in a small neighborhood of each pixel , the values of and eventually , the values of .the first step in processing is the re - sampling of the lower resolution image in order to match the support of the three images .this is done replacing each pixel of by a matrix of size with constant value .imputation is done in each band separately with the same algorithm , thus the methods descriptions consider the images as one band only .the image does not need to be co - registered with reference to the damaged image , since they have been acquired by the same sensor , but calibration is indeed necessary to increase merging accuracy .we suppose a gaussian calibration have been made on per column , taking as pivots the values in the matched column of .let be the composite image , output from the imputation method a. we define values in the gap as a mixture of high frequencies of the older image and low frequencies taken from the actual but lower resolution image . with and the ideal high pass and low pass fourier filters .the filters are regulated by the coefficient , the bigger is the larger the influence of in the imputation .high pass filters are related to detail , edges and random noise , and low pass filters to structural information .therefore , method a takes structural information from the low resolution image and detail from the older one .imputation will be made with the only help of , a temporally accurate image with lower resolution .we have expanded the lower resolution image to match the support of , replacing each cross grained pixel by a block of pixels with the same radiometric value .each one of the constant blocks of the expanded image has a matched block in the damaged image , that could be valid ( having all the information ) , or non valid , ( with some loss ) .we will follow a time series approach now .let think we have data collected month to month along several years , and we have a missing january data .it is reasonably to impute that value using a regression model that only involves other january data , and extra data collected on summer that year . in our case, we have missing information on a location inside a block , we may impute that value using a regression model that only involves data in the same position inside the other blocks of the image ( january data ) , using as regressor variable the values of ( summer data ) . with .the coefficients and are estimated by ordinary least squares using only valid blocks. then where and we define the imputed image as in the pixels with no missing information and the value predicted by the regression in each damaged pixel . remote sense images are used in a wide range of applied sciences . for many of them , as agricultural and experimental biology , or environmental science , all useful information is contained on a class map , where the classes are characterized by special features under study .kind of crop , forested or deforested areas , regions with high , medium and low soil humidity , are examples of such features . 
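Before turning to the class-map approach of method C, the following is a minimal sketch of method B as described above: one ordinary least-squares regression per position inside an s x s block, with the low-resolution block value as the regressor. It assumes Python with NumPy (not the ENVI environment used later in the study), takes a boolean validity mask marking the non-missing pixels, uses for simplicity the blocks where that particular position is observed, and leaves untouched any position for which too few blocks are available.

```python
import numpy as np

def impute_method_b(damaged, valid, lowres, s):
    """Method B (sketch): impute gap pixels from a per-block-position regression.

    damaged : (h*s, w*s) image with gaps (one band)
    valid   : same shape, True where the pixel carries real data
    lowres  : (h, w) temporally accurate lower-resolution image
    s       : high-resolution pixels per low-resolution pixel along each axis
    """
    h, w = lowres.shape
    out = damaged.astype(float)                    # copy; input left untouched
    dmg = out.reshape(h, s, w, s)                  # [block_i, pos_i, block_j, pos_j]
    msk = np.asarray(valid, bool).reshape(h, s, w, s)
    x = np.column_stack([np.ones(h * w), lowres.astype(float).ravel()])

    for i in range(s):                             # position inside the block
        for j in range(s):
            y = dmg[:, i, :, j].ravel()
            ok = msk[:, i, :, j].ravel()
            if ok.sum() < 2:                       # not enough blocks to fit
                continue
            beta, *_ = np.linalg.lstsq(x[ok], y[ok], rcond=None)   # OLS fit
            pred = (x @ beta).reshape(h, w)
            gap = ~msk[:, i, :, j]
            dmg[:, i, :, j][gap] = pred[gap]       # fill only the missing pixels
    return out
```

In practice each band would be processed separately with the same routine, after the Gaussian calibration of the older image described above.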
in this section we will change our point of view by thinking on class maps instead on radiometric imageslet suppose we have our damaged image , and a temporally accurate image with lower resolution , and possible different number of radiometric bands .this is an advantage over radiometric imputation , which need the same spectral properties in both images .we also suppose that a map of classes has been drawn from the damaged image , using a non supervised method like means , with different classes .the class of missing information pixels is another class , called class 0 .the goal is to assign the pixels without radiometric information to one of the classes , and generate a radiometric value for them randomly from the selected class .there are two main steps in this method 1 .an enhancement of the initial classification , that could be made or not 2 .imputation of pixels in the zero class .[ [ initial - enhancement ] ] initial enhancement + + + + + + + + + + + + + + + + + + + automatic classification is a difficult task , and the imputation method under study heavily depends of the accuracy of the initial map of classes .so it is important to pay special attention to the coherence of the classes .we suggest a set of steps to improve class homogeneity as follows 1 .given construct a map of classes using means , and assign the zero label to the missing information regions .2 . given the class image , detect the pixels with non homogeneous neighborhood , i.e , pixels that have no neighbor pixels on the same class .call this set .3 . for each pixel in , verify its label using a forward mode filter , and a backwards mode filter .the filters with give a label that is consistent with the mode of the labels in a ( forward or backward ) neighborhood .1 . if both filters give the same label , update the label of the pixel to match this one .if the labels are not the same , maintain the original label , and put the pixel in a new set call .4 . for each pixel in that do not has label zero , we use again radiometric information for updating the label .1 . take the one step neighbors of the pixels , and compute the arithmetic mean for each class present in the neighborhood .2 . update the label of the pixel by the label of the class whose arithmetic mean is closest ( in euclidean distance ) to the radiometric value of the pixel .it is important to note that after this process , some of the pixels of the zero class may have been classified into a real class , only using contextual information .the other classes should have more defined borderlines . [[ class - zero - imputation ] ] class zero imputation + + + + + + + + + + + + + + + + + + + + + we suppose now that we have a stable map of classes , with missing information in class zero , and auxiliary information provided by a temporally accurate lower resolution radiometric image .if is a pixel in class zero , it does not have radiometric information .we will impute a label class on it with the following algorithm 1 .let be the smallest neighborhood of that have a pixel with non zero label .2 . let be the arithmetic mean of the pixels that have positions in class in .the label of is the label of the class that makes the smallest euclidean distance from to each . 
after this process , all pixels have a class label , but pixels in the gaps have not been imputed with radiometric information yet .a pixel s radiometric value is assigned randomly from the values of the convex hull generated by the pixels of a small neighborhood within its class .impartial imputation assessment is only possible when ground truth is available .complete simulation of the three types of images involved in the methods ( old , damaged and lower resolution ) will introduce errors beyond the ones produced by the imputation methods , degrading the quality of the assessment .for this reason , good quality landsat 7 etm+ imagery that have older matched imagery available were selected , and strips similar to the gaps in the slc - off etm+ imagery were cut manually , guaranteeing ground truth to compare with and co - registration between the images .the four landsat 7 etm+ images selected were quit large , having many different textures in them , like crop fields , mountains and cities , that challenge the imputation methods differently .landsat 7 etm+ had a lower resolution sensor in its constellation , the mmrs camera from the sac c satellite , whose imagery could be used as extra data . but to control also co - registration problems , lower resolution imagery were simulated with three resolution reduction methods ( rrm ) , by block averaging the ground truth , the congrid method from envi software and shifted block averaging .block averaging is a crude way of reducing resolution , the congrid method reduces the blocking effect smoothing the output , giving better visual appearance , but block averaging allows to shift the blocks in a controlled fashion , simulating lack of co - registration . in figure [ examples ] we see an example of four matched images , good , damaged , older and lower resolution by block averaging .we can see a bright spot in the center of the image , which is still present in the damaged one and the lower resolution one , but it is not present in the older one .database construction details are giving in the subsection [ database ] .the performance measures ( rmse , q , kappa and overall accuracy ) were modeled as a response vector in a manova mixed model with main effects ( imputation method , resolution reduction method ) , and random nested effects ( image and sub - image ) . subsection [ measures ] introduces the performance measures and subsection [ anova ] includes a detailed description of the manova design . following the manova rejection ,simultaneous multivariate comparisons were made by ( bonferroni corrected ) two means hotelling tests , and individual anova results with simultaneous comparisons via fisher lsd tests were studied to determine the best method , and the influence in performance of co - registration problems .we selected four landsat 7 etm+ images with six bands each , acquired before the failure of the slc mirror .these images have also companion good quality landsat images acquired approximately an year before . in the table [ tab ] , we see some information about the images selected , courtesy of conae - argentina . 
Table [tab]: Landsat 7 ETM+ images selected for the simulation study, courtesy of CONAE, Argentina.

We would now like to study how the location parameters cluster, to identify the methods that are significantly different from each other. We have some information from the multivariate Bonferroni-corrected set of Hotelling tests we made, but now we will see how the different variables cluster the methods. We use the Fisher LSD multiple comparison test as the clustering criterion; other multiple comparison tests, such as Tukey LSD and the Duncan test, gave the same type of clustering output. We now study the information contained in table [table6].

Table [table6]: mean performance by imputation method (n = 192 sub-images per method). Methods sharing a cluster letter are not significantly different under Fisher LSD.

Overall accuracy:  C 0.59 (a); A2 0.79 (b); A1 0.79 (b); A3 0.80 (b); B 0.83 (c); C1 0.83 (c)
Kappa:             C 0.36 (a); A2 0.65 (b); A1 0.66 (b); A3 0.66 (b); B 0.71 (c); C1 0.72 (c)
RMSE:              B 18.09 (a); A3 21.62 (b); A2 23.05 (b, c); A1 23.93 (c); C1 39.30 (d); C 42.67 (e)
Q:                 C 0.53 (a); C1 0.59 (b); A2 0.82 (c); A1 0.83 (c); A3 0.83 (c); B 0.85 (d)

When considering the imputation method, methods B and C1 are grouped together by the two measures of classification quality, kappa and overall accuracy; the other two measures, Q and RMSE, separate them into single clusters. The three versions of method A are grouped together by all the measures except RMSE, which considers methods A1 and A3 significantly different. The two versions of method C, C and C1, are distinguished by all the measures. The multivariate simultaneous comparison distinguished all methods except among the versions of method A; even then, method A1 was considered significantly different from method A3. It is interesting to see that this is the way the two radiometric measures group the methods, while the classification-based measures do not distinguish between methods B and C1, or among A1, A2 and A3. To the naked eye, the method B reconstruction in figure [method] is very different from the method C1 reconstruction, and method A1 is sharper than method A3. The multivariate simultaneous comparison agreed with these statements, with an experiment-wise type I error of 0.05. Now we can come back to the profile analysis of section [profiles].
In that section, the means of the performance measures, computed over the sub-images of each image, were plotted as profiles; the highest values of Q, kappa and overall accuracy indicated the best imputation method, and the lowest value of RMSE backed up the same statement. Method B seemed to be the best of them all, but that analysis did not carry any statistical confidence. The simultaneous comparisons made here report method B as significantly different in mean performance from all the others, and table [table6] shows that its mean values of Q, overall accuracy and kappa are the highest of all, and its mean of the error measure RMSE the lowest. These results give confidence to the previous profile analysis.

We are now concerned with the fact that imputation may have reduced performance when the lower resolution image used as extra information is not co-registered accurately. When simulating the lower resolution image, we chose block averaging and the congrid method as basic resolution reduction methods and, as a third method, the block averaged image was shifted slightly (method RRM2), simulating lack of co-registration. Simultaneous comparisons using Fisher LSD (see table [table7]) report that the two unshifted resolution reduction methods are indistinguishable, but RRM2 produced a significantly different mean on all measures, making them worse: in the case of kappa, Q and overall accuracy the means are smaller, and in the case of RMSE the mean is higher.

Table [table7]: mean performance by resolution reduction method (n = 384 sub-images per method; columns are overall accuracy, kappa, RMSE, Q). Methods sharing a cluster letter are not significantly different under Fisher LSD.

RRM 2 (shifted block averaging):  0.74, 0.57, 30.71, 0.69 (a)
RRM 0:                            0.79, 0.66, 27.05, 0.76 (b)
RRM 1:                            0.79, 0.66, 26.58, 0.77 (b)

Regression models are considered successful models for imputation in a wide range of situations across the applied sciences. In this paper we introduced a simple regression model for imputation of spatial data in large regions of a remotely sensed image, method B, which had statistically significantly better performance than two other main methods also adapted from the literature, in the frame of a careful simulation study. Multivariate simultaneous comparisons of all imputation methods agreed with the visual inspection of the reconstructed images. The method B reconstruction shows less contrast between the imputed stripes and the non-imputed regions, but loses sharpness and appears slightly blurred. The method A1 and method A3 reconstructions are different: A1 produces more fine detail, but also much more contrast between imputed and non-imputed regions, while A3 has a smoother appearance, closer to the method B reconstruction, also with less detail. Methods C and C1 were designed to give good segmentations despite the large regions without information, and method C1 does. Pixel radiometric imputation is made by choosing at random a value from the convex hull of the pixels of its class in a small neighborhood; without the help of radiometric extra data, the radiometric imputation becomes quite poor. Method C has a preprocessing step that makes the initial segmentation sharper, which increases the difference with the original image, not only in the imputed regions but everywhere. One of the hypotheses of all the methods was the existence of good, co-registered, temporally accurate imagery of possibly lower resolution.
to test the dependence of the methods of co - registration ,the block averaged image that act as lower resolution extra data was shifted slightly and the performance of all methods diminished .this reduction was observed statistically significant with fisher lsd test of simultaneous comparisons , for each performance measure , and globally with a series of hotelling tests ( bonferroni adjusted ) .the results introduced in this paper are part of the masters thesis in applied statistics of valeria rulloni , at universidad nacional de cordoba .this work was partially supported by pid secyt 69/08 .we would like to thank s. ojeda , j. izarraulde , m. lamfri , and m. scavuzzo for interesting conversations leading to the design of the methods .the imagery used in the simulation section was kindly provided by conae , argentina .imputation methods and performance measures were computed with envi software .statistical analysis was made with infostat , unc , provided by prof .julio di rienzo. little , r. and rubin , d .( 2002 ) . statistical analysis with missing data .john wiley , new york .allison , p .multiple imputation for missing data : a cautionary tale .methods res .28(3 ) ( 2000 ) 301309 .le hegarat - mascle , s. , bloch , i. y vidal madjar , d. ( 1998 ) , introduction of neighborhood information in evidence theory and application to data fusion of radar and optical images with partial cloud cover .pattern recognition , vol 31 : 11 , pp18111823 .rencher , a. ( 2002 ) .methods of multivariate analysis.2d edition .wiley series in probability and statistics .richards , j. a. y jia , x. ( 1999 ) , remote sensing digital image analysis . an introduction . springer .
imputation of missing data in large regions of satellite imagery is necessary when the acquired image has been damaged by shadows due to clouds , or information gaps produced by sensor failure . the general approach for imputation of missing data , that could not be considered missed at random , suggests the use of other available data . previous work , like local linear histogram matching , take advantage of a co - registered older image obtained by the same sensor , yielding good results in filling homogeneous regions , but poor results if the scenes being combined have radical differences in target radiance due , for example , to the presence of sun glint or snow . this study proposes three different alternatives for filling the data gaps . the first two involves merging radiometric information from a lower resolution image acquired at the same time , in the fourier domain ( method a ) , and using linear regression ( method b ) . the third method consider segmentation as the main target of processing , and propose a method to fill the gaps in the map of classes , avoiding direct imputation ( method c ) . all the methods were compared by means of a large simulation study , evaluating performance with a multivariate response vector with four measures : q , rmse , kappa and overall accuracy coefficients . difference in performance were tested with a manova mixed model design with two main effects , imputation method and type of lower resolution extra data , and a blocking third factor with a nested sub - factor , introduced by the real landsat image and the sub - images that were used . method b proved to be the best for all criteria .
hilbert s tenth problem asks for a _ single _ procedure / algorithm to systematically determine if any given diophantine equation has some positive integer solution or not .this problem has now been proved to be turing noncomputable , indirectly through an equivalence mapping to the noncomputable turing halting problem . nevertheless , we have proposed and argued ( with more and more details provided ) in a series of papers for an algorithm for hilbert s tenth problem based on quantum adiabatic computation . among the many valid concerns about our algorithm there are also some misleading objections which are still being spread despite their falsehood . in this short notewe will give an updated overall picture of the algorithm and go through , with pointers provided for further discussions elsewhere , some of the valid concerns as well as the misleading objections to dispel the entrenched but baseless prejudice against our proposed algorithm .a precise statement of the algorithm can be found elsewhere , but in a few words , given a diophantine equation our aim is to obtain ( by physical means or simulations or otherwise ) the ( fock ) ground state of an appropriate ( and bounded from below ) hamiltonian carrying the information of the input diophantine polynomial .we will achieve that by starting with yet another easily obtained ground state of another hamiltonian and by adiabatically deforming the hamiltonian in time to the one carrying the diophantine input above .essential ingredients of the algorithm are : + - ( for an unbounded hamiltonian in a dimensionally infinite space ) will ensure that , provided the adiabaticity and other conditions are satisfied , the initial ground state will turn into the sought - after ground state of with _high probability_. relevant to this point is the paper by tsirelson criticising our algorithm when it first came out in 2001 .this reference has not been published but somehow has been selectively cited by some as an evidence against our algorithm ! those citing , intentionally or not , have either ignored or missed out our reply to tsirelson which was posted only 3 days later on the same arxiv . in that reply ,we clearly pointed out that tsirelson s arguments were simply wrong : had they been right , the qat , and not just our algorithm , would have been mathematically wrong for all those years !+ - is necessary to obtain the adiabaticity condition for the qat with a _finite _ rate of hamiltonian deformation .we have employed certain mathematical theorems and a gauge - like symmetry for the class of time - dependent hamiltonians of the algorithm to show that there is indeed no level crossing .+ - is necessary because the qat is a nonconstructive theorem and does not tell us how the final probability of obtaining the ground state is approached as a function of the inverse of the rate of change of the hamiltonian deformation .such final probability certainly does not increase monotonically but varies for different diophantine equations in a complicated manner . 
to be an algorithm for hilbert s tenthwe need a _ single and universal _ criterion , applicable to any diophantine equation , to identify the sought - after ground state .we have shown that it is only the final ground state that can be obtained with a measurement probability _ more than 1/2 _ with our algorithm .such a probability is our identification criterion for the ground state .( the proof has now been extended to an infinite number of energy levels , not just the two levels of the ground state and the next excited state , and will appear elsewhere . )these analytical results have also been supported by numerical simulations .+ - is inevitable for our quantum algorithm , either in determining the identification probability through relative frequencies in physical measurements or in extrapolating to zero - size time step in some numerical simulations .we will come back to this probabilistic nature in a section below .further modification and extension of our work has also appeared in the literature .the recursive noncomputability of hilbert s tenth problem lies in the fact that a systematic substitution of positive integers ( in some increasing order of magnitude , say ) into a given diophantine equation could only terminate if the equation has a solution ; otherwise , the substitution would go on indefinitely without any termination point .( for this reason , the problem is sometimes also termed semi - computable , as we do not have a general method to determine when the equation has _ no _ solution . ) on the other hand , our quantum algorithm apparently is somehow able to explore the infinite ( of the whole domain of positive integers ) in a finite time ! " how can that be ? and some have even used this as an indication , which is misleading , that there must be something wrong with our algorithm !the logic behind all this is that hilbert s tenth problem belongs to the class of _ finitely refutable mathematical problems _ .that is , for any given diophantine equation it only requires a substitution up to some positive integer to determine whether it has any solution or not , even in the case of _ no _ solution : if the equation has no solution within a certain _ finite _ domain of positive integers , it will not have a solution anywhere else in the whole infinite domain !the noncomputability of hilbert s tenth problem is precisely because we do not have any universal recursive method to determine this finite decisive domain for every diophantine equation . in contradistinction to the recursive mathematics , quantum mechanics can give us the means to determine such finite decisive domain ( in order to make some conclusion in the infinite domain ) through the ( energetically ) ground state .the finiteness of such a domain is encoded and reflected accordingly in the finiteness of the algorithm evolution time and in the finiteness of the energy and occupation number of the final ground state .this is how the paradoxical power of our quantum algorithm can be understood .( in fact , the quantum algorithm provides an alternative proof for the finitely refutable character of hilbert s tenth and related problems . )our quantum adiabatic algorithm is probabilistic in the sense that it can produce a result with a probability , which can be made arbitrarily high , of being the correct result . as such, it has a non - zero probability , even though it can be made arbitrarily small , _ of being incorrect ! 
_ thus , in a way , in order to compute the noncomputable or decide the undecidable with our algorithm , we will have to allow for the possibility of being wrong , even though we can reduce this chance ( at a cost ) .it is this probabilistic nature of the algorithm that renders it outside the jurisdiction of cantor s diagonal arguments employed in the noncomputability proof of the turing halting problem ( and hence of hilbert s tenth problem ) .indeed , the discoverers of noncomputability and of incompleteness in mathematics were very much aware of this power of ( probabilistic ) flexibility , as reflected in their own statements , + - * gdel * : _ ... it remains possible that there may exist ( and even be empirically discovered ) a theorem - proving machine which in fact is equivalent to mathematical intuition , but can not be proved to be so , nor can be proved to yield only correct theorems of finitary number theory . " _+ - * turing * : _ ... if a machine is expected to be infallible , it can not be intelligent .there are several theorems which say almost exactly that . but these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility . "_ probabilistic computation may also be more powerful than turing deterministic computation in yet a different way , contrary to the often misquoted statement that the two are equivalent in terms of computability .pour - el and richards have shown that , surprisingly , the solution at a finite time of the linear wave equation in 3 dimensions can be _noncomputable _ , even with some computable initial condition !this result not only shows the limitation of turing computability even with linear and supposedly simple differential equations like the well - known wave equations , but also implies a hypothetical physical hypercomputation : starting a controlled wave propagation with a precisely prepared initial condition and then measuring the wave configuration at a given time later to obtain ( compute ) the otherwise noncomputable .this would work provided that ( i ) the physical wave propagation so performed is governed by the mathematical wave equation ; and ( ii ) the measurement of the wave configuration could be done with infinite precision .our quantum algorithm not only implies the noncomputability of the solution of the schrdinger equation with a special class of time - dependent hamiltonians ( associated with hilbert s tenth problem ) ; but also provides the procedure , physical or otherwise , for computing the turing noncomputable , albeit in a probabilistic manner . and thanks to the flexibility of this probabilistic character, we may not require an infinitely precise measurement ( but see below ) but still have to assume that the mathematics of quantum mechanics underlies any physical implementation of the algorithm , unless the algorithm is simulated on turing machines . beside ours , there are also other appeals to quantum mechanics as the possible evidence and/or resource for hypercomputation . 
postulated in quantum mechanicsis the inherent and irreducible randomness ; and true randomness is outside the turing computability and thus belongs to the domain of hypercomputation , and so does probabilistic computation in general .turing machines are not capable of generating truly random numbers , but only pseudo - random output with some finite and recursive algorithms .hypercomputation beyond the turing computability is thus not so mythical .at least everyone agrees that random number generation is a kind of hypercomputation , albeit a very special kind .the problem is whether and how randomness can be harnessed for more interesting computation rather than just being random in itself .each series of random numbers generated by a series of quantum measurements is different from each other and is not reproducible ( being random by the very definition ) .what reproducible is not the outcomes but more often is the probabilities for the outcomes .we have pointed out how _ some _ hypercomputation could be performed with the help of a ( quantum ) probability which is a noncomputable real number .the kind of problems that can be solved by this hypercomputation depends on the properties of that real number .can we make direct use of a particular ( irreproducible ) series of random numbers generated quantum mechanically or otherwise ?we hope to report these findings elsewhere .probabilistic computation may be more powerful than turing computation provided a suitable probability measure can be defined . the systematic substitution of positive integers into a diophantine equation , for example, does not lend itself to a definable probability measure since the cardinality of any subset used in the substitution is always finite while that of all the positive integers is not .( recall that there is no recursive way to determine the finite decisive subsets , even when we know that the problem is finitely refutable .were there a way , the problem would not have been turing noncomputable . ) our algorithm , on the other hand , possesses naturally defined probability measures through the use of quantum mechanics . in a physical implementation of the algorithm , the probability comes from the weak law of large numbers in determining some other quantum probabilities through the relative frequencies obtained from repetitive measurements .this determination does not require measurements of infinite precision .however , there have been some concerns that infinite precision is still required in physically setting up the various integer parameters in the time - dependent quantum hamiltonians . while the issue deserves further investigations as surely any systematic errors in the hamiltonians would be fatal , we still are not convinced that such integer parameters can not be satisfactorily set up . in particular , we would like to understand the effects of statistical ( as opposed to systematic ) errors on the statistical behaviour of the spectrum of our adiabatic hamiltonians . 
however , physical implementation is not the only way to carry out the algorithm .the algorithm could also be simulated on turing machines .then how does probability come into such simulations ?it comes in under the necessary extrapolation of the simulation time steps to zero sizes , which is essentially probabilistic .the probability measures here are different from those in the physical implementation but , on the other hand , we do not have the above problem associated with the integer parameters in our hamiltonians .we have listed and dealt with some of the concerns and objections against our quantum adiabatic algorithm for the turing noncomputable tenth problem of hilbert .most of these objections ( none of which actually appears in print ) simply root in the belief , and no more than a belief , that there must be something wrong with the algorithm because it claims to be able to compute the very problem that has been mathematically ( and recursively ) proven to be noncomputable ! closer inspection , however , shows that such a belief is simply false as all the noncomputability proof is only valid within a certain framework , outside of which the quantum algorithm operates and hence entails no contradiction with the known and proven facts .so , what can we expect from the algorithm ?once and if implemented , it could give an answer , _ with any pre - determined probability _ , with respect to the existence of solution of any given diophantine equation . the probability can be raised arbitrarily without bounds ; but the higher it is , the higher the cost of time and resources it takes .the other caveat is that the time it takes for a _ successful _ application of the algorithm is not known beforehand but only as an end product , even though it is always finite .( this successful time is not the time we fix _ a priori _ for each run of the algorithm each run of the algorithm always has an end point .we must then keep increasing this running time until successful . )i am indebted to many of my colleagues for the benefit i have derived through the communication and correspondence with them .i also wish to acknowledge in particular the on - going support from peter hannaford and alan head .this point emerged from a group discussion at mit with enrico deotto , ed farhi , jeffrey goldstone and sam gutmann ( 2002 ) .it goes without saying that if there are mistakes in the interpretation herein , they are solely mine .
we give an update on a quantum adiabatic algorithm for the turing-noncomputable hilbert's tenth problem, and briefly review some relevant issues and misleading objections to the algorithm.
the solution of the numerically ill - posed linear system of equations is considered .we suppose that the matrix has large dimension , may be over or underdetermined , , or , resp . and is severely ill - conditioned ; the singular values of decay exponentially to zero , or to the limits of the numerical precision .vectors and describe the data and model parameters respectively .noise in the data is represented by , i.e. for exact but unknown data .we assume components of are independently sampled from a gaussian distribution and have known diagonal covariance matrix , denoted by . given and aim is to compute an approximate solution for .discrete ill - posed problems of the form may be obtained by discretizing linear ill - posed problems such as fredholm integral equations of the first kind and arise in many research areas including image deblurring , geophysics , etc .the presence of the noise in the data and the ill - conditioning of means that regularization is needed in order to compute an approximate solution for .probably the most well - known method is that of tikhonov regularization in which is estimated as here is the weighted data fidelity term and is a regularization term .the product predicts the data , is a regularization matrix and allows specification of a given reference vector of _ a priori _ information for the model . in is an unknown regularization parameter which trades - off between the data fidelity and regularization terms .the noise in the measurements is whitened when where approximates the standard deviation of the noise in the datum . introducing , , and shifting by the prior information through , yields under the assumption that the null spaces of and do not intersect is explicitly dependent on and is given by is well - known that when the matrix is invertible , the alternative but equivalent formulation uses yielding , with right preconditioned matrix and regularized inverse , although equivalent analytically , numerical techniques to solve and differ . for small scale problems , for example , we may solve using the generalized singular value decomposition ( gsvd ) , e.g. , for the matrix pair $ ] , but would use the singular value decomposition ( svd ) of for , e.g. , as given in [ svd ] .the solutions depend on the stability of these underlying decompositions , as well as the feasibility of calculating .still , the use of the svd or gsvd is not viable computationally for large scale problems unless the underlying operators possess a specific structure .for example , if the underlying system matrix , and associated regularization matrix are expressible via kronecker decompositions , e.g. , then the gsvd decomposition can be found via the gsvd for each dimension separately .here we consider the more general situation and turn to consideration of iterative krylov methods to estimate the solutions of and .forthwith we assume for simplicity of notation and the initial discussion that we solve the system , equivalent to using , weighting of and by , and right preconditioning of by , dependent on the context , i.e. we solve with , and .specifically , we assume that the components of the error are independently sampled from a normal distribution with variance , . 
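for the full problem in standard form (noise already whitened and the regularizer absorbed by right preconditioning, as described above), the tikhonov solution and its filter factors follow directly from the svd of the system matrix. the following is a minimal numpy sketch of that textbook computation, with all names hypothetical; it is meant only to fix notation for the parameter-choice rules discussed later, not to reproduce the paper's implementation.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Standard-form Tikhonov solution
        x(alpha) = argmin ||A x - b||^2 + alpha^2 ||x||^2
    computed from the economy SVD of A; valid for over- or underdetermined A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                      # data coefficients in the left singular basis
    phi = s**2 / (s**2 + alpha**2)      # Tikhonov filter factors
    return Vt.T @ (phi * beta / s)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 80))   # a small underdetermined example
    x_true = np.zeros(80)
    x_true[10:20] = 1.0
    b = A @ x_true + 1e-2 * rng.standard_normal(50)
    for alpha in (1e-3, 1e-1, 1.0):
        print(alpha, np.linalg.norm(tikhonov_svd(A, b, alpha) - x_true))
```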
in principle, iterative methods such as conjugate gradients (cg), or other krylov methods, can be employed to solve large scale problems in the presence of noise. results presented in demonstrate, however, that minres and gmres should not be used as regularizing krylov iterations due to the early transfer of noise to the krylov basis. recently, there has also been some interest in the lsmr modification of lsqr, which is an implementation based on minres. here, because our goal is to understand how to find regularization parameters for a well-studied reduced problem, we use the well-known golub-kahan bidiagonalization (gkb), also known as the lsqr iteration, which has been well studied in the context of projected solutions of the least squares problem and for which the noise regularizing properties of the iteration are better understood. effectively the gkb projects the solution of the inverse problem to a smaller subspace, say of size . applying steps of the gkb on matrix with initial vector , of norm , and defining to be the unit vector of length with a in the first entry, lower bidiagonal matrix and column orthonormal matrices , generated such that , see , for , we define the full , , and projected , , residuals via by the column orthonormality of theoretically, therefore, an estimate for the solution with respect to a reduced subspace may be found by solving the normal equations for the projected problem and then projecting back to the full problem. by the courant-fischer minimax theorem and noting , it is immediate that the eigenvalues of , the ritz values, are interlaced by those of and are bounded above and below by the largest and smallest nonzero eigenvalues of (* ? ? ? * section 5). likewise, the singular values , , of interlace the singular values of , and the singular vectors of are approximated, via and , for the singular value decomposition , given sufficient precision, e.g. . thus, for large enough , dependent on the spectrum of , the normal equations for the projected system of equations inherit the ill-conditioning of the normal equations for , and regularization is also needed for the noise-contaminated projected problem, despite the regularizing impact of the krylov iterations. by the column orthonormality of , we have . explicitly introducing regularization parameter , distinct from in order to emphasize regularization on the projected problem, yields the projected tikhonov problem . the solution of has two equivalent forms; it will be helpful for the theoretical analysis to give both, dependent on whether derived directly for , or without factoring out in . in practice, one uses to find via the svd for , under the assumption that , noting that an explicit solution for is immediately available, see e.g. [svd]. as already observed in , the regularized lsqr algorithm now poses the problem of both detecting the appropriate number of steps and finding the optimal parameter . one method of regularization is simply to avoid the introduction of the regularizer in and find an optimal at which to stop the iteration, equivalently regarding the lsqr iteration itself as sufficiently regularizing. here we assume that regularization is needed, and thus with respect to the regularization, it is evident that the problem may be considered in two ways, namely regularize then project, or project then regularize, e.g.
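a compact sketch of the golub-kahan bidiagonalization with full reorthogonalization is given below, since the later parameter-choice arguments assume that the factorization holds to working precision. the routine and its variable names are hypothetical and follow the standard lsqr recurrence; it is not the `bidiag_gk` code used for the experiments, and breakdown (a zero bidiagonal entry) is not handled.

```python
import numpy as np

def golub_kahan(A, b, k, reorth=True):
    """k steps of Golub-Kahan bidiagonalization of A with starting vector b.
    Returns U (m x (k+1)) and V (n x k) with orthonormal columns, the
    (k+1) x k lower-bidiagonal matrix B with A @ V = U @ B in exact
    arithmetic, and beta1 = ||b|| so that beta1 * U[:, 0] = b."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1
    for j in range(k):
        r = A.T @ U[:, j]
        if j > 0:
            r -= B[j, j - 1] * V[:, j - 1]
        if reorth:                                   # full reorthogonalization
            r -= V[:, :j] @ (V[:, :j].T @ r)
        alpha = np.linalg.norm(r)
        V[:, j] = r / alpha
        B[j, j] = alpha
        p = A @ V[:, j] - alpha * U[:, j]
        if reorth:
            p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)
        beta = np.linalg.norm(p)
        U[:, j + 1] = p / beta
        B[j + 1, j] = beta
    return U, B, V, beta1
```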
.if the regularization is applied for each step of the reduction , the method is regarded as a hybrid of the two techniques , e.g. .the problem of first determining the appropriate size for the projected space is discussed in e.g. and more recently for large scale geophysical inversion in .on the other hand , for a given the krylov subspace for is the same for all , i.e. the krylov subspace is invariant to shifts , which is useful for determining , .although the solutions obtained from the regularize then project , and project then regularize , for a given and are equivalent , ( * ? ? ?* theorem 3.1 ) which also points to ( * ? ? ?* p 301 ) , this does not mean that for given finding for the subspace problem will provide that is optimal for the full problem , . determining to which degree certain regularization techniques provide a good estimate for from the subspace problem estimate , and the conditions under which this will hold , is the topic of this work and is the reason we denote regularization parameter on the subspace by distinct from .we should note that in our discussion we explicitly assume invertibility of the regularization operator in order to allow the right preconditioning of .practically we note that in applying the gkb reduction , applications of forward operations should be performed via solves for the systems to find .typically is sparse and structured , and such solves are efficient given a potential initial factorization for .the gkb also requires forward operations with which are again efficiently implemented via solution solves with system matrix .here we assume that such efficient solves are possible , and do not address this aspect of algorithmic development .we also refer to recent work in in which the most often used differential operators , themselves not invertible , are replaced by invertible versions by use of suitable boundary conditions . when is not invertible , the projected tikhonov problem lacks the immediate preservation of the regularization norm , yielding the subspace problem which unfortunately requires projecting the subspace solution back to the full space via , which is avoided in . on the other handthe regularizer may be achieved , as noted in , by finding the factorization , yielding by the column orthogonality of .the formulae to immediately find for small use in this case the gsvd , see e.g. .having now set the background for our investigation , we reiterate that a main goal of this work is to theoretically analyze in which cases determining from the projected problem will effectively regularize the full problem . the presented analysis is independent of whether the originating problem is over or under determined . for the full problemthe question of determining an optimal parameter is well - studied , see e.g. 
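once the factorization is available, the projected tikhonov problem is a small dense least squares problem in the bidiagonal matrix, so sweeping over many regularization parameters costs almost nothing; this is what makes the parameter-choice rules of the next section cheap to apply. the sketch below, assuming the hypothetical `golub_kahan` routine above and an identity (or already absorbed, invertible) regularization operator, solves the projected problem via the svd of the small matrix and lifts the result back to the full space.

```python
import numpy as np

def projected_tikhonov(B, V, beta1, lam):
    """Solve  min_y ||B y - beta1 * e1||^2 + lam^2 ||y||^2  via the SVD of the
    small (k+1) x k matrix B, then lift the solution to the full space with V.
    Returns (x, y).  B, V, beta1 are as produced by the golub_kahan sketch."""
    P, s, Qt = np.linalg.svd(B, full_matrices=False)
    rhs = beta1 * P[0, :]               # P^T (beta1 * e1)
    phi = s**2 / (s**2 + lam**2)        # filter factors for the projected problem
    y = Qt.T @ (phi * rhs / s)
    return V @ y, y

# usage sketch:
#   U, B, V, beta1 = golub_kahan(A, b, k)
#   x_lam, y_lam = projected_tikhonov(B, V, beta1, lam)
```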
, for a discussion of methods including the morozov discrepancy principle (mdp), the l-curve (lc), generalized cross validation (gcv) and unbiased predictive risk estimation (upre). the use of the mdp, lc and gcv is also widely discussed for the projected problem, particularly starting with the work of kilmer et al, and continued in . further, extensions for windowed regularization, and hence multi-parameter regularization, are also applied for the projected problem. our attention is initially on the use of the upre, for which we find a useful connection between the full and projected formulations. this is not so immediately clear, particularly for the gcv, and is, we believe, the reason why a weighted gcv (wgcv) was required in . in our work we are also able to heuristically explain the weighting parameter in the wgcv. we stress that the approach assumes that the projected system is calculated with full reorthogonalization, a point often overlooked in many discussions, although it is apparent that many references implicitly make this assumption. although our analysis should also be relevant for the case of windowed regularization, it is not a topic for this paper and will be considered in future work. instead we extend the hybrid approach for use with an iteratively reweighted regularizer (irr), which sharpens edges within the solution, hence demonstrating that edge preserving regularization can be applied in the context of regularized lsqr solutions of the least squares problem on a projected subspace. the paper is organized as follows. the regularization parameter estimation techniques of interest are presented in section [sec: parameter estimation]. the discussion in section [sec: parameter estimation] is validated with one dimensional simulations in section [sec: simulationoned]. image restoration problems presented in section [sec: simulationtwod] illustrate the relevance for the two dimensional case. we then go further and demonstrate the use of irr, an approach for approximating total variation regularization, in section [sec: irr]. finally, in section [sec: walnut] we also illustrate the algorithms in the context of sparse tomographic reconstruction of a walnut data set, demonstrating the more general use of the approach beyond deblurring of noisy data. our conclusions are presented in section [conclusions]. it is of particular interest that our analysis applies for both over and under determined systems of equations and is thus potentially of future use for other algorithms in which alternative regularizers are imposed and which also require repeated tikhonov solves at each step. further, this work extends our analysis of the upre in the context of underdetermined but small scale problems in , and demonstrates that irr can be applied for projected solutions. although it is well known that the mdp always leads to an overestimation of the regularization parameter, e.g. , it is still a widely used method for many applications, and is thus an important baseline for comparison. on the other hand, the upre is less well accepted but often leads to a better estimation of the regularization parameter, e.g. . in order to use any specific regularization parameter estimation method for the projected problem it is helpful to understand the derivation on the full problem.
in the discussion that follows we explicitly assume that lsqr is implemented using sufficient precision , namely with full reorthogonalization of the columns of and so that holds .the predictive error , , for the solution , is defined by where is the influence matrix , and compares with the full residual in both equations the first term is deterministic , whereas the second is stochastic , through the assumption that is a random vector . for completeness, we give the trace lemma e.g. ( * ? ? ?* lemma 7.2 ) , as required for the following discussion . for deterministic vector , random vector with diagonal covariance matrix , matrix , and expectation operator using to denote the trace of matrix .applying to both and with the assumption that , due to whitening of noise , and using the symmetry of the influence matrix , we obtain where is the defined to be the expected value of the risk of using the solution .although the first term on the right hand side in each case can not be obtained , we may use in .thus using linearity of the trace and eliminating the first term in the right hand side of gives the upre estimator for the optimal parameter typically , is found by evaluating for a range of , for example by the svd see e.g. [ appb ] , with the minimum found within that range of parameter values , as suggested in for the gcv .see also e.g. ( * ? ? ?* appendix , ( a.6 ) ) for the formulae for calculating the functional in terms of the svd of matrix . for the projected case we consider two different approaches for minimizing the predictive risk using the solution of the projected problem .first observe that from and by the column orthogonality of , we have the residual , with respect to the solution of the projected problem explicitly depending on the regularization parameter , is now given by where , consistent with the definition of the influence matrix .similarly the predictive error is given by comparing with , and with , gives the upre functional for finding the regularization parameter for the solution for the full problem with the solution found with respect to the projected subspace expanding gives where the last equality follows from the cycle property of the trace operator for consistently sized matrices .hence can be evaluated without reprojecting the solution for every back to the full problem .when estimated by upre , the optimal for the solution on the projected space can be found from the projected solution alone .it remains to question whether has any relevance with respect to the projected solution , i.e. does this appropriately regularize the projected solution , otherwise it may not be appropriate to find to minimize this functional on the subspace .observe for , right hand side vector consists of a deterministic and stochastic part , , where for white noise vector and column orthogonal , is a random vector of length with covariance matrix .thus from the derivation of the upre for the full problem defined by system matrix , right hand side and white noise vector , we may immediately write down the upre for the projected problem with system matrix , right hand side and white noise vector . 
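evaluating the projected upre functional over a grid of parameters requires only the svd of the small bidiagonal matrix: the residual norm and the trace of the influence matrix are both expressible through the filter factors. the sketch below assumes whitened noise with variance sigma^2 and uses the standard upre recipe ||r||^2 + 2 sigma^2 trace(H) - (k+1) sigma^2; the additive constant may differ from the paper's functional, but only by a shift that does not affect the minimizer.

```python
import numpy as np

def upre_projected(B, beta1, lambdas, sigma=1.0):
    """UPRE-type functional for the projected problem, evaluated on a grid of
    regularization parameters using the SVD of the small matrix B."""
    P, s, _ = np.linalg.svd(B, full_matrices=False)
    rhs = beta1 * P[0, :]
    resid_out = max(beta1**2 - rhs @ rhs, 0.0)   # part of beta1*e1 outside range(P)
    m_proj = B.shape[0]                           # k + 1
    values = []
    for lam in lambdas:
        phi = s**2 / (s**2 + lam**2)
        resid = resid_out + np.sum(((1.0 - phi) * rhs) ** 2)
        values.append(resid + 2.0 * sigma**2 * np.sum(phi) - m_proj * sigma**2)
    return np.array(values)

# lam_upre = lambdas[np.argmin(upre_projected(B, beta1, lambdas))]
```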
in particular, defining and comparing with , it is immediate that minimizing to minimize the risk for the projected solution , also minimizes the risk for the full solution , with respect to the given subspace .the shift by as compared to is irrelevant in the minimization of the functional .note that this does not immediately minimize the predictive risk ( [ optalpha2 ] ) for the full problem , i.e. , because is needed with respect to solutions in and not just restricted to . by the linearity and cycle properties of the trace operator and in exact arithmetic the large singular values of a good approximation of the large singular values of , ( * ? ? ?* section 9.3.3 ) .thus suppose is such that the first singular values are well - approximated by those of , and that the ill - conditioning of is effectively captured so that there is clear separation between and .then for regularizing the full problem , for which , , and with filter factor , , we have comparing and , with in , we see that we may interpret the determination of as giving a good estimate for if .further , if provides a good estimate for we may interpret the determination of as giving a good estimate for in the case in which the filter factors in the tikhonov regularization are determined for the truncated singular value decomposition ( tsvd ) of , with truncation at .this observation follows theorem 3.2 which connects the use of the tsvd of for the solution with the solution obtained using the tsvd of . to summarize : if is such that for , so that and approximates , then when obtained using the uprefurther , the estimate is found without projecting the solution back to the full space , namely by minimizing . the premise of the mdp , , to find is the assumption that the norm of the residual , follows a distribution with degrees of freedom , .heuristically , the rationale for this choice is seen by re - expressing so that if has been found as a good estimate for then the residual should be dominated by the whitened error vector . for white noise distributed as a distribution with degrees of freedom , from which , with variance .thus we seek a residual such that using a newton root - finding method , see [ appb ] , where we take safety parameter to handle the well - known over smoothing of the mdp .alternatively , we note , where the size of depends on the percentiles of the cumulative distribution with degrees of freedom . the larger less confidence we have in the distribution for , and of as an approximation to . for the projected residual , where follows a distribution with degrees of freedom .this suggests setting a number of other suggestions for a projected discrepancy principle have been presented in the literature , but all imply using dependent on the noise level of the full problem , e.g. , with .it is reported in , however , that while the theory predicts choosing , numerical experiments support reducing .alternatively this may be seen as reducing the degrees of freedom , instead of reducing , consistent with .we deduce that the interpretation for finding the regularization parameter based on the statistical property of the projected residual in contrast to the full residual should be important in determing the size of . for the mdp the degrees of freedom change from to when the residual is calculated on the full space as compared to the projected space .thus is not a good approximation for when obtained using as a guide for the actual size of the projected residual . 
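for the projected discrepancy principle, the regularization parameter is the root of a scalar equation matching the projected residual norm to a target level (of the order of the projected degrees of freedom times the noise variance, scaled by a safety factor). because the residual norm is monotonically increasing in the parameter, a simple safeguarded search suffices; the sketch below uses log-scale bisection rather than the newton iteration mentioned in the text, and the target value is left to the caller since the precise choice of degrees of freedom and safety parameter is exactly the point under discussion.

```python
import numpy as np

def projected_mdp(B, beta1, target, lam_lo=1e-8, lam_hi=1e4, iters=200):
    """Find lam such that ||B y(lam) - beta1 * e1||^2 = target, e.g.
    target = tau * dof * sigma**2 for a chosen safety factor tau and
    degrees of freedom dof.  Uses bisection on a log scale."""
    P, s, _ = np.linalg.svd(B, full_matrices=False)
    rhs = beta1 * P[0, :]
    resid_out = max(beta1**2 - rhs @ rhs, 0.0)

    def resid2(lam):
        w = lam**2 / (s**2 + lam**2)             # 1 - filter factor
        return resid_out + np.sum((w * rhs) ** 2)

    if resid2(lam_lo) >= target:                 # residual already above the target
        return lam_lo
    if resid2(lam_hi) <= target:                 # even heavy smoothing stays below it
        return lam_hi
    lo, hi = lam_lo, lam_hi
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if resid2(mid) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```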
if the full problem is effectively singular , so that for , the degrees of freedom for the full problem are reduced and again . unlike the upre and mdp , the method of generalized cross validation ( gcv ) for finding the regularization parameter does not require any estimate of the noise level .it is , however , a statistical technique based on leave one out validation and has been well - studied in the context of tikhonov regularization , .the optimal parameter is found as the minimizer of the functional ignoring constant scaling of by .the use of the gcv for finding the optimal parameter for the projected problem , as well for finding the subspace parameter , has also received attention in the literature , .the obvious implementation for is the exact replacement in using the projected system as indicated in .it was recognized in ( * ? ? ?* section 5.4 ) , however , that this formulation , tends to lead to solutions which are over smoothed .there it was suggested instead to use the weighted gcv dependent on parameter experiments illustrated that should be smaller for high noise cases , but in all cases is required to avoid the potential of a zero in the denominator . although the choice for is argued heuristically , and an adaptive algorithm to find is given , no theoretical analysis for finding an optimal is discussed .moreover , there is apparently no study of the use of for projection of underdetermined problems .consider now the two denominators in and .first of all it is not difficult to show from , with the not very restrictive requirement , hence . thus picking to minimize the projected gcvwill not minimize the full gcv term .for the weighted gcv , however , moreover , with for and factoring for and in and , respectively , gives the scaled denominators ignoring constant scaling the denominators are equilibrated by taking this result suggests that we need in order for to estimate found with respect to the projected space .if is such that for , and then when obtained using the wgcv .the estimate is found without projecting the solution back to the full space .note that without reorthogonalization of the columns of and clustering of the singular values means we should not expect the equilibration of the denominators to yield the correct weighting parameter .it is interesting that this reorthogonalization was regarded as less significant in , although it is clear from our discussion that it is useful for suggesting the choice .all formulae apply using the svd for replacing that for matrix .the upre functional is given by the mdp functional is given by for the projected case replaces .using the svd for the wgcv functional is given by with this reduces to the expression for the projected gcv , .to illustrate the discussion in section [ sec : parameter estimation ] we investigate the properties of the projected system matrices in the context of the solution of ill - posed one dimensional problems with known solutions . because the regularization parameter estimation techniques rely on the determination of the subspace and on the properties of the spectra of we look at the spectra and at methods to estimate . 
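the projected gcv and weighted gcv differ only in the denominator, which is where the weighting parameter enters; with omega equal to one the plain projected gcv is recovered, while the omega suggested by the equilibration argument above brings the projected denominator in line with the full one. the sketch below evaluates the weighted functional on a grid; as elsewhere, the names are hypothetical and constant scalings that do not affect the minimizer are ignored.

```python
import numpy as np

def wgcv_projected(B, beta1, lambdas, omega=1.0):
    """Weighted GCV functional for the projected problem on a parameter grid;
    omega = 1.0 gives the plain projected GCV."""
    kp1 = B.shape[0]                               # k + 1
    P, s, _ = np.linalg.svd(B, full_matrices=False)
    rhs = beta1 * P[0, :]
    resid_out = max(beta1**2 - rhs @ rhs, 0.0)
    values = []
    for lam in lambdas:
        phi = s**2 / (s**2 + lam**2)
        resid = resid_out + np.sum(((1.0 - phi) * rhs) ** 2)
        denom = (kp1 - omega * np.sum(phi)) ** 2
        values.append(resid / denom)
    return np.array(values)

# lam_wgcv = lambdas[np.argmin(wgcv_projected(B, beta1, lambdas, omega))]
```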
in all experiments we use matlab 2014b with examples from the regularization tool box and the code` bidiag_gk ` associated with the software for the paper , for finding the factorization of matrix .for a given problem defined by without noise , noisy data are obtained as for noise level and with the column of error matrix with columns sampled from a random normal distribution using the matlab function ` randn` .examples presented here use the test problems ` phillips ` , ` shaw ` and ` gravity ` from the regularization toolbox , , all of which are discretizations of fredholm integral equations of the first kind . to show the impact of the ill - posedness of the problem we take problem ` phillips ` for which the picard condition does not hold ,problem ` shaw ` which is severely ill - posed , and problem ` gravity ` that depends on a parameter determining the conditioning of the problem , here we use for severe ill - conditioning and which is better conditioned .simulations for over and under sampled data are obtained by straightforward modification of the relevant functions in . in the examples we only show , illustrating the results for the under sampled cases , with samples . for the given method for calculating the noise levelwe observe that the signal to noise ratio for the data , given by is independent of the test problem .in particular and .the condition of each problem depends on the condition of the matrix , see table [ conditiontable ] , and thus for the same choice of noise propagates differently for each test problem ..condition of the test matrices [ cols="^,^,^,^,^",options="header " , ] these results are substantiated by consideration of figure [ figrelerrs ] .it is well - known that the choice of estimator is non - trivial with different methods performing better under different conditions , these results do not contradict that conclusion .overall we deduce that gcv may work well when the subspace is detected by minimizing and also for problems not satisfying the picard condition .we consider two image deblurring problems , ` grain ` and ` satellite ` , both of size from restoretools .restore tools also provides overloaded matrix operations for calculation of matrix vector products and , where describes a point spread function blurring operation .the calculation of the factorization can be immediately obtained once the object is defined as a psf operator with the overloaded matrix operations . as indicated by the results for the one dimensional simulations it is important to note that all presented results use reorthogonalization in obtaining the factorization .our main point here is to first demonstrate the use of the regularization techniques pmdp , wgcv and upre for increasing , and then to examine a stabilizing technique using an irr , section [ sec : irr ] .results without irr are presented for completeness in section [ sec : lsqr ] and with irr in section [ sec : resirr ] .for contrast with the results presented in we use noise levels and in which corresponds to noise levels and , respectively , in with , yielding . these correspond to bsnr and as calculated by . for immediate comparison with indicate the results using the noise level rather than . 
in figure [ fig2dexample ]we give the true solution , blurred and noisy data and the point spread function .one can observe in fact that even a noise level of is quite low , the main problem here being the blurring .+ in finding the restorations for the data indicated in figure [ badgrain ] and [ badsatellite ] we note first that the matrices for the psfs indicated in figures [ psfgrain ] and [ psfsatellite ] do not satisfy the picard condition .as illustrated in figure [ rhotwod ] , does not show the increase within the shown range of as is clear for with large obtained for the one dimensional examples in figure [ fig : rho ] .still , for ` grain ` does attain a minimum within this range and then exhibits a gradual increase .this suggests that noise is entering the data after the minimum and that one may use where again we advance steps under the assumption that noise enters after the minimum .when the picard condition is not satisfied we may also use to find . in figure [ rhotwod ]the vertical lines indicate the positions of , and . for ` grain `, becomes quickly independent of the number of terms used , already stabilizing at with just terms , with no change even out to a maximum size of in the calculation . for ` satellite ` is less stable only reaching when terms are used in the estimation , but increasing to if terms are required .stability in the choice of with respect to also follows lack of stability in choice of , suggesting that it is preferable to use . in our experimentswe have deduced that it is important to examine the characteristic shape of in determining the optimal choice for the size of the subspace , and will show results using , and .the range for the regularization parameter is also important as is indicated through the windowing approach based on .because there is no clear distinction between the singular values as they decay , we use a single window defined by and apply a filtered truncated svd for the solution which is dominant for the first terms , ie with filter factors for and for . with , . in our resultswe use , , and impose , for the range of in finding the optimal regularization parameter for each of the regularization parameter techniques . to find the _ optimal _ solution , denoted by min in the legends , solutions are found using sampled at points logarithmically on this range and scaled by the mean of the standard deviation of the noise in the data , consistent with the inverse covariance scaling of the problems , and consistent with the range for the regularization parameter used in . in assessing the solutions we adopt the approach in and use both the relative error ( re ) and the mean structured similarity ( mssm ) of the obtained images , , for which larger values correspond to better solutions .the re and mssm for all cases are illustrated in figure [ fig2derrorsstep1 ] .the results are consistent with the literature in terms of the semi convergence behavior of the lsqr and the difficulty for both the mdp and the gcv in estimating a useful regularization parameter . 
on the other hand ,the results with wgcv , pdmd and upre are consistent with each other and verify the analysis in section [ sec : regest ] , providing a stable solution for increasing .the solutions do not achieve the minimal error of the projected solution without regularization , which depends on knowing the optimal for stability .solutions found at the noted as compared to the _ optimal _ solution with minimum error are illustrated in figure [ fig2dsolutionsgrain_step1 ] , demonstrating that the restorations are inadequate at this level of noise .+ iteratively reweighted regularization provides a cost effective approach for sharpening images e.g. , and has been introduced and applied for focusing geophysical inversion , in this context denoted as minimum support regularization , .regularization operator is replaced by a solution dependent operator , initialized with , yielding iterative solution . for where is a focusing parameter which assures that is invertible .immediately and moves entries for which away from .given that is invertible , we can use with system matrix , to obtain the iterative solution , .furthermore , it is straightforward to modify the algorithm for calculation of the gkb factorization for matrix still using the overloaded matrix operations for matrix multiplication by and , and noting that operations with the diagonal matrix are simple component - wise products .the update given by is equivalent to regularizing the system of equations , here dropping the dependence on iteration , suppose that , for and , then for and we may solve the reduced system where is but with column removed for and all other columns scaled by the relevant diagonal entries from .matrix is of size where and vector is vector with entries removed .we can then solve for in yielding the update , where is obtained from with the same diagonal entries removed .the update for is therefore obtained using with entries , for and , for .therefore , to avoid any need to discuss the choice of , we simply use and obtain the gkb factorization for the reduced system with system matrix .the approach for the iteration requires some explanation as to how the range of is obtained at irr iterations , and requires consideration of , which depends on the subspace size from the prior step , and current subspace size .thus calculation of , and are all dependent on as well as and , ie given a specific subspace size at the minimum and maximum sizes to use at step need to be specified . because the update costs for irr should be kept minimal , the subspace size is maintained less that from the previous step , i.e. we pick . because we anticipate that further noise enters with increasing , we expect , which can be determined by examination of . we will examine the choices for the case with noise .for each iteration the range for is constrained using the current singular values , by , where at step , and for the irr updates .having already noted that the pmdp , wgcv and upre give consistent solutions , while gcv and mdp generally lead to larger errors , the experiments reported with irr are given for the projected , _ optimal _ and upre solutions only . 
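the outer irr loop can be summarized as follows: the current solution defines a diagonal weighting, the system matrix is right preconditioned by the inverse weighting, a fresh (smaller) gkb factorization is computed, the projected tikhonov problem is solved with an automatically chosen parameter, and the weighted solution is mapped back. the sketch below uses one standard minimum-support style weighting with a focusing parameter; the paper's exact weighting, subspace sizes and parameter ranges are not reproduced, and `golub_kahan` and `projected_tikhonov` refer to the hypothetical sketches given earlier.

```python
import numpy as np

def irr_loop(A, b, k, lam_rule, n_outer=4, focus=1e-3):
    """Iteratively reweighted regularization (IRR) outer loop, sketch only.
    lam_rule(B, beta1) returns the regularization parameter for the current
    projected problem, e.g. via the upre_projected or wgcv_projected sketches."""
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        w = np.sqrt(x**2 + focus**2)      # one minimum-support style weighting
        Aw = A * w[np.newaxis, :]         # right preconditioning: columns scaled by w
        U, B, V, beta1 = golub_kahan(Aw, b, k)
        lam = lam_rule(B, beta1)
        z, _ = projected_tikhonov(B, V, beta1, lam)
        x = w * z                         # undo the right preconditioning
    return x

# e.g. lam_rule = lambda B, beta1: lambdas[np.argmin(upre_projected(B, beta1, lambdas))]
```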
to demonstrate the effectiveness of the irr for stabilizing the solution obtained using tikhonov regularization with upre and to describe a manual approach for determining the sizes of the optimal subspaces we examine the process first for problem ` grain ` and then ` satellite ` with noise .function at the first step does not differ significantly from the case with noise , shown in figure [ rhotwod ] .we reiterate that the calculation of depends on the maximum subspace considered , here we use .because the update costs for irr should be kept minimal , the subspace size is maintained less that from the first step , here .figure [ grainrhostepk ] shows for the choices of in the legend .it is clear that is almost independent of for the first steps , but that noise enters for .this is also reflected in the re in figure [ grainre ] , the re stabilizes for increasing , and decreases for the first three steps of irr , but increases at step .cropped images are shown in figure [ grainstep3]-[grainstep4 ] . at contrast is increased and some small features not present without the irr become apparent .an equivalent process is detailed for problem ` satellite ` with noise , and for maximum subspace .using gives a choice very close to , but depends very much on identifying which peaks should be included , here we imposed .the plot of with increasing in figure [ satelliterhostepk ] show that noise enters the solution sooner for smaller , but that the various choices for demonstrate similar characteristics , and suggest that no more than steps of irr may be needed . from these plots of select , yielding a subspace selection of for the first update and in subsequent updates with the subspace is maintained small just of size .the relative errors for the two simulations are detailed in table [ tab : errortwodfive ] , in comparison to the _ optimal _ relative error .these results show that the stabilization leads to results which are comparable to those that are _ optimal _ but practically unknown .moreover , it is clear that one may not conclude that finding using is preferable to using or .provided that the solutions are stabilized with the irr , improvements in the solutions are obtained in a limited number of steps using iterative reweighting .further , effective irr steps can be obtained using relatively small subspaces for the iterative updates .iteration & & & min for upre&overall min + & & & & + & & & & + & & & & + & & & & + & + iteration & & & min for upre&overall min + & & & & + & & & & + & & & & + & & & & + the solutions at the first step are illustrated in figure [ fig : twodfive ] , demonstrating that using leads to best solutions in one case , and in the other , although effectively the quality is comparable .the graphs of with increasing in figures [ grainrhostepk ] - [ satelliterhostepk ] indicate that the properties of can be used to determine effective termination of the irr , based on the iteration when noise enters into .our experience has shown that the optimal solution in terms of image quality is achieved not at the step before noise enters in but two steps before . 
in figure [ fig2derrorsgrain ]we contrast the relative errors obtained by the upre , projected and _ optimal _solutions for two steps of irr as compared to the first step .then in figure [ fig2dmeasuresgrain ] we illustrate how the re and the mssm change with the iteration count .the vertical lines in figures [ grainreten]-[satelliessim ] demonstrate that using or makes little difference to the quality of the solution when measured with respect to relative error or mssm .example solutions are given in figure [ fig2dsolutionsgrain_irrstep ] . + to contrast the success of the regularization parameter estimation techniques in the context of a projection problem , we present results for the reconstruction of projection data obtained from tomographic x ray data of a walnut , used for edge preserving reconstruction in and with the description of the data described in .the data are available at .datasets ` datan ` correspond to resolution in the image , and use projections , corresponding to sampling .data are provided with , and .we use resolution with projections , and then downsampled to projections , projections and projections , ie angles , , and .( results with resolutions and are comparable ) .results are presented using the solution at for the projected solution without regularization , and regularized using upre and gcv for comparison , figures [ walnut16460]-[walnut16415 ] , where we do not show results for projections , for which the solutions are almost perfect due to the apparent limited noise in the provided data .the impact of using a reduced number of projections is first evident with just projections . in all the presented results the parameters are determined automatically , after manually picking from manual consideration of the plot for , see figure [ walnutphi ] .all other parameters are estimated in the same way as for the image restoration cases . for increasing sparsity for resolution for the walnut data . ] the results in figure [ walnut16460]-[walnut16415 ] , which show results for one set of data at increasing sparsity , compare with ( * ? ? ?* figure 6.6 and figure 6.7 ) , which give results with resolution for and , respectively , and angle separation , , and .results there use selected choices for the regularization parameter based on a sparsity argument with prior information and seek to support the use of the sparsity argument for reconstruction of sparse data sets , although exhibiting the rather standard total variation blocky structures when applied for truly sparsely sampled data .our results show robust reconstructions with the automatically determined solutions , after first examining the plot for .irr generates marginally improvements in qualitative solutions , more so for the projected case without regularization . to show the impact of the correct choice of on the solution we show a set of results at iteration using in figure [ walnut164tmin18 ] , with the positive constraint .there the projected solution is already significantly noise contaminated for all levels of sparsity , while the upre yields solutions qualitatively similar to the case with . 
in the examples herewe do impose an additional positivity constraint on the solutions at each step , before calculating the iterative weighting matrix .the results demonstrate that the projected problem with automatic determination of can be used to reconstruct sparsely sampled tomographic data , provided that an initial estimate for is manually determined by consideration of the plot of .further , irr can stabilize the solution when has not been appropriately estimated for the non regularized solution .for the sparse data sets the solutions do not exhibit the characteristic blocky reconstructions of total variation image reconstructions , as seen in , although as there the solutions would be inadequate for precise usage .+ + + + + + + + + + +we have demonstrated that regularization parameter estimation by the method of upre can be effectively applied for regularizing the projected problem .our results also explain the use of the weighting parameter in the wgcv , as well as the reduced safety parameter in the mdp when applied for the projected problem .further , edge preserving regularization via the iteratively reweighted regularizer can be applied to stabilize regularized solutions of the projected problem .our results suggest manual estimation of a minimal subspace size can then lead to useful estimates for an optimal projected space , with the use of the irr leading to improvements in the solutions when is found by different methods , including the use of , and , hence making the determination of this less crucial in providing an acceptable solution. future work on this topic should include extending the windowed regularization parameter techniques for finding a multiply weighted gcv for projected regularization , and use of more general iteratively reweighted regularizers accounting for edges in more than one direction in conjunction with the projected solutions .these are topics for further study .suppose the svd of matrix , , is given by , where the singular values are ordered and occur on the diagonal of with zero columns ( when ) or zero rows ( when ) , and , and are orthogonal matrices , .then for the projected case , i.e. , and the expression still applies with replacing , replacing , replacing and in .00 bjrck a 1986 _ numerical methods for least squares problems _ society for industrial and applied mathematics philadelphia pa chung j m easley g and oleary d p 2011 windowed spectral regularization of inverse problems _ siam j. sci .comput . _ * 33 * 6 3175 - 3200 chung j m nagy j and oleary d p 2008 a weighted gcv method for lanczos hybrid regularization _ etna _ ,* 28 * , 149 - 167 chung j m kilmer m e and oleary d p 2015 a framework for regularization via operator approximation _siam j. sci .* 37 * 2 b332-b359 donatelli m and reichel l 2014 square smoothing regularization matrices with accurate boundary conditions _ j computational and applied mathematics _* 272 * 334 - 349 fong d c - l and saunders m a 2011 lsmr : an iterative algorithm for sparse least - squares problems _siam j. sci .comput . 
_ * 33 * 5 , 2950 - 2971 golub g h heath m and wahba g 1979 generalized cross validation as a method for choosing a good ridge parameter_ technometrics _ * 21 * 2 215 - 223 .golub g h and van loan c 1996 _ matrix computations _ john hopkins press baltimore 3rd ed .hmlinen k harhanen l kallonen a kujanp a niemi e and siltanen s 2015 tomographic x - ray data of a walnut arxiv:1502.04064v1 , http://www.fips.fi/dataset.php .hmlinen k kallonen a kolehmainen v lassas m niinimki k and siltanen s 2013 sparse tomography , _ siam journal of scientific computing _ * 35 * 3 , b644- b665 hanke m and hansen p c 1993 regularization methods for large scale problems _ surveys math .indust . _ * 3 * 253 - 315 hansen p c 1998 _ rank - deficient and discrete ill - posed problems : numerical aspects of linear inversion _siam monographs on mathematical modeling and computation * 4 * philadelphia hansen p c 2007 regularization tools : a matlab package for analysis and solution of discrete ill - posed problems version 4.0 for matlab 7.3 , _ numerical algorithms _ * 46 * , 189 - 194 , and http://www2.imm.dtu.dk/~pcha/regutools/ hansen p c and jensen t k 2008 noise propagation in regularizing iterations for image deblurring _ etna _ * 31 * 204 - 220 hansen p c nagy j g and oleary d p 2006 _ deblurring images matrices spectra and filtering _siam philadelphia hntynkov i plesinger m and strakos , z 2009 the regularizing effect of the golub - kahan iterative bidiagonalization and revealing the noise level in the data _ bit numerical mathematics _ *49 * 4 669 - 696 hochstenbach m e and reichel l 2010 an iterative method for tikhonov regularization with general linear regularization operator _j. integral equations appl . _* 22 * 463 - 480 kilmer m e and oleary d p 2001 choosing regularization parameters in iterative methods for ill - posed problems _ siam journal on matrix analysis and applications _ * 22 * 1204 - 1221 morozov v a 1966 on the solution of functional equations by the method of regularization _ sov .dokl . _ * 7 * 414 - 417 nagy j g palmer k and perrone l 2004 iterative methods for image deblurring : an object oriented approach _ numerical algorithms _ * 36 * 73 - 93 neelamani r choi h and baraniuk r g 2004 forward : fourier - wavelet regularized deconvolution for ill - conditioned systems _ ieee transactions on signal processing _ * 52 * 2 418 - 433 paige c c and saunders m a 1981 towards a generalized singular value decomposition _siam journal on numerical analysis _ * 18 * 3 398 - 405 paige c c and saunders m a 1982 lsqr : an algorithm for sparse linear equations and sparse least squares _ acm trans .math . software _* 8 * 43 - 71 paige c c and saunders m a 1982 algorithm 583 lsqr : sparse linear equations and least squares problems _ acm trans .math . software _ * 8 * 195 - 209 portniaguine o and zhdanov m s 1999 focusing geophysical inversion images _ geophysics _ * 64 * 874 - 887 paoletti v hansen p c hansen m f and maurizio f 2014 a computationally efficient tool for assessing the depth resolution in large - scale potential - field inversion _ geophysics _ * 79 * 4 a33a38 reichel l sgallari f and ye q 2012 tikhonov regularization based on generalized krylov subspace methods _ appl . 
numer ._ , * 62 * 1215 - 1228 renaut r a hnetynkov i and mead j l 2010 regularization parameter estimation for large scale tikhonov regularization using a priori information _ computational statistics and data analysis _ * 54 * 12 3430 - 3445 doi:10.1016/j.csda.2009.05.026 vatankhah s ardestani v e and renaut r a 2014 automatic estimation of the regularization parameter in 2-d focusing gravity inversion : application of the method to the safo manganese mine in the northwest of iran _ journal of geophysics and engineering _ * * * 11 * 045001 vatankhah s ardestani v e and renaut r a 2015 application of the principle and unbiased predictive risk estimator for determining the regularization parameter in 3-d focusing gravity inversion _ geophysical j international _ * 200 * 265 - 277 doi : 10.1093/gji / ggu397 vatankhah s renaut r a and ardestani v e 2014 regularization parameter estimation for underdetermined problems by the principle with application to 2d focusing gravity inversion _ inverse problems _ * 30 * 085002 vogel c r 2002 _ computational methods for inverse problems _ siam frontiers in applied mathematics siam philadelphia u.s.a. wohlberg b and rodriguez p 2007 an iteratively reweighted norm algorithm for minimization of total variation functionals _ ieee signal processing letters _ * 14 * 948951 wang z bovik a c sheikh h r simoncelli e p 2004 image quality assessment : from error visibility to structural similarity _ ieee trans . image process ._ * 13 * 600 - 612 www.cns.nyu.edu/~lcv/ssim zhdanov m s 2002 _ geophysical inverse theory and regularization problems _ elsevier amsterdam .
tikhonov regularization for projected solutions of large - scale ill - posed problems is considered . the golub - kahan iterative bidiagonalization is used to project the problem onto a subspace and regularization then applied to find a subspace approximation to the full problem . determination of the regularization parameter using the method of unbiased predictive risk estimation is considered and contrasted with the generalized cross validation and discrepancy principle techniques . examining the unbiased predictive risk estimator for the projected problem , it is shown that the obtained regularized parameter provides a good estimate for that to be used for the full problem with the solution found on the projected space . the connection between regularization for full and projected systems for the discrepancy and generalized cross validation estimators is also discussed and an argument for the weight parameter in the weighted generalized cross validation approach is provided . all results are independent of whether systems are over or underdetermined , the latter of which has not been considered in discussions of regularization parameter estimation for projected systems . numerical simulations for standard one dimensional test problems and two dimensional data for both image restoration and tomographic image reconstruction support the analysis and validate the techniques . the size of the projected problem is found using an extension of a noise revealing function for the projected problem . furthermore , an iteratively reweighted regularization approach for edge preserving regularization is extended for projected systems , providing stabilization of the solutions of the projected systems with respect to the determination of the size of the projected subspace . * keywords : * large - scale inverse problems , golub - kahan bidiagonalization , regularization parameter estimation , unbiased predictive risk estimator , discrepancy principle , generalized cross validation , iteratively reweighted schemes
this paper is motivated by an open question on a system of interacting locally regulated diffusions . in , a sufficient condition for local extinction is established for such a system . in general , however , there is no criterion available for global extinction , that is , convergence of the total mass process to zero when started in finite total mass .the method of proof for the local extinction result in is a comparison with a mean field model which solves where is a standard brownian motion and where are suitable functions satisfying .this mean field model arises as the limit as ( see theorem 1.4 in for the case ) of the following system of interacting locally regulated diffusions on islands with uniform migration }}}\,dt\\ & + h{{{\bigl(x_t^n(i)\bigr)}}}dt + \sqrt{2g{{{\bigl(x_t^n(i)\bigr)}}}}\,db_t(i)\qquad i=0,\ldots , n-1 . \end { split}\ ] ] for this convergence , may be assumed to be independent and identically distributed with the law of being independent of .the intuition behind the comparison with the mean field model is that if there is competition ( modeled through the functions and in ) among individuals and resources are everywhere the same , then the best strategy for survival of the population is to spread out in space as quickly as possible .the results of cover translation invariant initial measures and local extinction . for general and ,not much is known about extinction of the total mass process .let the solution of be started in , .we prove in a forthcoming paper under suitable conditions on the parameters that the total mass converges as .in addition , we show in that paper that the limiting process dominates the total mass process of the corresponding system of interacting locally regulated diffusions started in finite total mass .consequently , a global extinction result for the limiting process would imply a global extinction result for systems of locally regulated diffusions. in this paper we introduce and study a model which we call _ virgin island model _ and which is the limiting process of as .note that in the process an emigrant moves to a given island with probability .this leads to the characteristic property of the virgin island model namely every emigrant moves to an unpopulated island .our main result is a necessary and sufficient condition ( see below ) for global extinction for the virgin island model .moreover , this condition is fairly explicit in terms of the parameters of the model .now we define the model . on the -th islandevolves a diffusion with state space given by the strong solution of the stochastic differential equation where is a standard brownian motion .this diffusion models the total mass of a population and is the diffusion limit of near - critical branching particle processes where both the offspring mean and the offspring variance are regulated by the total population .later , we will specify conditions on and so that is well - defined .for now , we restrict our attention to the prototype example of a feller branching diffusion with logistic growth in which , and with .note that zero is a trap for , that is , implies for all .mass emigrates from the -th island at rate and colonizes unpopulated islands .a new population should evolve as the process .thus , we need the law of excursions of from the trap zero . 
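to make the single-island dynamics concrete, the prototype mentioned above can be read (as an assumption about the elided formulas) as the logistic feller branching diffusion dY = (gamma*Y - c*Y^2) dt + sqrt(2*beta*Y) dW with positive parameters gamma, c and beta. a crude euler-maruyama sketch of one island's mass path, with the state clipped at the trap zero, is given below; it illustrates the within-island dynamics only and is unrelated to the excursion-measure construction used for the offspring islands.

```python
import numpy as np

def simulate_island(y0, gamma, c, beta, T, dt=1e-3, rng=None):
    """Euler-Maruyama path of the (assumed) prototype island diffusion
        dY = (gamma*Y - c*Y**2) dt + sqrt(2*beta*Y) dW,   Y_0 = y0,
    with the state clipped at zero, which is a trap for the exact dynamics."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(round(T / dt))
    path = np.empty(n + 1)
    path[0] = y = float(y0)
    for i in range(1, n + 1):
        if y == 0.0:                       # zero is absorbing
            path[i:] = 0.0
            break
        dw = np.sqrt(dt) * rng.standard_normal()
        y += (gamma * y - c * y * y) * dt + np.sqrt(2.0 * beta * y) * dw
        y = max(y, 0.0)
        path[i] = y
    return path

# e.g. path = simulate_island(y0=0.5, gamma=1.0, c=1.0, beta=1.0, T=20.0)
```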
for this , define the set of excursions from zero by , \,\chi_t=0{\ensuremath{\;\;\forall\;}}t\in(-\infty,0]\cup[t_0,\infty)\bigr\}\ ] ] where is the first hitting time of .the set is furnished with locally uniform convergence . throughout the paper , and denote the set of continuous functions and the set of cdlg functions , respectively , between two intervals .furthermore , define the _ excursion measure _ is a -finite measure on .it has been constructed by pitman and yor as follows : under , the trajectories come from zero according to an entrance law and then move according to the law of .further characterizations of are given in , too .for a discussion on the excursion theory of one - dimensional diffusions , see . we will give a definition of later .next we construct all islands which are colonized from the -th island and call these islands the first generation .then we construct the second generation which is the collection of all islands which have been colonized from islands of the first generation , and so on .figure [ f : excursion_tree ] illustrates the resulting tree of excursions . for the generation - wise construction ,we use a method to index islands which keeps track of which island has been colonized from which island .an island is identified with a triple which indicates its mother island , the time of its colonization and the population size on the island as a function of time . for ,let be a possible -th island .for each and , define which we will refer to as the set of all possible islands of the -th generation with fixed -th island .this notation should be read as follows .the island has been colonized from island at time and carries total mass at time .notice that there is no mass on an island before the time of its colonization .the island space is defined by denote by the colonization time of island if for some .furthermore , let be a set of poisson point processes on with intensity measure } } } = a{{{\bigl(\chi(t - s)\bigr)}}}\,dt\otimes { { q}_y}(d\psi)\quad \iota\in{\ensuremath{{{\ensuremath{\mathcal i}}}}}.\ ] ] for later use , let .we assume that the family is independent for every .the virgin island model is defined recursively generation by generation .the -th generation only consists of the -th island the -st generation , , is the ( random ) set of all islands which have been colonized from islands of the -th generation the set of all islands is defined by the total mass process of the virgin island model is defined by our main interest concerns the behaviour of the law of as .the following observation is crucial for understanding the behavior of as .there is an inherent branching structure in the virgin island model .consider as new `` time coordinate '' the number of island generations .one offspring island together with all its offspring islands is again a virgin island model but with the path on the -th island replaced by an excursion path .because of this branching structure , the virgin island model is a multi - type crump - mode - jagers branching process ( see under `` general branching process '' ) if we consider islands as individuals and as type space .we recall that a single - type crump - mode - jagers process is a particle process where every particle gives birth to particles at the time points of a point process until its death at time , and are independent and identically distributed .the literature on crump - mode - jagers processes assumes that the number of offspring per individual is finite in every finite time interval . 
in the virgin island model , however , every island has infinitely many offspring islands in a finite time interval because is an infinite measure .the most interesting question about the virgin island model is whether or not the process survives with positive probability as .generally speaking , branching particle processes survive if and only if the expected number of offspring per particle is strictly greater than one , e.g. the crump - mode - jagers process survives if and only if >1 ] ) , this is just the virgin island model with replaced by a feller branching diffusion , i.e. , , .it would be interesting to know whether existence and uniqueness of such stochastic integral equations still hold if the excursion measure of the feller branching diffusion is replaced by .models with competition have been studied by various authors .mueller and tribe ( 1994 ) and horridge and tribe ( 2004 ) investigate an one - dimensional spde analog of interacting feller branching diffusions with logistic growth which can also be viewed as kpp equation with branching noise .bolker and pacala ( 1997 ) propose a branching random walk in which the individual mortality rate is increased by a weighted sum of the entire population .etheridge ( 2004 ) studies two diffusion limits hereof .the `` stepping stone version of the bolker - pacala model '' is a system of interacting feller branching diffusions with non - local logistic growth .the `` superprocess version of the bolker - pacala model '' is an analog of this in continuous space .hutzenthaler and wakolbinger , motivated by , investigated interacting diffusions with local competition which is an analog of the virgin island model but with mass migrating on instead of migration to unpopulated islands .the following assumption guarantees existence and uniqueness of a strong -valued solution of equation , see e.g. theorem iv.3.1 in .assumption [ a : a1 ] additionally requires that is essentially linear .[ a : a1 ] the three functions , and are locally lipschitz continuous in and satisfy .the function is strictly positive on .furthermore , and satisfy the linear growth condition where denotes the maximum of and .in addition , holds for all and for some constants .the key ingredient in the construction of the virgin island model is the law of excursions of from the boundary zero .note that under assumption [ a : a1 ] , zero is an absorbing boundary for , i.e. implies for all . as zerois not a regular point , it is not possible to apply the well - established it excursion theory .instead we follow pitman and yor and obtain a -finite measure to be called _ excursion measure _ on ( defined in ) . for this, we additionally assume that hits zero in finite time with positive probability .the following assumption formulates a necessary and sufficient condition for this ( see lemma 15.6.2 in ) . to formulate the assumption ,we define note that is a scale function , that is , holds for all , see section 15.6 in .[ a : a2 ] the functions , and satisfy for some . note thatif assumption [ a : a2 ] is satisfied , then holds for all .pitman and yor construct the excursion measure in three different ways one being as follows .the set of excursions reaching level has -measure .conditioned on this event an excursion follows the diffusion conditioned to converge to infinity until this process reaches level . 
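The scale function appearing in this assumption is determined by the net drift h - a and the diffusion coefficient g of the island diffusion. The following sketch computes it by quadrature under the usual one-dimensional-diffusion conventions, s(z) = exp(-\int_1^z (h(u)-a(u))/g(u) du) and \bar{S}(y) = \int_0^y s(z) dz, again for the assumed prototype coefficients; the anchor point of the inner integral is a choice of ours and only changes \bar{S} by a constant factor.

```python
import numpy as np
from scipy.integrate import quad

# Assumed prototype coefficients (illustrative values only).
gamma, kappa, alpha, beta = 1.0, 0.5, 0.3, 1.0

def net_drift(y):                 # h(y) - a(y)
    return (gamma - alpha) * y - kappa * y * y

def diffusion(y):                 # g(y); the generator is g d^2/dy^2 + (h - a) d/dy
    return beta * y

def scale_density(z, anchor=1.0):
    """s(z) = exp(- int_anchor^z (h(u) - a(u)) / g(u) du)."""
    integral, _ = quad(lambda u: net_drift(u) / diffusion(u), anchor, z)
    return np.exp(-integral)

def scale_function(y, anchor=1.0):
    """S(y) = int_0^y s(z) dz, normalized so that S(0) = 0."""
    value, _ = quad(lambda z: scale_density(z, anchor), 0.0, y, limit=200)
    return value
```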
from this time on the excursion follows an independent unconditioned process .we carry out this construction in detail in section [ sec : excursions_from_a_trap_of_one_dimensional_diffusions ] .in addition pitman and yor describe the excursion measure `` in a preliminary way as '' where the limit indicates weak convergence of finite measures on away from neighbourhoods of the zero - trajectory .however , they do not give a proof .having identified as the limit in will enable us to transfer explicit formulas for to explicit formulas for .we establish the existence of the limit in in theorem [ thm : existence_excursion_measure ] below . for this , let the topology on be given by locally uniform convergence . furthermore ,recall from , the definition of from and the definition of from .[ thm : existence_excursion_measure ] assume [ a : a1 ] and [ a : a2 ] .then there exists a -finite measure on such that for all bounded continuous for which there exists an such that whenever . for our proof of the global extinction result for the virgin island model , we need the scaling function in to behave essentially linearly in a neighbourhood of zero .more precisely , we assume to exist in . from definition of is clear that a sufficient condition for this is given by the following assumption .[ a : s_bar ] the integral has a limit in as .it follows from dominated convergence and from the local lipschitz continuity of and that assumption [ a : s_bar ] holds if is finite .in addition , we assume that the expected total emigration intensity of the virgin island model is finite .lemma [ l : finite_man_hours ] shows that , under assumptions [ a : a1 ] and [ a : a2 ] , an equivalent condition for this is given in assumption [ a : finite_man_hours ] .[ a : finite_man_hours ] the functions , and satisfy for some and then for all . we mention that if assumptions [ a : a1 ] , [ a : a2 ] and [ a : finite_man_hours ] hold , then the process hits zero in finite time almost surely ( see lemma [ l : finite_time_extinction ] and lemma [l : finite_man_hours ] ) .furthermore , we give a generic example for , and namely , , with .the assumptions [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] are all satisfied if and if .assumption [ a : a2 ] is not met by , , and because then , and condition fails to hold .next we formulate the main result of this paper .theorem [ thm : extinction_vim ] proves a nontrivial transition from extinction to survival . for the formulation of this result ,we define which is well - defined under assumption [ a : s_bar ] .note that .define the excursion measure and recall the total mass process from .[ thm : extinction_vim ] assume [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] .then the total mass process started in dies out ( i.e. , converges in probability to zero as ) if and only if if fails to hold , then converges in distribution as to a random variable satisfying for all and some .the constant is the unique strictly positive fixed - point of a function defined in lemma [ l : fixed_point ] . in the critical case , that is , equality in, converges to zero in distribution as .however , it turns out that the expected area under the graph of is infinite .in addition , we obtain in theorem [ thm : expected_man_hours ] the asymptotic behaviour of the expected area under the graph of up to time as . 
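Both the extinction criterion of the theorem and the constant q can be explored numerically once the excursion measure is approximated. In the spirit of the limit in theorem [thm:existence_excursion_measure], the sketch below replaces Q by (1/\bar{S}(y_0)) times the law of Y started at a small y_0, estimates the expected total emigration \int (\int_0^\infty a(\chi_s) ds) Q(d\chi) by Monte Carlo (extinction being predicted when the estimate is at most one), and, in the supercritical case, iterates the concave map K of lemma [l:fixed_point] to locate its positive fixed point. The normalization, the prototype coefficients and all parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Assumed prototype: h(y) = gamma*y - kappa*y**2, a(y) = alpha*y, g(y) = beta*y.
gamma, kappa, alpha, beta = 1.0, 0.5, 0.3, 1.0
a = lambda y: alpha * y

def S_bar(y):
    """Scale function with S_bar(0) = 0, as in the previous sketch."""
    s = lambda z: np.exp(-quad(lambda u: ((gamma - alpha) - kappa * u) / beta, 1.0, z)[0])
    return quad(s, 0.0, y, limit=200)[0]

def total_emigration(y0, dt=1e-2, t_max=100.0, rng=None):
    """One sample of int_0^{T_0} a(Y_s) ds (Euler scheme, truncated at t_max)."""
    rng = np.random.default_rng() if rng is None else rng
    y, acc, t = y0, 0.0, 0.0
    while y > 0.0 and t < t_max:
        acc += a(y) * dt
        drift = (gamma - alpha) * y - kappa * y * y
        y = max(y + drift * dt + np.sqrt(2.0 * beta * y * dt) * rng.standard_normal(), 0.0)
        t += dt
    return acc

def excursion_samples(y0=0.1, n_paths=500, seed=0):
    """Emigration samples plus the weight 1/S_bar(y0) approximating the
    sigma-finite excursion measure Q."""
    rng = np.random.default_rng(seed)
    samples = np.array([total_emigration(y0, rng=rng) for _ in range(n_paths)])
    return samples, 1.0 / S_bar(y0)

def extinction_criterion(samples, weight):
    """Estimate of int (int_0^infty a(chi_s) ds) Q(dchi); extinction is
    predicted iff this value is at most one."""
    return weight * samples.mean()

def survival_fixed_point(samples, weight, z0=1.0, n_iter=500):
    """Iterate z -> K(z), K(z) = int [1 - exp(-z int a(chi_s) ds)] Q(dchi);
    for this concave K with K(0) = 0 the iteration converges to the maximal
    fixed point q, which is zero unless the criterion above exceeds one."""
    z = z0
    for _ in range(n_iter):
        z = weight * np.mean(1.0 - np.exp(-z * samples))
    return z

samples, weight = excursion_samples()
print("expected total emigration:", extinction_criterion(samples, weight))
print("fixed point q:", survival_fixed_point(samples, weight))
```

Because excursions started from a small y_0 mostly die quickly while the weight 1/\bar{S}(y_0) is large, the estimator has a high variance; it is only meant to make the criterion concrete.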
for this , define and similarly with .if assumptions [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] hold , then is finite for fixed ; see lemma [ l : finite_man_hours ] . furthermore , under assumptions [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] , by the dominated convergence theorem .[ thm : expected_man_hours ] assume [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] . if the left - hand side of is strictly smaller than one , then the expected area under the path of is equal to for all .otherwise , the left - hand side of is infinite . in the critical case , that is , equality in , where the right - hand side is interpreted as zero if the denominator is equal to infinity . in the supercritical case ,i.e. , if fails to be true , let be such that then the order of growth of the expected area under the path of up to time as can be read off from for all .the following result is an analog of the kesten - stigum theorem , see . in the supercritical case, converges to a random variable as .in addition , is not identically zero if and only if the -condition holds .we will prove a more general version hereof in theorem [ thm : general : xlogx ] below .unfortunately , we do not know of an explicit formula in terms of , and for the left - hand side of . aiming at a condition which is easy to verify , we assume instead of that the second moment is finite . in assumption [ a : finite_man_hours_squared ], we formulate a condition which is slightly stronger than that , see lemma [ l : q_explicit_man_hours ] below .[ a : finite_man_hours_squared ] the functions , and satisfy for some and then for all .[ thm : xlogx ] assume [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours_squared ] .suppose that fails to be true ( supercritical case ) and let be the unique solution of .then in the weak topology and .theorem [ thm : existence_excursion_measure ] will be established in section [ sec : excursions_from_a_trap_of_one_dimensional_diffusions ] . note that section [ sec : excursions_from_a_trap_of_one_dimensional_diffusions ] does not depend on the sections [ sec : random_characteristics]-[sec : convergence_supercritical ] .we will prove the survival and extinction result of theorem [ thm : extinction_vim ] in two steps .in the first step , we obtain a criterion for survival and extinction in terms of .more precisely , we prove that the process dies out if and only if the expression in is smaller than or equal to one . in this step, we do not exploit that is the excursion measure of .in fact , we will prove an analog of theorem [ thm : extinction_vim ] in a more general setting where is replaced by some -finite measure and where the islands are counted with random characteristics .see section [ sec : random_characteristics ] below for the definitions .the analog of theorem [ thm : extinction_vim ] is stated in theorem [ thm : general : extinction ] , see section [ sec : random_characteristics ] , and will be proven in section [ sec : extinction_vim ] . the key equation for its proof is contained in lemma [ l : tree_structure ] which formulates the branching structure in the virgin island model . 
in the second step ,we calculate an expression for in terms of and .this will be done in lemma [l : q_explicit_man_hours ] .theorem [ thm : extinction_vim ] is then a corollary of theorem [ thm : general : extinction ] and of lemma [ l : q_explicit_man_hours ] , see section [ sec : proof_main_theorems ] .similarly , a more general version of theorem [ thm : expected_man_hours ] is stated in theorem [ thm : general : expected_man_hours ] , see section [ sec : random_characteristics ] below .the proofs of theorem [ thm : expected_man_hours ] and of theorem [ thm : general : expected_man_hours ] are contained in section [ sec : proof_main_theorems ] and section [ sec : proof_of_theorem_expected_man_hours ] , respectively . as mentioned in section [ sec : introduction ] , a rescaled version of converges in the supercritical case .this convergence is stated in a more general formulation in theorem [ thm : general : xlogx ] , see section [ sec : random_characteristics ] below .the proofs of theorem [ thm : xlogx ] and of theorem [ thm : general : xlogx ] are contained in section [ sec : proof_main_theorems ] and in section [ sec : convergence_supercritical ] , respectively .in the proof of the extinction result of theorem [ thm : extinction_vim ] , we exploit that one offspring island together with all its offspring islands is again a virgin island model but with a typical excursion instead of on the -th island . for the formulation of this branching property, we need a version of the virgin island model where the population on the -th island is governed by .more generally , we replace the law of the first island by some measure and we replace the excursion measure by some measure . given two -finite measures and on the borel--algebra of , we define the _ virgin island model with initial island measure and excursion measure _ as follows .define the random sets of islands , , and through the definitions , , and with and replaced by and , respectively .a simple example for and is where is a random variable and is the dirac measure on the path . then the virgin island model coincides with a crump - mode - jagers process in which a particle has offspring according to a rate poisson process until its death at time .furthermore , our results do not only hold for the total mass process but more generally when the islands are counted with random characteristics .this concept is well - known for crump - mode - jagers processes , see section 6.9 in .assume that , , are separable and nonnegative processes with the following properties .it vanishes on the negative half - axis , i.e. for . informally speaking our main assumption on that it does not depend on the history .formally we assume that furthermore , we assume that the family is independent for each and is measurable . as a short notation , define for . with this , we define and say that is a _ virgin island process counted with random characteristics _ . instead of ,we write for a path and note that is measurable .a prominent example for is the deterministic random variable . in this case, is the total mass of all islands at time .notice that defined in is a special case hereof , namely .another example for is .then is the total mass at time of all islands which have been colonized in the last time units .if , then . 
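As a point of reference, a single-type Crump-Mode-Jagers process of the kind just mentioned is easy to simulate: every individual gives birth at the points of a Poisson process until its death. In the sketch below the birth rate and the deterministic lifetime are placeholders, and the second function illustrates counting with a random characteristic, here the indicator phi(u) = 1_{[0, lifetime)}(u), so that the count is the number of individuals alive at time t.

```python
import numpy as np

def cmj_birth_times(t_max, birth_rate=1.2, lifetime=1.0, rng=None, max_pop=10**6):
    """Birth times (up to t_max) of a single-type Crump-Mode-Jagers process in
    which each individual reproduces at the points of a rate `birth_rate`
    Poisson process until its death `lifetime` time units after its birth."""
    rng = np.random.default_rng() if rng is None else rng
    births, queue = [0.0], [0.0]          # the ancestor is born at time 0
    while queue and len(births) < max_pop:
        s = queue.pop()
        u = s
        while True:
            u += rng.exponential(1.0 / birth_rate)
            if u > s + lifetime or u > t_max:
                break
            births.append(u)
            queue.append(u)
    return np.array(births)

def counted_with_characteristic(births, t, lifetime=1.0):
    """Population counted with the characteristic phi(u) = 1_{[0, lifetime)}(u),
    i.e. the number of individuals alive at time t."""
    return int(np.sum((births <= t) & (t < births + lifetime)))
```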
as in section[ sec : main_results ] , we need an assumption which guarantees finiteness of .[ a : general : finite_moments ] the function is continuous and there exist such that for all .furthermore , for every the analog of assumption [ a : finite_man_hours ] in the general setting is the following assumption .[ a : general : finite_man_hours ] both the expected emigration intensity of the -th island and of subsequent islands are finite : in section [ sec : main_results ] , we assumed that hits zero in finite time with positive probability .see assumption [ a : a2 ] for an equivalent condition .together with [ a : finite_man_hours ] , this assumption implied almost sure convergence of to zero as . in the general setting , we need a similar but somewhat weaker assumption .more precisely , we assume that converges to zero in distribution both with respect to and with respect to .[ a : convergence_to_zero ] the random processes and the measures and satisfy for all . having introduced the necessary assumptions , we now formulate the extinction and survival result of theorem [ thm : extinction_vim ] in the general setting .[ thm : general : extinction ] let be a probability measure on and let be a measure on .assume [ a : general : finite_moments ] , [ a : general : finite_man_hours ] and [ a : convergence_to_zero ] .then the virgin island process counted with random characteristics with -th island distribution and with excursion measure dies out ( i.e. , converges to zero in probability ) if and only if in case of survival , the process converges weakly as to a probability measure with support in which puts mass on the point where is the unique strictly positive fixed - point of the assumption on to be a probability measure is convenient for the formulation in terms of convergence in probability . for a formulation in the case of a -finite measure , see the proof of the theorem in section [ sec : extinction_vim ] .next we state theorem [ thm : expected_man_hours ] in the general setting . for its formulation ,define and similarly with replaced by .[ thm : general : expected_man_hours ] assume [ a : general : finite_moments ] and [ a : general : finite_man_hours ] . if the left - hand side of is strictly smaller than one and if both and are integrable , then }}}\nu(d\chi ) = \int_0^\infty \!\!\!f^\nu(s)ds + \frac{\int_0^\infty f^q(s)\,ds\,\int \int_0^\infty a{{{\bigl(\chi_s\bigr)}}}\,ds\nu(d\chi ) } { 1-\int { { { \bigl(\int_0^\infty a{{{\bigl(\chi_s\bigr)}}}\,ds\bigr)}}}q(d\chi)}\ ] ] which is finite and strictly positive .otherwise , the left - hand side of is infinite .if the left - hand side of is equal to one and if both and are integrable , }}}\nu(d\chi ) = \frac{\int_0^\infty f^q(s)\,ds\,{\ensuremath{{\displaystyle \cdot}}}\int \int_0^\infty a{{{\bigl(\chi_s\bigr)}}}\,ds\,\nu(d\chi ) } { \int_0^\infty s\int a{{{\bigl(\chi_s\bigr)}}}q(d\chi)\,ds } <\infty\ ] ] where the right - hand side is interpreted as zero if the denominator is equal to infinity . in the supercritical case , i.e. , if fails to be true , let be such that additionally assume that is continuous a.e . 
with respect to the lebesgue measure , and that as .then the order of convergence of the expected total intensity up to time can be read off from }}}\nu(d\chi ) = \frac{1}{{{\ensuremath{\alpha}}}}\,{{\ensuremath{{\displaystyle \lim_{t { \rightarrow}\infty}}}}}e^{-{{\ensuremath{\alpha}}}t}\int { { \ensuremath{\mathbf{e}}}}{{{\bigl[v_t^{\phi,\chi , q}\bigr]}}}\nu(d\chi)\ ] ] and from }}}\nu(d\chi ) = \frac{\int_0^\infty e^{-{{\ensuremath{\alpha}}}s}f^q(s)\,ds \,{\ensuremath{{\displaystyle \cdot}}}\int_0^\infty e^{-{{\ensuremath{\alpha}}}s}\int a{{{\bigl(\chi_s\bigr)}}}\nu(d\chi)\,ds } { \int_0^\infty s e^{-{{\ensuremath{\alpha}}}s}\int a{{{\bigl(\chi_s\bigr)}}}q(d\chi)\,ds}.\ ] ] for the formulation of the analog of the kesten - stigum theorem , denote by the right - hand side of with replaced by .furthermore , define for every path . for our proof of theorem [ thm : general : xlogx ] , we additionally assume the following properties of . [ a : for_skdist ] the measure satisfies for every and }}}q(d\chi)<\infty,\quad \sup_{t\geq0}\int{{\ensuremath{\mathbf{e}}}}{{{\bigl(\phi_\chi^2(t)\bigr ) } } } q(d\chi)<\infty.\ ] ] [ thm : general : xlogx ] let be a probability measure on and let be a measure on .assume [ a : general : finite_moments ] , [ a : general : finite_man_hours ] , [ a : convergence_to_zero ] and [ a : for_skdist ] .suppose that ( supercritical case ) and let be the unique solution of .then in the weak topology where is a nonnegative random variable .the variable is not identically zero if and only if where .if holds , then }}}\nu(d\chi ) , { { \ensuremath{\mathbf{p}}}}{{{\bigl(w=0\bigr)}}}=\int{{{\bigl[e^{-q\int_0^\infty a{{{(\chi_s)}}}\,ds}\bigr]}}}\nu(d\chi ) \end{split}\ ] ] where is the unique strictly positive fixed - point of . comparing with, we see that . consequently , the virgin island process conditioned on not converging to zero grows exponentially fast with rate as .we mentioned in the introduction that there is an inherent branching structure in the virgin island model .one offspring island together with all its offspring islands is again a virgin island model but with a typical excursion instead of on the -th island . in lemma[ l : tree_structure ] , we formalize this idea . as a corollary thereof , we obtain an integral equation for the modified laplace transform of the virgin island model in lemma [ l : integral_laplace_trafo ] which is the key equation for our proof of the extinction result of theorem [ thm : extinction_vim ] .recall the notation of section [ sec : introduction ] and of section [ sec : random_characteristics ] .[ l : tree_structure ] let .there exists an independent family of random variables which is independent of and of such that and such that for all .write and .define for where for . comparing and with, we see that define for and for summing over we obtain for this is equality .independence of the family follows from independence of and from independence of .it remains to prove . because of assumption the random characteristics only depends on the last part of .therefore summing over results in and finishes the proof . in order to increase readability ,we introduce the following suggestive symbolic abbreviation } } } : = \int{{\ensuremath{\mathbf{e}}}}f{{{\bigl ( v_t^{\phi,\chi , q}\bigr)}}}\nu(d\chi ) \quad t\geq0 , f\in{{\ensuremath{\mathbf{c}}}}{{{\bigl([0,\infty),[0,\infty)\bigr)}}}.\ ] ] one might want to read this as `` expectation '' with respect to a non - probability measure . 
however , is not intended to define an operator .the following lemma proves that the virgin island model counted with random characteristics as defined in is finite .[ l : general : finite_moments ] assume [ a : general : finite_moments ] .then , for every , } } } < \infty.\ ] ] furthermore , if then there exists a constant such that }}}}\\ & \leq c_t{{{\bigl(1+\sup_{t\leq t}\int { { \ensuremath{\mathbf{e}}}}{{{\bigl(\phi_\chi^2(t)\bigr)}}}\ , ( \nu+q)(d\chi ) + \int{{{\bigl(\int_0^t a(\chi_s)ds\bigr)}}}^2\nu(d\chi)\bigr ) } } } \end { split}\ ] ] for all and the right - hand side of is finite in the special case . we exploit the branching property formalized in lemma [ l : tree_structure ] and apply gronwall s inequality . recall from the proof of lemma [ l : tree_structure ] . the two equalities and imply } } } = \int { { \ensuremath{\mathbf{e}}}}\phi_\chi(t)\,\nu(d\chi)\leq \sup_{s\leq t } \int { { \ensuremath{\mathbf{e}}}}\phi_\chi(s)\,\nu(d\chi)\ ] ] for and for } } } = \int { { \ensuremath{\mathbf{e}}}}{{{\bigl[\sum_{(s,\psi)\in\pi^{\chi } } { { \ensuremath{\mathbf{e}}}}{{{\bigl[v_{t - s}^{(n-1),\phi,\psi , q}\bigr]}}}\bigr]}}}\nu(d\chi)\\ & = \int { { { \biggl(\int_0^t\int { { \ensuremath{\mathbf{e}}}}{{{\bigl[v_{t - s}^{(n-1),\phi,\psi , q}\bigr]}}}q(d\psi ) a{{{(\chi_s)}}}ds\biggr)}}}\,\nu(d\chi)\\ & \leq \sup_{u\leq t}\int a{{{(\chi_u)}}}\nu(d\chi)\int_0^t { { \ensuremath{\mathbf{i}}}}{{{\bigl[v_{s}^{(n-1),\phi , q , q}\bigr ] } } } \,ds . \end { split}\ ] ] using assumption [ a : general : finite_moments ] induction on shows that all expressions in and in are finite in the case . summing over we obtain } } }\leq \int{{\ensuremath{\mathbf{e}}}}\phi_\chi(u)\nu(d\chi ) + \int_0^t\sum_{n=0}^{n_0}{{\ensuremath{\mathbf{i}}}}{{{\bigl[v_s^{(n),\phi , q , q}\bigr]}}}\int a{{{(\chi_{t - s})}}}\nu(d\chi)\,ds \end { split}\ ] ] for . in the special case gronwalls inequality implies }}}\leq \sup_{u\leq t}\int{{\ensuremath{\mathbf{e}}}}\phi_\chi(u)q(d\chi ) { \ensuremath{{\displaystyle \cdot}}}\exp{{{\bigl ( t \sup_{u\leq t}\int a{{{(\chi_u)}}}q(d\chi)\bigr)}}}.\ ] ] summing over , inserting into and letting we see that follows from assumption [ a : general : finite_moments ] . for the proof of , note that with and imply for some .in addition the two equalities and together with independence imply for and for in the special case induction on together with shows that all involved expressions are finite . 
a similar estimate as in leads to }}}\nu(d\chi ) -\int\bigl({{\ensuremath{\mathbf{e}}}}\sum_{n=0}^{n_0 } v_t^{(n),\phi,\chi , q}\bigr)^2\nu(d\chi)}\\ & = \int { \operatorname{var}}{{{\bigl(\phi_\chi(t)\bigr)}}}+{{\ensuremath{\mathbf{e}}}}{{{\biggl(\sum_{(s,\psi)\in\pi^\chi } { \operatorname{var}}{{{\bigl(\sum_{n=1}^{n_0}v_{t - s}^{(n-1),\phi,\psi , q}\bigr)}}}\biggr)}}}\,\nu(d\chi)\\ & = \int { \operatorname{var}}{{{\bigl(\phi_\chi(t)\bigr)}}}+\int_0^t { { { \bigl(a(\chi_s ) \int { \operatorname{var}}{{{\bigl(\sum_{n=0}^{n_0 - 1}v_{t - s}^{(n),\phi,\psi , q}\bigr ) } } } q(d\psi)\bigr ) } } } ds\,\nu(d\chi)\\ & \leq \int{{\ensuremath{\mathbf{e}}}}{{{\bigl(\phi_\chi^2(t)\bigr)}}}\nu(d\chi)+ \int_0^t \int { { \ensuremath{\mathbf{e}}}}{{{\bigl[\bigl(\sum_{n=0}^{n_0}v_{s}^{(n),\phi,\psi , q}\bigr)^2\bigr ] } } } q(d\psi ) ds { \ensuremath{{\displaystyle \cdot}}}\sup_{u\leq t}\int a(\chi_u)\,\nu(d\chi ) .\end { split}\ ] ] in the special case gronwall s inequality together with leads to }}}q(d\chi)}\\ & \leq{{{\biggl(\int 3{{\ensuremath{\mathbf{e}}}}{{{\bigl(\phi_\chi^2 ( t)\bigr ) } } } + { { \ensuremath{\tilde{c}}}}_t\bigl(\int_0^t a(\chi_s)\,ds\bigr)^2q(d\chi)\biggr ) } } } \exp{{{\bigl(\sup_{u\leq t } \int a(\chi_u)\,q(d\chi)t\bigr ) } } } \end{split}\ ] ] which is finite by assumption [ a : general : finite_moments ] and assumption .inserting into and letting finishes the proof . in the following lemma, we establish an integral equation for the modified laplace transform of the virgin island model .recall the definition of from .[ l : integral_laplace_trafo ] assume [ a : general : finite_moments ] .the modified laplace transform }}} ] reads as in the subcritical case , and are integrable .theorem 5.2.9 in applied to with replaced by implies letting in , dominated convergence and imply inserting results in . in the critical case , similar arguments lead to the last equality follows from with replaced by and corollary 5.2.14 of with , and .note that the assumption of this corollary is not necessary for this conclusion .recall the definition of from and the notation from . as we pointed out in section[ sec : main_results ] , the expected total emigration intensity of the virgin island model plays an important role .the following lemma provides us with some properties of the modified laplace transform of the total emigration intensity .these properties are crucial for our proof of theorem [ thm : general : extinction ] .[ l : fixed_point ] assume [ a : general : finite_man_hours ] . then the function is concave with at most two fixed - points .zero is the only fixed - point if and only if denote by the maximal fixed - point .then we have for all : if for -a.a . , then and zero is the only fixed - point .for the rest of the proof , we assume w.l.o.g . that .the function has finite values because of , , and assumption [ a : general : finite_man_hours ] .concavity of is inherited from the concavity of , . using dominated convergence together with assumption [ a : general : finite_man_hours ], we see that in addition , dominated convergence together with assumption [ a : general : finite_man_hours ] implies }}}q(d\chi)\,\quad z\geq0.\ ] ] hence , is strictly concave .thus , has a fixed - point which is not zero if and only if . 
the implications and follow from the strict concavity of .the method of proof ( cf .section 6.5 in ) of the extinction result for a crump - mode - jagers process is to study an equation for .the laplace transform converges monotonically to as , .furthermore , converges monotonically to the extinction probability as .taking monotone limits in the equation for results in an equation for the extinction probability .in our situation , there is an equation for the modified laplace transform as defined in below .however , the monotone limit of as might be infinite .thus , it is not clear how to transfer the above method of proof .the following proof of theorem [ thm : extinction_vim ] directly establishes the convergence of the modified laplace transform .recall from lemma [ l : fixed_point ] . in the first step , we will prove } } } \to q{\ensuremath{\qquad(\text{as } t\to\infty})}\ ] ] for all . set .it follows from lemma [l : general : finite_moments ] that is bounded for every finite .lemma [ l : integral_laplace_trafo ] with replaced by provides us with the fundamental equation }}}q(d\chi)\quad{\ensuremath{\;\;\forall\;}}t\geq0.\ ] ] based on , the idea for the proof of is as follows .the term vanishes as .if converges to some limit , then the limit has to be a fixed - point of the function }}}q(d\chi).\ ] ] by lemma [ l : fixed_point ] , this function is ( typically strictly ) concave .therefore , it has exactly one attracting fixed - point. furthermore , this fact forces to converge as .we will need finiteness of .looking for a contradiction , we assume . then there exists a sequence with such that .we estimate }}}q(d\chi)\\ & \leq k{{{(l_{t_n}\!+1)}}}+\int \exp{{{\bigl(- \int_0^\infty a{{{\bigl(\chi_s\bigr ) } } } l_{t_n}\,ds\bigr ) } } } { { { \bigl(1-{{\ensuremath{\mathbf{e}}}}e^{-{{\ensuremath{\lambda}}}\phi_\chi(t_n)}\bigr)}}}q(d\chi)\\ & \leq k{{{(l_{t_n}\!+1)}}}+\int { { { \bigl(1-{{\ensuremath{\mathbf{e}}}}e^{-{{\ensuremath{\lambda}}}\phi_\chi(t_n)}\bigr)}}}q(d\chi ) .\end { split}\ ] ] the last summand converges to zero by assumption [ a : convergence_to_zero ] and is therefore bounded by some constant .inequality leads to the contradiction the last equation is a consequence of and the assumption .next we prove using boundedness of .let be a sequence such that .then a calculation as in results in } } } q(d\chi)\\ & \qquad+{{\ensuremath{{\displaystyle \limsup_{n { \rightarrow}\infty}}}}}\int { { { \bigl(1-{{\ensuremath{\mathbf{e}}}}e^{-{{\ensuremath{\lambda}}}\phi_\chi(t_n)}\bigr)}}}q(d\chi ) .\end { split}\ ] ] the last summand is equal to zero by assumption [ a : convergence_to_zero ] .the first summand on the right - hand side of is dominated by which is finite by boundedness of and by assumption [ a : general : finite_man_hours ] . applying dominated convergence, we conclude that is bounded by }}}q(d\chi ) = k{{{\bigl(l_\infty\bigr)}}}.\ ] ] thus , lemma [ l : fixed_point ] implies .assume and suppose that .let be such that as and . by lemma [ l : fixed_point ], there is an and a such that .we estimate }}}q(d\chi)\\ & \geq\int{{{\bigl[1-\exp{{{\bigl ( - c\int_0^{t_{n_0}}a{{{\bigl(\chi_s\bigr ) } } } l_{t_n}\,ds\bigr)}}}\bigr]}}}q(d\chi)\quad{\ensuremath{\;\;\forall\;}}n > n_0 . 
\end { split}\ ] ] using dominated convergence , the assumption results in the contradiction } } }q(d\chi)\\ & = c\int{{{\bigl(\int_0^{t_{n_0 } } a{{{\bigl(\chi_s\bigr)}}}\,ds \bigr)}}}q(d\chi)>1 .\end { split}\ ] ] in order to prove , let be such that .an estimate as above together with dominated convergence yields }}}q(d\chi)\\ & = \int{{{\bigl[1-\exp{{{\bigl ( -\int_{0}^\infty a{{{\bigl(\chi_s\bigr)}}}{{\ensuremath{{\displaystyle \liminf_{t { \rightarrow}\infty}}}}}\ , l_t\,ds\bigr)}}}\bigr]}}}q(d\chi ) = k(m ) . \end { split}\ ] ] therefore ,lemma [ l : fixed_point ] implies , which yields . finally , we finish the proof of theorem [ thm : general : extinction ] . applying lemma [ l : integral_laplace_trafo ] , we see that } } } -\int{{{\bigl[1-\exp{{{\bigl(-q\int_0^\infty a(\chi_s)\,ds\bigr)}}}\bigr]}}}\nu(d\chi)\bigr|}}}}\\ & \leq\int\exp{{{\bigl(-\int_0^\infty l_{t - s}a{{{(\chi_s)}}}\,ds\bigr ) } } } { { \ensuremath{\mathbf{e}}}}{{{\bigl[1- e^{-{{\ensuremath{\lambda}}}\phi_\chi(t)}\bigr]}}}\nu(d\chi)\\ & \qquad\,+\left|\int{{{\bigl[1-\exp{{{\bigl(-\int_0^\infty l_{t - s } a{{{(\chi_s)}}}\,ds\bigr)}}}\bigr]}}}\nu(d\chi)\right.\\ & \qquad\ , -\left.\int{{{\bigl[1-\exp{{{\bigl(-q\int_0^\infty a{{{(\chi_s)}}}\,ds\bigr)}}}\bigr ] } } } \nu(d\chi)\right| .\end { split}\ ] ] the first summand on the right - hand side of converges to zero as by assumption [ a : convergence_to_zero ] . by the first step , as .hence , by the dominated convergence theorem and assumption [ a : general : finite_man_hours ] , the left - hand side of converges to zero as .as is a probability measure by assumption , we conclude this implies theorem [ thm : general : extinction ] as the laplace transform is convergence determining , see e.g. lemma 2.1 in .our proof of theorem [ thm : general : xlogx ] follows the proof of doney ( 1972 ) for supercritical crump - mode - jagers processes .some changes are necessary because the recursive equation differs from the respective recursive equation in .parts of our proof are analogous to the proof in which we nevertheless include here for the reason of completeness .lemma [ l : lemma52 ] and lemma [ l : convergence_if_xlogx_fails ] below contain the essential part of the proof of theorem [ thm : general : xlogx ] .for these two lemmas , we will need auxiliary lemmas which we now provide .we assume throughout this section that a solution of equation exists .note that this is implied by [ a : general : finite_man_hours ] and .recall the definition of from .for , define }}}q(d\chi)\ ] ] for .[ l : h_contracting ] the operator is contracting in the sense that for all .the lemma follows immediately from and from the definition of .[ l : h_nondecreasing ] the operator is nondecreasing in the sense that for all if for all .the lemma follows from being increasing in for every . for every measurable function , define } } } q(d\chi ) . \end { split}\ ] ] for and where , .if is a function of one variable , then we set where for , .[ l : barh_nondecreasing ] the operator is nondecreasing in the sense that for all and if for all , .the assertion follows from the basic fact that is nondecreasing .[ l : eta_nondecreasing ] assume [ a : general : finite_man_hours ] . let be the identity map .the function is nonnegative and nondecreasing .furthermore , .recall the definition of from . by equation, we have .thus , is nonnegative .furthermore , follows from the dominated convergence theorem and assumption [ a : general : finite_man_hours ] .let . 
then the inequality follows from for all .the following lemma , due to athreya , translates the -condition into an integrability condition on . for completeness, we include its proof .[ l : properties_of_eta ] assume [ a : general : finite_man_hours ] .let be the function defined in .then for some and then all , if and only if the -condition holds .by monotonicity of ( see lemma [ l : eta_nondecreasing ] ) , the two quantities in are finite or infinite at the same time .fix . using fubini s theorem and the substitution , we obtain }}}{{{\bigl(a_{{\ensuremath{\alpha}}}\bigr)}}}^2\,d{{\ensuremath{\lambda}}}\biggr]}}}dq\\ & = \int{{{\biggl[a_{{\ensuremath{\alpha}}}\int_0^{ca_{{\ensuremath{\alpha}}}}\frac{v-1+e^{-v}}{v^2 } \,dv\biggr]}}}dq . \end { split}\ ] ] it is a basic fact that as . in the following two lemmas , we consider uniqueness and existence of a function which satisfies : where is as in lemma [ l : fixed_point ] .notice that the zero function does not satisfy ( c ) .first , we prove uniqueness . [l : uniqueness_psi ] assume [ a : general : finite_man_hours ] and .if and satisfy , then .notice that .define for and note that by ( c ) . from lemma[ l : h_contracting ] , we obtain for where is a probability measure because solves equation .let , , be independent variables with distribution and note that .we may assume that because implies for .iterating inequality , we arrive at the convergence in follows from the weak law of large numbers . [ l : existence_psi ] assume [ a : general : finite_man_hours ] and .there exists a solution of if and only if the -condition holds .assume that holds .define , for and for and .recall and from the proof of lemma [ l : uniqueness_psi ] .note that because of .arguments as in the proof of lemma [ l : uniqueness_psi ] imply where for . since by lemma [ l : eta_nondecreasing ] and we see that .in addition , we conclude from that . by lemma [ l : h_nondecreasing ] , this implies inductively for , .let .we need to prove that . clearly , so we can choose with .then define .it follows from , , lemma [ l : eta_nondecreasing ] and lemma [l : properties_of_eta ] that for all thus , is a cauchy sequence in ] . using ( b ) , for ] let be such that \bigr)}}}<1 ] this implies that .therefore , by lemma [ l : properties_of_eta ] , the -condition holds .recall , , and from , , and , respectively . asbefore , let .define and for and .the following two lemmas follow lemma 5.1 and lemma 5.2 , respectively , in .[ l : lemma51 ] assume [ a : general : finite_moments ] , [ a : general : finite_man_hours ] , [ a : for_skdist ] and .if the -condition holds , then as . 
inserting the definitions and of and , respectively , into, we see that }}}\geq 0 \qquad { { \ensuremath{\lambda}}},t>0\ ] ] is nonnegative where , .insert the recursive equations and for and , respectively , into to obtain }}}q(d\chi)\\ & = \int_0^t{{{\bigl [ \frac{m^q(t - s)}{e^{{{\ensuremath{\alpha}}}(t - s)}{\bar{m } } } -\frac{1}{{{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s } } l_{t - s}{{{\bigl(\frac{{{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s}}{e^{{{\ensuremath{\alpha}}}(t - s)}{\bar{m}}}\bigr)}}}\bigr ] } } } \mu^q_{{\ensuremath{\alpha}}}(ds)\\ & \quad + \frac{1}{{{\ensuremath{\lambda}}}}\int_0^t l_{t - s}{{{\bigl(\frac{{{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s}}{e^{{{\ensuremath{\alpha}}}(t - s)}{\bar{m}}}\bigr ) } } } \mu^q(ds)\\ & \qquad\quad-\frac{1}{{{\ensuremath{\lambda}}}}\int{{{\bigl[1- \exp{{{\bigl(-\int_0^t a{{{\bigl(\chi_s\bigr ) } } } l_{t - s}{{{\bigl(\frac{{{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s}}{e^{{{\ensuremath{\alpha}}}(t - s)}{\bar{m}}}\bigr)}}}\,ds\bigr)}}}\bigr]}}}q(d\chi)\\ & \quad + \int { { \ensuremath{\mathbf{e}}}}{{{\bigl[\frac{1-\exp{{{\bigl(-{{\ensuremath{\lambda}}}\frac{\phi_\chi(t)}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}\bigr)}}}}{{{\ensuremath{\lambda}}}}\bigr ] } } } { { { \bigl[1- \exp{{{\bigl(-\int_0^t a{{{\bigl(\chi_s\bigr ) } } } l_{t - s}{{{\bigl(\frac{{{\ensuremath{\lambda}}}}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}\bigr)}}}\,ds\bigr)}}}\bigr]}}}q(d\chi)\\ & \quad+ \frac{1}{{{\ensuremath{\lambda}}}}\int { { \ensuremath{\mathbf{e}}}}{{{\biggl[\frac{{{\ensuremath{\lambda}}}\phi(t,\chi)}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}-1 + \exp{{{\bigl(-\frac{{{\ensuremath{\lambda}}}\phi(t,\chi)}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}\bigr)}}}\biggr ] } } } q(d\chi)\\ & = : \int_0^t d({{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s},t - s)\mu^q_{{\ensuremath{\alpha}}}(ds ) + \frac{1}{{{\ensuremath{\lambda}}}}\bar{h}_{{\ensuremath{\alpha}}}{{{\bigl((t,{{\ensuremath{\lambda}}})\mapsto l_t{{{\bigl(\frac{{{\ensuremath{\lambda}}}}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}\bigr)}}}\bigr ) } } } + t_1+t_2 \end { split}\ ] ] where and are suitably defined . inequality implies where is a finite constant .the last inequality is a consequence of theorem [ thm : general : expected_man_hours ] , equation , with replaced by .lemma [ l : barh_nondecreasing ] together with implies using , inequality and , , we see that the expressions and are bounded above by }}}q(d\chi ) \leq c_2{{\ensuremath{\lambda}}}\\ t_2&\leq\frac{{{\ensuremath{\lambda}}}}{2}\int{{\ensuremath{\mathbf{e}}}}{{{\bigl(\frac{\phi_\chi(t)}{e^{{{\ensuremath{\alpha}}}t}{\bar{m}}}\bigr)}}}^2 q(d\chi ) \leq c_3 { { \ensuremath{\lambda}}}\end { split}\ ] ] for all where are finite constants which are independent of and .such constants exist by assumption [ a : for_skdist ] .taking supremum over ] .then , by monotonicity of , for all . hence , is bounded by where and .iterate this inequality to obtain now we need to prove .looking at and using , we see that is bounded by }}} ] , see , implies by the law of large numbers , we know that for large a.s . hence , where .therefore , the -condition holds by lemma [ l : properties_of_eta ] .this finishes the proof .assume that the -condition holds .insert into and use assumption [ a : convergence_to_zero ] to obtain } } } { \xrightarrow{t { \rightarrow}\infty}}\int{{{\biggl[\exp{{{\bigl(-\int_0^\infty \psi({{\ensuremath{\lambda}}}e^{-{{\ensuremath{\alpha}}}s } ) a(\chi_s)\,ds\bigr)}}}\biggr ] } } } \nu(d\chi)\ ] ] for . 
for this , we applied the dominated convergence theorem together with assumption [ a : general : finite_man_hours ] .denote the right - hand side of by and note that is continuous and satisfies .a standard result , e.g. lemma 2.1 in , provides us with the existence of a random variable such that for all .this proves the weak convergence as the laplace transform is convergence determining .note that }}}\nu(d\chi)\ ] ] by the dominated convergence theorem .furthermore , }}}\,\nu(d\chi).\ ] ] if the -condition fails to hold , then }}}\to 0 ] .hence , the sequence of paths converges locally uniformly to the path almost surely as .therefore , the dominated convergence theorem implies putting and together , we arrive at which proves the theorem .we will employ lemma [ l : conditioned_on_t_eps ] to calculate explicit expressions for some functionals of .for example , we will prove in lemma [ l : q_explicit_man_hours ] together with lemma [ l : finite_man_hours ] that provided that assumptions [ a : a1 ] , [ a : a2 ] and [ a : finite_man_hours ] hold .equation shows that condition and condition are equivalent .the following lemmas prepare for the proof of .[ l : mit_t_b ] assume [ a : a1 ] and [ a : a2 ] .let have compact support in . furthermore , let the continuous function be nonnegative and nondecreasing .then } } } { \xrightarrow{y{\rightarrow}0}}\int{{{\biggl[{{{\bigl ( \int_0^{t_b}\psi(s)f(\chi_s)\,ds\bigr)}}}^m\biggr]}}}{\bar{q}_y}(d\chi)\ ] ] for every and .w.l.o.g .assume .let be such that and let .using lemma [ l : conditioned_on_t_eps ] , we see that the left - hand side of is equal to } } } = \frac{1}{{\bar{s}}({{\ensuremath{\varepsilon}}})}{{\ensuremath{\mathbf{e}}}}^y{{{\biggl[{{{\bigl(\int_0^{t_b}\psi(s ) f({{\ensuremath{\hat{y}}}}^{{{\ensuremath{\varepsilon}}}}_s)\,ds\bigr)}}}^m\biggr]}}}\\ & = \frac{1}{{\bar{s}}({{\ensuremath{\varepsilon}}})}{{\ensuremath{\mathbf{e}}}}^0{{{\biggl[{{{\bigl(\int_{t_y}^{t_b}\psi(s - t_y ) f({{\ensuremath{\hat{y}}}}^{{{\ensuremath{\varepsilon}}}}_{s})\,ds\bigr)}}}^m\biggr ] } } } { \xrightarrow{y{\rightarrow}0}}\int{{{\biggl(\int_0^{t_b}\psi(s ) f(\chi_s)\,ds\biggr)}}}^m{\bar{q}_y}(d\chi ) .\end { split}\ ] ] the second equality is the strong markov property of and the change of variable . for the convergence, we applied the monotone convergence theorem . the explicit formula on the right - hand side of originates in the explicit formula below , which we recall from the literature .[ l : explicit_man_hours ] assume [ a : a1 ] and [ a : a2 ] . if or , then for all .see e.g. section 15.3 of karlin and taylor .let be a markov process with cdlg sample paths and state space which is a polish space .for an open set , denote by the first exit time of from the set .notice that is a stopping time . for , define }}},\ y\in e , m\in{{\ensuremath{\mathbbm{n}}}}_0,\ ] ] for a given function . 
in the following lemma , we derive an expression for for which lemma [ l : explicit_man_hours ] is applicable .[ l : conversion ] let be a time - homogeneous markov process with cdlg sample paths and state space which is a polish space .let be as in with an open set and with a function .then } } } \!\!\!\!&=&\!\!\!\!{{\ensuremath{\mathbf{e}}}}^y{{{\bigl(\int_0^{\tau}2 f({{\ensuremath{\tilde{y}}}}_s)w_1({{\ensuremath{\tilde{y}}}}_s)\,ds\bigr ) } } } \label{eq : wzz } \end{aligned}\ ] ] for all .let be fixed .for the proof of , we apply fubini to obtain the last equality follows from fubini and a change of variables .the stopping time can be expressed as with a suitable path functional .furthermore , satisfies for .therefore , the right - hand side of is equal to }}}\bigr)}}}\,dr = { { \ensuremath{\mathbf{e}}}}^y{{{\bigl(\int_0^{\tau}w_1({{\ensuremath{\tilde{y}}}}_r)\,dr\bigr)}}}. \end { split}\ ] ] the last but one equality is the markov property of .this proves .for the proof of , break the symmetry in the square of to see that is equal to }}}dr\bigr)}}}}\\ & = 2\int_0^{\infty}{{\ensuremath{\mathbf{e}}}}^y{{{\bigl({{\ensuremath{\mathbbm{1}}}}_{r<\tau}f({{\ensuremath{\tilde{y}}}}_r ) { { \ensuremath{\mathbf{e}}}}^{{{\ensuremath{\tilde{y}}}}_r}{{{\bigl[\int_0^{\tau}f({{\ensuremath{\tilde{y}}}}_{s})\,ds\bigr]}}}\bigr)}}}\,dr = { { \ensuremath{\mathbf{e}}}}^y{{{\bigl(\int_0^{\tau}2f({{\ensuremath{\tilde{y}}}}_s)w_1({{\ensuremath{\tilde{y}}}}_s)\,ds\bigr)}}}. \end { split}\ ] ] this finishes the proof .we will need that dies out in finite time .the following lemma gives a condition for this .recall .[ l : finite_time_extinction ] assume [ a : a1 ] and [ a : a2 ] .let .then the solution of equation hits zero in finite time almost surely if and only if .if , then converges to infinity as on the event almost surely . on the event , we have that almost surely .the last inequality follows from lemma 15.6.2 of and assumption [ a : a2 ] .therefore , theorem 2 of jagers implies that , with probability one , either hits zero in finite time or converges to infinity as . with equation ,we obtain this proves the assertion .the following lemma makes assumption [ a : finite_man_hours ] more transparent .it proves that [ a : finite_man_hours ] holds if and only if the expected area under is finite .l : finite_man_hours ] assume [ a : a1 ] and [ a : a2 ] .assumption [ a : finite_man_hours ] holds if and only if if assumption [ a : finite_man_hours ] holds , then and for all and with .let be the constants from [ a : a1 ] . in equation ,let and apply monotone convergence to obtain } } } \frac{{\bar{s}}(y\wedge z)}{g(z){\bar { s}}(z)}\bigr ) } } } \,dz .\end { split}\ ] ] hence , if assumption [ a : finite_man_hours ] holds , then assumption [ a : a2 ] implies that the right - hand side of is finite because , .therefore , the left - hand side of with replaced by is finite .together with , this implies that does not converge to infinity with positive probability as .thus lemma [ l : finite_time_extinction ] implies and equation implies .now we prove that assumption [ a : finite_man_hours ] holds if the left - hand side of with replaced by is finite .again , and lemma [ l : finite_time_extinction ] imply . using monotonicity of , we obtain for the right - hand side is finite because with replaced by is finite .therefore , assumption [ a : finite_man_hours ] holds .[ l : bs_durch_y_bounded ] assume [ a : a1 ] , [ a : s_bar ] and let . 
if , then it suffices to prove because is locally bounded in and by assumption [ a : s_bar ] . by assumption [ a : a1 ] , for all and a constant .let .thus , the last inequality follows from , .consequently , the proof of the second equation in is similar to the proof of the lemma of lhospital . from , we conclude which implies this finishes the proof .now we prove equation .recall .define and for .if , then is the monotone limit of the right - hand side of as .[ l : q_explicit_man_hours ] assume [ a : a1 ] , [ a : a2 ] and .let .then for .if [ a : finite_man_hours ] holds and if is bounded , then is finite for .if [ a : finite_man_hours_squared ] holds and if is bounded , then is finite for .choose with compact support in for every such that as .fix and .lemma [ l : mit_t_b ] proves that } } } = \int{{{\bigl(\int_0^{t_b } f_{{\ensuremath{\varepsilon}}}{{{(\chi_s)}}}\,ds\bigr)}}}^m { \bar{q}_y}{{{(d\chi)}}}. \end { split}\ ] ] let be as in with replaced by and replaced by . fix .lemma [ l : conversion ] and lemma [l : explicit_man_hours ] provide us with an expression for the left - hand side of equation .hence , the last equation follows from dominated convergence and assumption [ a : a2 ] .note that the hitting time as for every continuous path . by lemma [ l : explicit_man_hours ] andthe monotone convergence theorem , as .let , and apply monotone convergence to arrive at equation .similar arguments prove . instead of , consider which is implied by lemma [ l : mit_t_b ] .furthermore , instead of applying lemma [l : explicit_man_hours ] to equation , apply equation together with equation . forthe rest of the proof , assume that is bounded by .let be the constants from [ a : a1 ] .note that .consider the right - hand side of .if , then the integral over is finite by assumption [ a : finite_man_hours ] . if , then the integral over is finite by assumption [ a : finite_man_hours_squared ] .the integral over is finite because of [ a : a2 ] and ,\ ] ] where is a finite constant .the last inequality in follows from lemma [ l : bs_durch_y_bounded ] .the convergence of theorem [ thm : existence_excursion_measure ] also holds for , fixed , if is a bounded function . for this, we first estimate the moments of .[ l : finite_moments_y ] assume [ a : a1 ] .let be a solution of equation and let be finite .then , for every , there exists a constant such that }}}\leq c_t{y},\quad { { \ensuremath{\mathbf{e}}}}^y{{{\bigl[\sup_{t\leq t } { y_t}^n\bigr]}}}\leq c_t({y}+{y}^n ) \end { split}\ ] ] for all and every stopping time .the proof is fairly standard and uses it s formula and doob s -inequality .[ l : functional_fixed_t ] assume [ a : a1 ] , [ a : a2 ] and [ a : s_bar ] . let be a continuous function such that for some , some constant and for all . if , then }}}\ ] ] is bounded in .choose with compact support in for every such that pointwise as . by theorem [ thm : existence_excursion_measure ], the left - hand side of converges to the left - hand side of as by the monotone convergence theorem .hence , the first equality in follows from if the limits and can be interchanged . 
for this, we prove the second equality in .let .the -diffusion is a strong markov process .thus , by , } } } = { { \ensuremath{{\displaystyle \lim_{y { \rightarrow}0}}}}}{{\ensuremath{\mathbf{e}}}}^y{{{\bigl[\frac{f(y_t^{\uparrow})}{{\bar{s}}(y_t^{\uparrow})}{{\ensuremath{\mathbbm{1}}}}_{t < t_b}\bigr]}}}}\\ & = { { \ensuremath{\mathbf{e}}}}^0{{{\bigl[{{\ensuremath{{\displaystyle \lim_{y { \rightarrow}0}}}}}\frac{f(y_{t+t_y}^{\uparrow})}{{\bar{s}}(y_{t+t_y}^{\uparrow } ) } { { \ensuremath{\mathbbm{1}}}}_{{t+t_y}<t_b}\bigr ] } } } = { { \ensuremath{\mathbf{e}}}}^0{{{\bigl[\frac{f(y_t^{\uparrow})}{{\bar{s}}(y_t^{\uparrow})}{{\ensuremath{\mathbbm{1}}}}_{t < t_b}\bigr]}}}. \end { split}\ ] ] the second equality follows from the dominated convergence theorem because of right - continuity of the function implies the last equality in .now we let in and apply monotone convergence to obtain } } } = { { \ensuremath{\mathbf{e}}}}^0{{{\bigl[\frac{f(y_t^{\uparrow})}{{\bar{s}}(y_t^{\uparrow})}{{\ensuremath{\mathbbm{1}}}}_{t < t_\infty}\bigr]}}}.\ ] ] the following estimate justifies the interchange of the limits and }}}\bigr|}}}}\\ & \leq c_f{{\ensuremath{{\displaystyle \lim_{b { \rightarrow}\infty}}}}}\sup_{y\leq 1}\frac{1}{{\bar{s}}(y ) } { { \ensuremath{\mathbf{e}}}}^y { { { \bigl[y_t\vee y_t^n { { \ensuremath{\mathbbm{1}}}}_{\sup_{s\leq t}y_s \geqb}\bigr]}}}\\ & \leq c_f{{\ensuremath{{\displaystyle \lim_{b { \rightarrow}\infty}}}}}\frac{1}{b}\sup_{y\leq 1}\frac{y}{{\bar{s}}(y ) } \sup_{y\leq 1}\frac{1}{y } { { \ensuremath{\mathbf{e}}}}^y{{{\bigl [ \sup_{s\leq t}{{{\bigl(y_s^2+y_s^{n+1}\bigr)}}}\bigr]}}}=0 . \end { split}\ ] ] the last equality follows from and from lemma [ l : finite_moments_y ] . putting and together , we get } } } = { { \ensuremath{{\displaystyle \lim_{b { \rightarrow}\infty}}}}}\,{{\ensuremath{{\displaystyle \lim_{y { \rightarrow}0}}}}}\frac{1}{{\bar{s}}(y)}{{\ensuremath{\mathbf{e}}}}^y { { { \bigl[f(y_t){{\ensuremath{\mathbbm{1}}}}_{t < t_b}\bigr ] } } } = { { \ensuremath{\mathbf{e}}}}^0{{{\bigl[\frac{f(y_t^{\uparrow})}{{\bar{s}}(y_t^{\uparrow})}{{\ensuremath{\mathbbm{1}}}}_{t < t_\infty}\bigr]}}}. \end { split}\ ] ] note that is bounded in because of and lemma [ l : bs_durch_y_bounded ] .we finish the proof of the first equality in by proving that the limits and on the right - hand side of interchange . } } } = { { \ensuremath{{\displaystyle \lim_{{{\ensuremath{\varepsilon}}}{\rightarrow}0}}}}}\,{{\ensuremath{\mathbf{e}}}}^0{{{\bigl[\frac{f(y_t^{\uparrow})-f_{{{\ensuremath{\varepsilon}}}}(y_t^{\uparrow})}{{\bar{s}}(y_t^{\uparrow } ) } { { \ensuremath{\mathbbm{1}}}}_{t < t_\infty}\bigr ] } } } = 0 . \end { split}\ ] ] the first equality is with replaced by .the last equality follows from the dominated convergence theorem .the function converges to for every as .note that almost surely for .integrability of follows from finiteness of .we have settled equation in lemma [ l : q_explicit_man_hours ] ( together with lemma [ l : finite_man_hours ] ) .a consequence of the finiteness of this equation is that . 
in the proof of the extinction result for the virgin island model, we will need that converges to zero as .this convergence will follow from equation if is globally upward lipschitz continuous .we already know that this function is bounded in by lemma [ l : functional_fixed_t ] .[ l : integrand_ist_nullfolge ] assume [ a : a1 ] , [ a : a2 ] and [ a : s_bar ] .let .if , then we will prove that the function is globally upward lipschitz continuous .the assertion then follows from the finiteness of with replaced by and with .recall , and from the proof of lemma [ l : finite_moments_y ] . from and it s lemma , we obtain for and where . letting and then , we conclude from the dominated convergence theorem , lemma [ l : finite_moments_y ] and lemma [ l : functional_fixed_t ] that }}}\,dr \leq { { \ensuremath{\tilde{c}}}}c_s{{\ensuremath{|t - s|}}}\ ] ] for some constant .the last inequality follows from lemma [ l : bs_durch_y_bounded ] .inequality implies upward lipschitz continuity which finishes the proof .we will derive theorem [ thm : extinction_vim ] from theorem [ thm : general : extinction ] and theorem [ thm : expected_man_hours ] from theorem [ thm : general : expected_man_hours ] .thus , we need to check that assumptions [ a : general : finite_moments ] , [ a : general : finite_man_hours ] and [ a : convergence_to_zero ] with , and hold under [ a : a1 ] , [ a : a2 ] , [ a : s_bar ] and [ a : finite_man_hours ] . recall that and .assumption [ a : general : finite_moments ] follows from lemma [ l : finite_moments_y ] and lemma [ l : functional_fixed_t ] .lemma [ l : finite_man_hours ] and lemma [ l : q_explicit_man_hours ] imply [ a : general : finite_man_hours ] .lemma [ l : finite_time_extinction ] together with lemma [ l : finite_man_hours ] implies that hits zero in finite time almost surely . the second assumption in [ a : convergence_to_zero ]is implied by lemma [ l : integrand_ist_nullfolge ] with and assumption [ a : finite_man_hours ] . by theorem [ thm : general : extinction ] , we now know that the total mass process dies out if and only if condition is satisfied . however , by lemma [ l : q_explicit_man_hours ] with and , condition is equivalent to condition .this proves theorem [ thm : extinction_vim ] for an application of theorem [ thm : general : expected_man_hours ] , note that and are integrable by lemma [ l : finite_man_hours ] and lemma [ l : q_explicit_man_hours ] , respectively .in addition , lemma [ l : finite_man_hours ] and lemma [ l : q_explicit_man_hours ] show that similar equations hold for and . moreover , the denominators in and are equal by lemma [ l : q_explicit_man_hours ] , equation , together with lemma [ l : finite_man_hours ] .therefore , equations and follow from equations and , respectively . in the supercritical case , holds because of and lemma [ l : integrand_ist_nullfolge ] with together with assumption [ a : finite_man_hours ] .furthermore , lemma [ l : functional_fixed_t ] together with lemma [ l : bs_durch_y_bounded ] and the dominated convergence theorem implies continuity of . therefore , theorem [ thm : general : expected_man_hours ] implies which together with reads as .theorem [ thm : xlogx ] is a corollary of theorem [ thm : general : xlogx ] . 
for this , we need to check [ a : for_skdist ] .the expression in is finite because of lemma [ l : functional_fixed_t ] with and assumption [ a : finite_man_hours_squared ] .assumption [ a : a1 ] provides us with for all and some .thus , which is finite by [ a : finite_man_hours_squared ] .lemma [ l : integrand_ist_nullfolge ] with and lemma [ l : finite_moments_y ] show that is bounded in .furthermore , hlder s inequality implies }}}{\bar{q}_y}(d\chi)\bigr)}}}^2 \leq\int \chi_t^2\,{\bar{q}_y}(d\chi ) \int{{{\bigl(\int_0^\infty a(\chi_s)\,ds\bigr)}}}^2{\bar{q}_y}(d\chi)\ ] ] which is bounded in because of lemma [l : q_explicit_man_hours ] with , and because of assumption [ a : finite_man_hours_squared ] .therefore , we may apply theorem [ thm : general : xlogx ] . note that the limiting variable is not identically zero because of the right - hand side is finite because of lemma [ l : q_explicit_man_hours ] with , and because of assumption [ a : finite_man_hours_squared ] .
A continuous-mass population model with local competition is constructed in which every emigrant colonizes an unpopulated island. The population founded by an emigrant is modeled as an excursion from zero of a one-dimensional diffusion. With this excursion measure, we construct a process which we call the virgin island model. A necessary and sufficient condition for extinction of the total population is obtained for finite initial total mass.
cooperative behavior is essential for the maintenance of public resources and for their preservation for future generations .however , human cooperation is often threatened by the lure of short - term advantages that can be accrued only by means of freeriding and defecting .bowing to such temptations leads to an unsustainable use of common resources , and ultimately such selfish behavior may lead to the `` tragedy of the commons '' .there exist empirical and theoretical evidence in favor of the fact that our climate is subject to exactly such a social dilemma . andrecent research concerning the climate change has revealed that it is in fact the risk of a collective failure that acts as perhaps the strongest motivator for cooperative behavior .the most competent theoretical framework for the study of such problems , inspired by empirical data and the fact that failure to reach a declared global target can have severe long - term consequences , is the so - called collective - risk social dilemma .as the name suggests , this evolutionary game captures the fact discovered in the experiments that a sufficiently high risk of a collective failure can significantly elevate the chances for coordinating actions and for altogether avoiding the problem of vanishing public goods .recent research concerning collective - risk social dilemmas has revealed that complex interaction networks , heterogeneity , wealth inequalities as well as migration can all support the evolution of cooperation ( for a comprehensive review see ) . moreover , sanctioning institutions can also promote public cooperation . more specifically , it has been shown that a decentralized , polycentric , bottom - up approach involving multiple institutions instead of a single , global one provides significantly better conditions both for cooperation to thrive as well as for the maintenance of such institutions in collective - risk social dilemmas .voluntary rewards have also been shown to be effective means to overcome the coordination problem and to ensure cooperation , even at small risks of collective failure .the study of collective - risk social dilemmas can thus inform relevantly on the mitigation of global challenges , such as the climate change , but it is also important to make further steps towards more realistic and sophisticated models , as outlined in the recent review by pacheco et al . and several enlightening commentaries that appeared in response . herewe consider the collective - risk social dilemma , where in addition to the standard personal endowments , players own additional assets that are prone to being lost if the collective target is not reached . indeed , individual asset has been considered in the behavioral experiments regarding the collective - risk social dilemma .however , different from the experimental study that investigates the interaction between wealth heterogeneity and meeting intermediate climate targets , we here explore in detail whether and how the so - called risky assets provide additional incentives for individuals to cooperate in well - mixed and structured populations .it is important to emphasize that , within our setup , individuals might lose more from a failed collective action than they can gain if the same action is successful .naturally , this constitutes an important feedback for the selection of the most appropriate strategy . 
a simple example from real life to illustrate the relevance of our approach is as follows : imagine farmers living around a river that often floods .the farmers needs to invest into a dam to prevent the floods from causing damage .if the farmers cooperate and successfully build the dam , they will be able to enjoy the harvest .however , if the farmers fail to build the dam , they will lose not only the harvest , but they will also incur property damage to their fields , houses and stock .further to the motivation of our research , it is also often the case that individuals have limited investment capabilities , which they have to carefully distribute among many groups . in other words, individuals may participate in several collective - risk social dilemmas , for example in each with a constant contribution .rationally , however , individuals tend to allocate their asset into groups so as to avoid , or at least minimize , potential losses based on the information concerning risk in the different groups . to account for these realistic scenarios , we consider the collective - risk social dilemma with risky assets in finite and infinite well - mixed populations , as well as in structured populations .we first explore how the introduction of risky assets affects the evolutionary dynamics in well - mixed populations , where we observe new stable and unstable mixed steady states , whereby the stable mixed steady state converges to full cooperation in dependence on the risk .subsequently , we turn to structured populations , where the distribution of assets amongst the groups where players are members becomes crucial . in general, we will show that the introduction of risky assets can promote the evolution of cooperation even at low risks , both in well - mixed and in structured populations , and by doing so thus contributes to the resolution of collective - risk social dilemmas .we first consider the simplest collective - risk social dilemma game with constant individual assets . from a well - mixed population , players are chosen randomly to form a group for playing the game . in the group ,each player with the amount of asset can choose to cooperate with strategy or defect with strategy .cooperators contribute a cost to the collective target while defectors contribute nothing . if all the contributions within the group either reach or exceed the collective target , each player within the group obtains the benefit , such that the net payoff is .however , if the collective target is not reached , all the players within the group lose their investment and asset with probability , such that the net payoff is then , while with probability the payoff remains the same if the collective target is reached .based on these definitions , the payoff of player with strategy in a group with cooperators is \nonumber \\&-&ra[1-\theta(j - t)]-cs_y,\end{aligned}\ ] ] where if and otherwise .we emphasize that an individual will suffer from a risk to lose everything ( the investment and the asset ) it has , if the collective target is not reached in the minimal model .this is in line with the original definition of the collective - risk social dilemma .different from the original model , however , in our case the asset together with the investment can be more than the expected benefit of mutual cooperation . 
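to make the minimal model concrete, the short python sketch below evaluates the expected payoff of a focal player in a single group, following the verbal description above: if the number of cooperators reaches the target, every member keeps the benefit; otherwise, with probability 1-r the payoff is as if the target had been met, while with probability r the benefit is lost and the risky asset is forfeited as well. the function name, the symbols b, c, a, r, t and the numerical values are illustrative assumptions rather than values taken from the study.

```python
def expected_payoff(cooperates, n_cooperators, b=1.0, c=0.25, a=0.5, r=0.5, T=3):
    """Expected payoff of a focal player in one group of the minimal
    collective-risk game with a risky asset a.

    cooperates    : True if the focal player contributes the cost c
    n_cooperators : number of cooperators in the group (focal player included)
    T             : collective target, expressed as a number of cooperators
    r             : probability of collective loss when the target is missed
    """
    cost = c if cooperates else 0.0
    if n_cooperators >= T:                      # theta(j - T) = 1
        return b - cost
    # target missed: with prob. (1 - r) the benefit is still obtained,
    # with prob. r both the benefit and the risky asset a are lost
    return (1.0 - r) * b - r * a - cost


# illustrative gain from cooperating, given j cooperators among the co-players
for j in range(5):
    print(j, expected_payoff(True, j + 1) - expected_payoff(False, j))
```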
as argued in the introduction , such scenarios do exist in reality , and as we will show in the results section , the risky assets influence significantly the evolutionary dynamics in both well - mixed and structured populations .we also refer to the appendix for details with regards to the performed analysis .we here extend the collective - risk social dilemma game with risky assets to be played on the square lattice with periodic boundary conditions , where players are arranged into overlapping groups of size such that everyone is connected to its four nearest neighbors .accordingly , each individual belongs to five different groups and it participates in five collective - risk games .concerned for the loss of its assets , each individual aims to transfer these assets into those groups that have a lower probability to fail to reach the collective target . with the information at hand from the previous round of the game , player at time transfers the asset into the group centered on player according to ^{\alpha}}{\sum_{n\in g_y}[1-r_n(t-1)]^{\alpha}},\ ] ]where if at time the number of cooperators and otherwise , and is the allocation strength of the asset . here , we mainly consider , given that players generally prefer to allocate their asset into a relatively safe environment .we note that means allocating the asset equally into all the groups without taking into account the information about risk .accordingly , we will refer to this allocation scheme as uniform or equal .conversely , means that individuals allocate their assets only into the most successful groups. we will refer to this as the fully rational allocation scheme .lastly , for , we have the so - called bounded rational allocation of assets . in agreement with the above definitions ,the payoff of player at time with strategy and being member of the group that is centered on player is +b(1-r)\{1-\theta[n_c^m(t)-t]\ } \nonumber \\&-&ra_y^m(t)\{1-\theta[n_c^m(t)-t]\}-cs_y(t).\end{aligned}\ ] ] the total payoff at time is then simply the accumulation of payoffs from each of the five individual groups where player is member , given as .after the accumulation of payoffs as described above , each player is allowed to learn from one randomly chosen neighbor with a probability given by the fermi function ^{-1},\ ] ] where in agreement with the settings in finite well - mixed populations ( see appendix for details ) , we use the intensity of selection .further with regards to the simulations details , we note that initially each player is designated either as a cooperator or defector with equal probability , and it equally allocates its asset into all the groups in which it is involved .monte carlo simulations are carried out for the population on the square lattice .we emphasize there exist ample evidence , especially for games that are governed by group interactions , in favor of the fact that using the square lattice suffices to reveal all the relevant evolutionary outcomes . 
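the two ingredients of the spatial model, the risk-driven allocation of assets among the overlapping groups and the fermi imitation rule, can be sketched in a few lines of python. the allocation weights follow the expression given above, with the group risk indicator equal to 1 if the group missed the target in the previous round and 0 otherwise; the fallback used when every group failed, and the exact form of the fermi function (taken to be the same pairwise-comparison form as in the well-mixed case), are assumptions on our part, as are the parameter values.

```python
import numpy as np

def allocate_assets(a_total, group_failed, alpha):
    """Split a player's asset among its groups, weighting group m by
    [1 - R_m]**alpha, where R_m = 1 if group m missed the target in the
    previous round.  alpha = 0 gives the uniform split; large alpha
    approaches the fully rational allocation to successful groups only."""
    weights = (1.0 - np.asarray(group_failed, dtype=float)) ** alpha
    if weights.sum() == 0.0:             # every group failed: fall back to
        weights = np.ones_like(weights)  # the uniform split (our assumption)
    return a_total * weights / weights.sum()

def imitation_probability(payoff_focal, payoff_neighbor, beta=1.0):
    """Fermi rule: probability that the focal player copies the neighbor,
    with intensity of selection beta (standard form assumed)."""
    return 1.0 / (1.0 + np.exp(-beta * (payoff_neighbor - payoff_focal)))

# a player with asset 1 distributed over its five groups, two of which
# failed to reach the target in the previous round
print(allocate_assets(1.0, [0, 1, 0, 1, 0], alpha=2.0))
```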
because the system may reach a stationary state where cooperators and defectors coexist in the finite structured population in the absence of mutation , we determine the fraction of cooperators when it becomes time - independent .the final results were obtained over independent initial conditions to further increase accuracy , and their robustness has been tested on populations ranging from to in size .we begin by showing the results obtained in well - mixed populations , when individual risky assets are incorporated into the collective - risk social dilemma as described by the minimal model . in infinite well - mixed populations , according to the replicator equation , it can be observed that only the presence of a high risk leads to two additional , interior equilibria beside the two boundary equilibria ( stable ) and ( unstable ) in the standard collective - risk social dilemma ( appendix a ) . the unstable interior equilibrium , if it exists , divides the range ] . we thus compute ,\ ] ] from where it follows that , since , has a unique internal root at when .moreover , for and for .accordingly , is a unique interior maximum of .solving the equation ] , eq .( [ eq1 ] ) has two interior equilibria , denoted by and with . since for and for , is an unstable equilibrium and is a stable equilibrium .2 . when ] , eq .( [ eq1 ] ) has no interior equilibria . when or , however , for \geq 1 ] , eq .( [ eq1 ] ) has only one interior equilibrium .note that \}^{1/(n-1)} ] is unstable for since .for studying the evolutionary dynamics in finite well - mixed populations , we consider a population of finite size . herethe average payoffs of cooperators and defectors in the population with cooperators are respectively given by and next , we adopt the pair - wise comparison rule to study the evolutionary dynamics , based on which we assume that player adopts the strategy of player with a probability given by the fermi function ^{-1},\ ] ] where is the intensity of selection that determines the level of uncertainty in the strategy adoption process . without loosing generality, we use throughout this work . with these definitions ,the probability that the number of cooperators in the population increases or decreases by one is }]^{-1}.\ ] ] following previous research , we further introduce the mutation - selection process into the update rule , and compute the stationary distribution as the key quantity that determines the evolutionary dynamics in finite well - mixed populations . we note that , in the presence of mutations , the population will never fixate in any of the two possible absorbing states . thus , the transition matrix of the complete markov chain is {(z+1)\times(z+1)}^{t},\ ] ] where if , otherwise , and .accordingly , the stationary distribution of the population , that is the average fraction of the time the population spends in each of the states , is given by the eigenvector of the eigenvalue of the transition matrix .in the results section , fig .[ fig1](b ) is obtained by using .this research was supported by the national natural science foundation of china ( grant no . and no . ) , the national 973 program of china ( grant no .2013cb329404 ) , and the slovenian research agency ( grants j1 - 4055 and p5 - 0027 ) . m.p . also acknowledges funding by the deanship of scientific research ( dsr ) , king abdulaziz university , under grant no .( 76 - 130 - 35-hici ) .
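returning to the finite well-mixed population, the stationary distribution described above can be computed numerically once the average payoffs of cooperators and defectors are supplied. the sketch below builds the tridiagonal transition matrix of the mutation-selection process and extracts the eigenvector belonging to eigenvalue 1; the precise way imitation and mutation are combined in the transition probabilities follows one common convention and is an assumption here, as are the default parameter values.

```python
import numpy as np

def stationary_distribution(Z, payoff_C, payoff_D, beta=1.0, mu=0.01):
    """Stationary distribution over the number of cooperators k = 0..Z under
    a pairwise-comparison (Fermi) update with mutation rate mu.

    payoff_C(k), payoff_D(k) : average payoffs of cooperators and defectors
    when the population contains k cooperators (supplied by the caller).
    """
    def fermi(delta):
        return 1.0 / (1.0 + np.exp(-beta * delta))

    W = np.zeros((Z + 1, Z + 1))
    for k in range(Z + 1):
        fC = payoff_C(k) if k > 0 else 0.0
        fD = payoff_D(k) if k < Z else 0.0
        # k -> k + 1: a defector turns cooperator, by imitation or mutation
        T_plus = ((Z - k) / Z) * ((1 - mu) * (k / Z) * fermi(fC - fD) + mu)
        # k -> k - 1: a cooperator turns defector
        T_minus = (k / Z) * ((1 - mu) * ((Z - k) / Z) * fermi(fD - fC) + mu)
        if k < Z:
            W[k, k + 1] = T_plus
        if k > 0:
            W[k, k - 1] = T_minus
        W[k, k] = 1.0 - T_plus - T_minus

    vals, vecs = np.linalg.eig(W.T)         # stationary state: left eigenvector
    p = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return p / p.sum()
```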
in the collective - risk social dilemma , players lose their personal endowments if contributions to the common pool are too small . this fact alone , however , does not always deter selfish individuals from defecting . the temptations to free - ride on the prosocial efforts of others are strong because we are hardwired to maximize our own fitness regardless of the consequences this might have for the public good . here we show that the addition of risky assets to the personal endowments , both of which are lost if the collective target is not reached , can contribute to solving the collective - risk social dilemma . in infinite well - mixed populations risky assets introduce new stable and unstable mixed steady states , whereby the stable mixed steady state converges to full cooperation as either the risk of collective failure or the amount of risky assets increases . similarly , in finite well - mixed populations the introduction of risky assets enforces configurations where cooperative behavior thrives . in structured populations cooperation is promoted as well , but the distribution of assets amongst the groups is crucial . surprisingly , we find that the completely rational allocation of assets only to the most successful groups is not optimal , and this regardless of whether the risk of collective failure is high or low . instead , in low - risk situations bounded rational allocation of assets works best , while in high - risk situations the simplest uniform distribution of assets among all the groups is optimal . these results indicate that prosocial behavior depends sensitively on the potential losses individuals are likely to endure if they fail to cooperate .
a caricature of markets , from the point of view of financial engineering , is the following single period asset pricing framework : there are only two times ( today ) and ( tomorrow ) .the world at can be in any of states and let be the probability that state occurs .there are risky assets whose price is one at and is at , if state materializes .there is also a risk - less asset ( bond ) which also costs one today and pays one tomorrow , in all states .prices of assets are assumed given at the outset .portfolios of assets can be built in order to transfer wealth from one state to the other .a portfolio is a linear combination with weights , on the riskless and risky assets .the value of the portfolio at is which is the price the investor has to pay to buy at .the return of the portfolio , i.e. the difference between its value at and at , is given by the construction of arbitrage pricing theory relies on the following steps summarizing , the logic of financial engineering is : assuming that markets are arbitrage free , the price of any contingent claim , no matter how exotic , can be computed .this involves some consideration of risk , as long as markets are incomplete .but if one can assumes that markets are complete , then prices can be computed in a manner which is completely independent of risk .note indeed that the probability distribution over plays no role at all in the above construction .it is also worth to remark that asset return do not depend , by construction , on the type of portfolios which are traded in the market . a complete, arbitrage free market is the best of all possible worlds .when markets expand in both complexity and volumes , one is tempted to argue that this is indeed a good approximation of real financial markets .why are markets arbitrage free ? because , according to the standard folklore , otherwise `` everybody would jump in [ ... ] affecting the prices of the security '' .now let us assume that prices are affected not only when `` everybody jumps in '' but anytime someone trades , even though the effect is very small , for individual trades . in the simplified picture of the market we shall discuss below, prices depend on the balance between demand and supply ; if demand is higher than supply return is positive , otherwise it is negative .demand comes from either an exogenous state contingent process or is generated from the derivative market in order to match contracts . herederivatives are simply contracts which deliver a particular amount of asset in a given state ( e.g. an umbrella if it rains , nothing otherwise ) .even if markets are severely incomplete , financial institutions which we shall call banks for short in what follows will match the demand for a particular derivative contract if that turns out to be profitable , i.e. if the revenue they extract from it exceeds a risk premium . competition in the financial industry brings on one side to smaller risk premia and on the other to a wider diversity of financial instruments being marketed in the stock market .these two effects are clearly related because the increase in financial complexity measured by the number of different financial instruments makes the market less incomplete and hence it reduces the spread in the prices of derivatives .when financial complexity is large enough the market becomes complete because each claim can be replicated by a portfolio of bond and derivatives . 
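the construction sketched above can be illustrated in a toy complete market. with as many linearly independent assets as states, the state prices (equivalently, the risk-neutral measure) are pinned down uniquely by the requirement that every traded asset costs one today, and an arbitrary contingent claim is then priced without any reference to the real-world probabilities. the payoff numbers below are invented purely for illustration.

```python
import numpy as np

# single-period market with 3 states; every asset costs 1 today.
# payoffs[i, w] = payoff of asset i in state w (illustrative numbers).
payoffs = np.array([[1.00, 1.00, 1.00],     # risk-less bond
                    [1.50, 1.20, 0.68],     # risky asset 1
                    [0.90, 0.60, 1.28]])    # risky asset 2
prices_today = np.ones(3)

# no-arbitrage: state prices q >= 0 must satisfy  payoffs @ q = prices_today.
# with 3 independent assets and 3 states the solution is unique (complete market).
q = np.linalg.solve(payoffs, prices_today)   # here q = (0.2, 0.3, 0.5)
assert np.all(q > 0), "a non-positive state price would signal an arbitrage"

# price of an arbitrary contingent claim, independent of real-world probabilities
claim = np.array([0.0, 0.0, 1.0])            # pays one unit only in state 3
print(q, claim @ q)
```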
beyond this point, there is an unique value for the price of each derivative , which is the one computed from apt .though the story is different , the conclusion seems to be the same : efficient , arbitrage - free , complete markets .the holy grail of financial engineering . actually , the road to efficient , arbitrage - free, complete markets can be plagued by singularities which arise upon increasing financial complexity .we shall illustrate this point within the stylized one period framework discussed above .take the one period framework as above , and let be the probability that state materializes .imagine there is a single risky asset and a risk - less asset .again we shall implicitly take discounted processes , so we set the return of the risk - less security to zero .the price of the risky asset is at and , in state .however , rather than defining at the outset the return of the asset in each state , we assume it is fixed by the law of demand and supply .banks develop and issue financial instruments . in this simplified world , a financial instrument is a pair where is the cost payed to the bank at by the investor and is the amount of risky asset it delivers in state at .we imagine there is a demand units of derivative if the price is less than and zero otherwise . ] for each of possible financial instruments , .the return of the asset at in each state is given by where is assumed to arise from investors excess demand whereas the second term is generated by banks hedging derivative contracts ) .the most transparent one , for our purposes , is that of assuming a finite liquidity of the underlying asset in a market maker setting such as that of ref . . between the time ( ) when derivatives are signed and the maturity ( ) , banks have to accumulate a position in the underlying for each contract they have sold ( ) . at the same time, an exogenous excess demand is generated by e.g. fundamental trading or other unspecified investment .the coefficient between returns and excess demand e.g. the market depth is assumed to be one here , for simplicity of notation .market clearing between demand and supply can also be invoked to motivate eq .( [ return ] ) . though that conforms more closely with the paradigms of mathematical economics , it introduces unnecessary conceptual complication within the stylized picture of the market given here .likewise , we impose absence of arbitrages in the market by assuming that there exist an equivalent martingale measure such that =0 ] . here can be interpreted as a risk premium related to the risk aversion of banks . indeed ,as long as the market is incomplete , there is not an unique price for derivatives .even if the prices were fixed on the basis of an equivalent martingale measure , as in section [ dynpricing ] , it is not possible , for banks , to eliminate risk altogether .financial markets are quite complex , with all sorts of complicated financial instruments .this situation is reproduced in our framework by assuming that demand and derivatives are drawn independently at random .furthermore we take the limit .this situation clearly defies analytical approaches for a specific realization .remarkably , however , it is possible to characterize the statistical behavior of typical realizations of such large complex markets . in order to do that ,some comments on the scaling of different quantities , with the number of states , is in order .the interesting region is the one where the number of derivatives is of the same order of the number of states ( ) . 
in this regime we expect the market to approach completeness . for this reason we introduce the variable . assuming that be a finite random variable in the limit , requires to be a finite random variable e.g. normal with mean and variance .likewise , we shall take as random variables with zero average and variance .indeed , the second term of eq .( [ return ] ) with is of the order of , which is finite in the limit we are considering to be of order one , but introduce a coefficient in eq .( [ return ] ) as a finite market depth . ] .it is convenient , in the following discussion , to introduce the parameter .\ ] ] where is the expected price of instrument , at .the dependence on in the equation above is motivated by the fact that the variance of and of the second term in eq .( [ utility ] ) , is of order .this implies that the relevant scale for the r.h.s . of eq .( [ epsilon ] ) is of order .the parameter encodes the risk premium which banks for selling derivative . in the next section ,we take the simplifying assumption that does not depend on , in order to illustrate the generic behavior of the model . in the next section, we shall discuss the case where depends on .actually , encodes the risk premium that banks require for trading derivatives and hence should depend on the volatility of the realized returns and on the degree of market completeness . while keeping this in mind, we will consider as an independent parameter with the idea that it decreases from positive values when the volatility of the market decreases or on the route to market completeness .we shall return to this point below .we consider , in this section , the situation where banks chose the supply so as to maximize their profit , considering returns as given .the statistical properties corresponding to this state the so - called competitive equilibrium can be related to those of the minima of a global function , by the following statement : _the competitive market equilibrium is given by the solution of the minimization of the function _ _ over the variables , where is given in terms of by eq .( [ return ] ) ._ notice that , if is negligible , this results states that a consequence of banks maximizing their utility , is that return s volatility the first term of eq.([h ] ) is reduced .actually , with , only those derivatives which decrease volatility are traded ( ) .this is an `` unintended consequence '' of banks profit seeking behavior , i.e. a feature which emerges without agents aiming at it .a proof of the statement above follows by direct inspection of the first order conditions and the observation that .we first observe that as a function of any of the is a convex function , hence it has a single minimum , for .imagine to be the minimum of . if it must be that if , it is not convenient for banks to sell derivative , which is consistent with zero supply ( ) .likewise , if the first order condition yields , which is the condition under which banks should sell as many derivatives as possible . if instead , then which is consistent with perfect competition among banks . from a purely mathematical point of view, we notice that is a quadratic form which depends on variables , through the linear combinations given by the returns .it is intuitively clear that , as increases , the minimum of becomes more and more shallow and , for large , it is likely that there will be directions in the space of ( i.e. 
linear combinations of the ) along which is almost flat .such quasi - degeneracy of the equilibria corresponds to the emergence of symmetries discussed in the introduction .the location of the equilibrium is likely to be very sensitive to perturbation along these directions .these qualitative conclusions can be put on firmer grounds by the theoretical approach which we discuss next . )^2\right] ] over functions of the variable . loosely speaking, embodies the interaction through the market of the `` representative '' derivative with all other derivatives .any quantity , such as the average supply of derivatives \ ] ] can be computed from the solution .figure [ figh ] plots the volatility )^2\right] ] also decreases , keeping a bounded sharpe ratio /\sqrt{\sigma}=\bar d/\sqrt{g+\delta} ] in numerical simulations of a system with for different values of ( points ) .lines refer to the theoretical prediction in the approximation of independent variables .inset : total volatility in numerical simulations for ( for and for ) and ( for and for ) as a function of .,width=340 ] at odds with the competitive equilibrium setting , in the present case the volatility of returns also acquires a contribution from the fluctuations of the variables , which is induced by the random choice of the state .hence , we can distinguish two sources of fluctuations in returns the first depending on the stochastic realization of the state , the second from fluctuations in the learning dynamics . here is the average return of the asset in state , and it can be shown that it converges to its competitive equilibrium value , .indeed the contribution of to the volatility )^2\right] ] instead shows a singular behavior which reflects the discontinuity of across the line for .our theoretical approach also allows us to estimate this contribution to the fluctuations under the assumption that the variables are statistically independent . as fig .[ figv ] shows , this theory reproduces remarkably well the results of numerical simulations for but it fails dramatically for in the region close to .the same effect arises in minority games , where its origin has been traced back to the assumption of independence of the variables .this suggests that in this region , the supplies of different derivatives are strongly correlated .this effect has a purely dynamical origin and it is reminiscent of the emergence of persistent correlations arising from trading in the single asset model of ref . .we have considered as a fixed parameter up to now . actually , in a more refined model , should depend on and it should be fixed endogenously in terms of the incentives of investors and banks . at the level of the discussion given thus far , it is sufficient to say that the scale of incentives of banks and investors , and hence of , is fixed by the average return or by the volatility .both these quantities decrease as increases , so it is reasonable to conclude that should be a decreasing function of , in any model where it is fixed endogenously . in other words , as the financial complexity ( ) increases , the market should follow a trajectory in the parameter space , which approaches the critical line , . in the next section ,we address this problem in an approximate manner , extending our analysis to the case where depends on .the model considered above can be improved in many ways . for example , the price of derivatives can be determined according to apt , thus making the market arbitrage free both in the underlying and in derivatives . 
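before moving to that refinement, the competitive equilibrium described above can be probed with a small numerical experiment. the sketch below assumes a specific form for the global function, namely a realized-return volatility term plus a risk premium per unit of supply, and a specific sign convention for how the hedging demand enters the returns; both are our reading of the partly elided equations, and the sizes and parameter values are illustrative. minimizing over the bounded supplies reproduces the qualitative feature noted above: for a small risk premium, only derivatives that reduce the volatility end up being supplied.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Omega, N = 200, 100                      # number of states and of derivatives
d = rng.normal(0.0, 1.0, size=Omega)     # exogenous excess demand d^w
A = rng.normal(0.0, 1.0, size=(Omega, N)) / np.sqrt(Omega)   # payoffs a_i^w
eps, s_max = 0.05, 1.0                   # risk premium and supply cap (assumed)

def H(s):
    """Assumed global function: half the mean squared return plus the
    risk-premium term, with r^w = d^w + sum_i s_i a_i^w (sign assumed)."""
    r = d + A @ s
    return 0.5 * np.mean(r ** 2) + eps * np.mean(s)

res = minimize(H, x0=np.full(N, 0.5), bounds=[(0.0, s_max)] * N, method="L-BFGS-B")
s_star = res.x
print("average supply:", s_star.mean(),
      "volatility:", np.mean((d + A @ s_star) ** 2))
```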
in this sectionwe will explore this direction in the simplest possible case where the equivalent martingale measure is given .in particular we will show that this dynamical pricing mechanism is somehow equivalent to an asset dependent risk premia framework . in order to get some insights, we will then generalize the model considered in the previous sections to the case of a quenched distribution of risk premia .this assumes that derivative prices change over much longer time - scales than those of trading . in the previous sections prices of derivativeswere fixed by the exogenous parameter and no role was explicitly played by the risk neutral measure . in this sectionwe consider the case where prices of derivatives are directly computed according to apt .the framework is the same as before , but prices are computed through the risk neutral measure as =\sum_\omega q^\omega a_i^\omega(1+r^\omega)\ ] ] instead of being fixed by the external parameter . as before ,the emm is fixed exogenously .again the return of the risky asset is assumed to depend both on an exogenous demand and on the demand induced by the trading of derivatives the supply of derivatives ] ) .however , we can observe that the value beyond which the volatility approaches zero depends on the width of the risk premium distribution , and it increases with .the sharp transition previously observed in the behavior of the average supply for large becomes smooth as soon as the risk premium distribution has a finite width ( see figure ( [ fig : eps1 ] ) ) .the sharpness of the crossover in the behavior of as a function of is enhanced as the demand for derivatives increases .this signals a transition from a supply limited equilibrium , where the main determinant of the supply of derivatives is banks profits , to a demand limited equilibrium , where is mostly limited by the finiteness of the investors demand . in order to make this point explicit ,it is instructive to investigate the case of unbounded supply ( ) , because the transition becomes sharp in this limit .the representative derivative , in this case , is described by and , as before , ,\ ] ] (1+\chi)}{\sqrt{g+\delta+\sigma_{\epsilon}^2 ( 1+\chi)^2}}.\ ] ] in contrast with the bounded case and similarly to the case , the system displays a phase separation in the plane between a region in which the average supply remains finite and a region in which the volume of the traded assets diverges ( see left panel of figure [ fig : phase_space ] ) .introducing a distribution of risk premia with variance has then very different effects depending on whether the supply is bounded or not . in the first case acts as a regularizer preventing the occurrence of a sharp phase transition . in the second case ,namely , entails a deformation of the critical line that tends to flatten along the line as grows ( see right panel of figure [ fig : phase_space ] ) .this characterization allows to get some insights on the case in which prices are dynamically generated .+ as we showed before , in such a situation the system is characterized by a distribution of effective risk premia which becomes broader as market increases in complexity . 
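the pricing rule used in this section reduces to a one-line computation once the martingale measure and the realized returns are given. in the sketch below the numbers are illustrative and are chosen so that the martingale condition on the underlying, namely a vanishing expected return under q, holds.

```python
import numpy as np

def derivative_prices(q, A, r):
    """Price each instrument from the given equivalent martingale measure q,
    following c_i = sum_w q^w a_i^w (1 + r^w); A[w, i] is the amount of
    underlying delivered by instrument i in state w."""
    return A.T @ (q * (1.0 + r))

q = np.array([0.1, 0.2, 0.3, 0.4])            # risk-neutral probabilities
r = np.array([0.06, 0.03, -0.02, -0.015])     # returns; note q @ r = 0
A = np.array([[1.0, 0.0],
              [0.5, 0.0],
              [0.0, 1.0],
              [0.0, 2.0]])
print(derivative_prices(q, A, r))
```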
hence , upon increasing , one expects a transition from a situation where volumes are limited by the profitability of derivative trading , to one where the demand is saturated .we presented here a very stylized description of an arbitrage free market , showing that also whithin the ideal context of apt the uncontrolled proliferation of financial intruments can lead to large fluctuations and instability in the market .interestingly , within our simplified model , the proliferation of financial instruments drives the market through a transition to a state where volumes of trading rapidly expand and saturate investors demand . in order for these kind of models to be more realistic ,some improvements certainly are needed .for instance , the demand for derivatives can be derived from a state dependent utility function for consumption at .this could also be a way to obtain an endogenous risk neutral measure , one of the main problems in the picture of dynamical prices considered above being that the risk neutral measure was fixed exogenously .we believe that , while accounting for these effects can make the model more appealing , the collective behavior of the model will not significantly change . indeed , the qualitative behavior discussed above is typical of a broad class of models and mainly depends on the proliferation of degeneracies in the equilibria of the model .the focus of the present paper is on theoretical concepts .their relevance for real markets requires quantitative estimates of the parameters .given the abstract nature of the model , this appears to be a non - trivial task which is an interesting avenue of future research .it has been recently suggested that market stability appears to have the properties of a public good .a public good is a good _ i ) _ whose consumption by one individual does not reduce its availability for consumption by others ( non - rivalry ) and _ ii ) _ such that no one can be effectively excluded from using the good ( non - excludability ) . at the level of the present stylized description, the expansion in the repertoire of traded assets introduces an externality which drives the market to unstable states .this suggests that systemic instability may be prevented by the introduction of a tax on derivative markets , such as that advocated long ago for foreign exchange markets by tobin , or by the introduction of `` trading permits '' , similar to those adopted to limit carbon emissions .the stabilizing effect of a tobin tax has already been shown within a model of a dynamic market which is mathematically equivalent to the one presented here . in summary, this paper suggests that the ideal view of the markets on which financial engineering is based is not compatible with market stability . the proliferation of financial instruments makes the market look more and more similar to an ideal arbitrage - free , efficient and complete market .but this occurs at the expense of market stability .this is reminiscent of the instability discussed long ago by sir robert may which develops in ecosystems upon increasing bio - diversity . for ecologiesthis result is only apparently paradoxical .indeed the species which populate an ecosystem can hardly be thought of as being drawn at random , but are rather subject to natural selection .indeed , on evolutionary time scales stability can be reconciled with bio - diversity , as shown e.g. 
in ref .the diversity in the ecosystem of financial instruments has , by contrast , been increasing at a rate much faster than that at which selective forces likely operate .in contrast with the axiomatic equilibrium picture of apt , on which financial engineering is based , the model discussed here provides a coherent , though stylized , picture of a financial market as a system with interacting units . in this picture ,concepts such as no - arbitrage , perfect competition , market efficiency or completeness arise as emergent properties of the aggregate behavior , rather than being postulated from the outset .we believe that such an approach can potentially shed light on the causes of and conditions under which liquidity crises , arbitrages and market crashes occur .mm gratefully acknowledge discussions with j .- p .bouchaud , d. challet , a. consiglio , m. dacorogna , i. kondor and m. virasoro .interaction with c. hommes , specially in relation to ref . , has been very inspiring .this work was supported by the complexmarkets e.u. strep project 516446 under fp6 - 2003-nest - path-1 .the problem is then the one of finding the ground state of the hamiltonian is carried out in the usual manner .let s write down the partition function where we have used the shorthand to indicate integrals on the variables in the index .here it is understood that all variables are integrated from to and all variables and are integrated over all the real axis , with a factor for each .the next step is to replicate this , i.e. to write -i \sum_i a_i^\omega \sum_a s_{i , a}\xi^\omega_a } \\ & = & \int \!\frac{d\vec{u}}{2\pi } { \rm tr}_{\vec{s},\vec{\xi } } e^{-\beta\sum_{i , a}g(s_{i , a})}\prod_{\omega = 1}^\omega e^{-\frac{1}{2\beta}\sum_a\left(u_aq^\omega+ \xi_a^\omega\right)^2-id^\omega \sum_a \xi^\omega_a -i\sum_i a_i^\omega \sum_a s_{i , a}\xi^\omega_a } \end{aligned}\ ] ] where the sums on runs over the replicas . in the second equation above ,we have performed the integrals over .we can now perform the average over the random variables which will be assumed to be normal with zero average and variance .this yields where the average is taken over gaussian variables with mean and variance and the symbol stands for integration over all the independent entries of the matrix .in order to evaluate the latter we use a standard delta function identity bringing into play the matrix conjugated to the overlap matrix . 
the -average is evaluated as follows }\int_{-\infty}^{+\infty}\prod_i \left[d\epsilon_i e^{- \frac{(\epsilon_i-\overline{\epsilon})^2}{2\sigma_{\epsilon}^2}-\beta\sum_{a}g(s_{i , a } ) } \right]\\ & = & \int \!d \mathbf{r } { \rm tr}_{\vec{s } } e^{\sum_{a\le b } r_{ab } [ \omega g_{ab}-\sum_is_{i , a}s_{i , b } ] -\beta\sum_{i , a}\overline{\epsilon } s_i^a+\frac{\omega \beta^2\sigma_{\epsilon}^2}{2}\sum_{a , b } g_{a , b}},\end{aligned}\ ] ] where the quadratic term arising from the gaussian integration has been replaced by the overlap matrix .taking the replica symmetric ( rs ) ansatz for and the -independent part of the exponent in the integrand takes the form where we have neglected terms of order in view of the limit .this yields } w_{\overline{\epsilon}}[\nu , r]^n,\ ] ] where we have defined =\left[{\rm tr}_{\vec{s } } e^{-\beta\sum_{a}\left[\overline{\epsilon } s_a+\frac{\nu}{2}s^2_a\right ] + \frac{1}{2}\left(\beta r\sum_a s_a\right)^2}\right]^n.\ ] ] in the limit this quantity can be evaluated as follows & = e^{n \log \left [ { \rm tr}_{\vec{s } } e^{-\beta\sum_{a}\left[\overline{\epsilon } s_a+\frac{\nu}{2}s^2_a\right ] + \frac{1}{2}\left(\beta r\sum_a s_a\right)^2}\right]}\\ & = \exp\left\{n \log \left [ { \rm tr}_{\vec{s } } e^{-\beta\sum_{a}\left[\overline{\epsilon } s_a+\frac{\nu}{2}s^2_a\right ] } \big\langle e^{z\beta r\sum_a s_a}\big\rangle_z \right ] \right\}\end{aligned}\ ] ] where we have performed a hubbard - stratonovich transformation in order to decouple the variables introducing the average over the gaussian variable . clearly : = \exp\left\{n\log\big\langle\left(\int_0 ^ 1 ds e^{-\beta [ s^2+s(\overline{\epsilon}-zr)]}\right)^m\big\rangle_z\right\}.\ ] ] exploiting the usual identity valid for , we finally obtain = \exp\left\{n m\big\langle\log\int_0 ^ 1 ds e^{-\beta [ s^2+s(\overline{\epsilon}-zr)]}\big\rangle_z\right\}.\ ] ] inserting ( [ w ] ) into ( [ iofg ] ) we eventually get + \frac{n}{\beta}\left\langle\log\int_0^\infty\!ds e^{-\beta[\overline{\epsilon } s+\nu s^2/2-rsz ] } \right\rangle_z\right\}}\ ] ] after inserting ( [ iofgfinal ] ) into ( [ zqm ] ) we have to perform the integrals in . for each &=\int\!d\vec{\xi } e^{-\frac{1}{2\beta}\sum_a\left(u_aq^\omega+\xi_a\right)^2-id^\omega \sum_a \xi_a -\frac{1}{2}\sum_{a , b}\xi_a \xi_b g_{a , b}}\end{aligned}\ ] ] the matrix , in the rs ansatz has two distinct eigenvalues : therefore the determinant of is clearly }.\ ] ] the expression ( [ det ] ) assists performing the gaussian integral in ( [ j ] ) as =e^{-\frac{1}{2}\log\left(1+\beta m\frac{g}{1+\chi}\right)-\frac{m } { 2}\log(1+\chi ) + \frac{\beta m}{2(1+\chi+m\beta g ) } ( uq^\omega + id^\omega)^2-\frac{\beta m}{2 } ( uq^\omega)^2}\ ] ] where we have rescaled as and considered for the form .taking the average over in the limit we use the fact that which in our case yields having taken for an exponential distribution with average . 
hence \nonumber\\ & ~ & + \frac{n}{\beta}\left\langle\log\int_0^\infty\!ds e^{-\beta[\overline{\epsilon } s+\nu s^2/2-rsz ] } \right\rangle_z.\end{aligned}\ ] ] we first observe that the saddle point equation for yields .then we take the limit which gives -n \left\langle\min_{s\ge 0 } \left[\overline{\epsilon } s+\frac{\nu}{2 } s^2-rsz\right ] \right\rangle_z\ ] ] the saddle points equations , obtained differentiating ( [ fofbeta ] ) with respect to the order parameters and sending to , read where , since , in our case , the supply is limited to .the above set of equations can then be recasted in the form \\ \chi & = \frac{n{\mathbb{e}}_z[s_zz](1+\chi)}{\sqrt{g+\delta+\sigma_{\epsilon}^2(1+\chi)^2}}.\end{aligned}\ ] ] the above calculation can also be performed , in order to probe the solution , introducing an auxiliary field coupled to the returns .this allows also to easily compute the average and the fluctuations of returns by deriving with respect to the logarithm of the free energy and then setting : show here how it is possible to find the critical line .let us consider the case , so that , with . from the saddle pont equations we get (z_0)}{1-ni_2(z_0)}\ ] ] wherewe have defined .inserting now this expression for into equation ( [ r ] ) and explointing we obtain that using ( [ chi ] ) can be writen as where .if we now look for a solution in the case the above equation reduces to and we also have that .these equations define the critical line . in particularit is clear at this level that the dependence on the parameters of the risk premia distribution enters trough the ratio 99 w. e. buffet , berkshire hathaway annual report 2002 .monetary fund , _ global financial stability report _ ( april 2008 ) .monetary fund , _ global financial stability report _ ( october 2008 ) .m. m. dacorogna , r. genay , u. mller , r. b. olsen , o. v. pictet , _ an introduction to high - frequency finance _ , chap .v , academic press , london ( 2001 ) .bouchaud , j. d. farmer and f. lillo , _ how markets slowly digest changes in supply and demand _ , preprint arxiv:0809.0822 osler , c. l. , 2006 .macro lessons from microstructure. international journal of finance and economics 11 , 55 - 80 .brock , c.h .hommes , f.o.o .wagener , _ more hedging instruments may destabilize markets _ cendef working paper 08 - 04 university of amsterdam ( 2008 ) . g. raffaelli , m. marsili , jstat l08001 ( 2006 ) ; m. marsili , g. raffaelli , and b. ponsot submitted to j. ec .dyn . control ( 2008 ) .bouchaud , _ economics need a scientific revolution _ , nature , vol 455 ,( 2008 ) k. r. sircar and g. papanicolaou , appl .fin . * 5 * , 45 - 82 ( 1998 ) ; m avellaneda , md lipkin , a market - induced mechanism for stock pinning , quantitative finance , * 3 * , 417 - 425 ( 2003 ) ; m. jeannin , g. iori , d. samuel , _ the pinning effect : theory and a simulated microstructure model _, quantitative finance ( forthcoming , 2008 ) ; s.x.y .pearson , a.m. poteshman , j. fin .econ . * 78 * , 48 - 87 ( 2005 ) .r. c. merton and z. bodie _ design of financial systems : towards a synthesis of function and structure _ , j. of investment management , 3 , ( 2005 ) , 1 - 23 robert j. shiller , _ the subprime solution : how today s global financial crisis happened , and what to do about it _ , princeton university press ( 2008 ) .a. de martino and m. marsili , j. phys .a , * 39 * , r465-r540 ( 2006 ) .d. challet and m. marsili phys .e 68 , 036132 ( 2003 ) .t. lux and m. marchesi , nature * 397 * , 498 - 500 ( 1999 ) .s. r. 
pliska , _ introduction to mathematical finance : discrete time models _ , ( blackwell , oxford , 1997 ) .m. marsili , ssrn e - print 141597 , 2009 ( available at http://ssrn.com/abstract=1415971 ) .m. wyart , j.p .bouchaud , j. ec . behavior and organization ,* 63 * , 1 ( 2007 ) . j. d. farmer , s. joshi , j. ec .behav . and organization , * 49 * , 149 - 171 ( 2002 ) .r. m. may , nature 238 ( 1972 ) , 413 r. mehrotra , v. soni and s. jain , _ diversity begets stability in an evolving network _ , preprint arxiv : nlin.ao/07051075 .morris , s. and shin , h.s ., _ financial regulation in a system context _ , brookings papers on economic activities , washington , * 11 * ( 2008 ) .tobin j , _ `` the new economics one decade older '' _ , the eliot janeway lectures on historical economics in honour of joseph schumpeter 1972 ( princeton university press , princeton , us ) ; _ eastern economic journal _ * iv * 153 - 159 ( 1978 ) g. bianconi , t. galla , m. marsili and p. pin , _ effects of tobin taxes in minority game markets _ to appear in j. econ . behavior and organization ( 2009 ) .d. w. montgomery , journal of economic theory * 5 * , 395 - 418 ( 1972 ) .
we contrast arbitrage pricing theory ( apt ) , the theoretical basis for the development of financial instruments , with a dynamical picture of an interacting market , in a simple setting . the proliferation of financial instruments apparently provides more means for risk diversification , making the market more efficient and complete . in the simple market of interacting traders discussed here , the proliferation of financial instruments erodes systemic stability and it drives the market to a critical state characterized by large susceptibility , strong fluctuations and enhanced correlations among risks . this suggests that the hypothesis of apt may not be compatible with a stable market dynamics . in this perspective , market stability acquires the properties of a common good , which suggests that appropriate measures should be introduced in derivative markets , to preserve stability . `` in my view , derivatives are financial weapons of mass destruction , carrying dangers that , while now latent , are potentially lethal '' . at the time of warren buffet warning markets were remarkably stable and the creativity of financial engineers was pushing the frontiers of arbitrage pricing theory to levels of unprecedented sophistication . it took 5 years and a threefold increase in size of credit derivative markets , for warren buffet `` time bomb '' to explode . paradoxically , this crisis occurred in a sector which has been absorbing human capital with strong scientific and mathematical skills , as no other industry . apparently , this potential was mostly used to increase the degree of sophistication of mathematical models , leaving our understanding lagging behind the increasing complexity of financial markets . recent events raise many questions : * the sub - prime mortgage market was approximately 5% of the us real estate market , and the risk was well diversified across the whole world . why did such a small perturbation cause so large an effect ? * why are stock prices affected by a crisis in derivative markets ? * why did correlations between risks grow so large during the recent crisis , with respect to their _ bona - fide _ estimates before the crisis ? these and other questions find no answer in the theories of mathematical economics . indeed , real markets look quite different with respect to the picture which arbitrage pricing theory ( apt ) gives of them . the problem is that apt is not merely a theoretical description of a phenomenon , as other theories in natural sciences . it is the theory on which financial engineering is based . it enters into the functioning of the system it is describing , i.e. it is part of the problem itself . the key function of financial markets is that of allowing inter - temporal exchanges of wealth , in a state contingent manner . taking this view , apt makes it possible to give a present monetary value to future risks , and hence to price complex derivative contracts . in order to do this , apt relies on concepts , such as perfect competition , market liquidity , no - arbitrage and market completeness , which allows one to neglect completely the feedback of trading on market s dynamics . these concepts are very powerful , and indeed apt has been very successful in stable market conditions . in addition , the proliferation of financial instruments provides even further instrument for hedging and pricing other derivatives . 
so the proliferation of financial instruments produces precisely that arbitrage - free , complete market which is hypothesized by apt . in theory . in practice , markets are never perfectly liquid . the very fact that information can be aggregated into prices , requires that prices respond to trading ( see e.g. for evidence on fx markets or for equity markets ) . in other words , it is because markets are illiquid that they can aggregate information into prices . liquidity indeed is a matter of time scale and volume size . this calls for a view of financial markets as interacting systems . in this view , trading strategies can affect the market in important ways . both theoretical models and empirical research , show that trading activity implied by derivatives affects the underlying market in non - trivial ways . furthermore , the proliferation of financial instruments ( arrow s securities ) , in a model with heterogeneous agents , was found to lead to market instability . the aim of this paper is to contrast , within a simple framework , the picture of apt with a dynamical picture of a market as an interacting system . we show that while the introduction of derivatives makes the market more efficient , competition between financial institutions naturally drives the market to a critical state characterized by a sharp singularity . close to the singularity the market exhibits the three properties alluded to above : _ 1 ) _ a strong susceptibility to small perturbations and _ 2 ) _ strong fluctuations in the underlying stock market . furthermore _ 3 ) _ while correlations across different derivatives is largely negligible in normal times , correlations in the derivative market are strongly enhanced in stress times , when the market is close to the critical state . in brief , this suggests that the hypothesis of apt may not be compatible with the requirement of a stable market . capturing the increasing complexity of financial markets in a simple mathematical framework is a daunting task . we draw inspiration from the physics of disordered systems , in physics , such as inhomogeneous alloys which are seen as systems with random interactions . likewise , we characterize the typical properties of an ensemble of markets with derivatives being drawn from a given distribution . our model is admittedly stylized and misses several important features , such as the risks associated with increased market exposure in stress conditions or the increase in demand of financial instruments in order to replicate other financial instruments . however , it provides a coherent picture of collective market phenomena . but it is precisely because these models are simple that one is able to point out why theoretical concepts such as efficient or complete markets and competitive equilibria have non - trivial implications . the reason being that these conditions hold only in special points of the phase diagram where singularity occurs ( phase transitions ) . it is precisely when markets approach these ideal conditions that instabilities and strong fluctuations appear . loosely speaking , this arises from the fact that the market equilibrium becomes degenerate along some directions in the phase space . in a complete , arbitrage - free market , the introduction of a derivative contract creates a symmetry , as it introduces perfectly equivalent ways of realizing the same payoffs . fluctuations along the direction of phase space identified by symmetries can grow unbounded . 
loosely speaking , the financial industry is a factory of symmetries , which is why the proliferation of financial instruments can cause strong fluctuations and instabilities . in this respect , the study of competitive equilibria alone can be misleading . what is mostly important is their stability with respect to adaptive behavior of agents and the dynamical fluctuations they support and generate . the rest of the paper is organized as follows : the next section recalls the basics of apt in a simple case , setting the stage for the following sections . then we introduce the model and discuss its behavior . we illustrate the generic behavior of the model in a simple case where the relevant parameters are the number of different derivatives and the risk premium . in this setting we first examine the properties of competitive equilibria and then discuss the fluctuations induced within a simple adaptive process , by which banks learn to converge to these equilibria . then we illustrate a more general model with a distribution of risk premia . this confirms the general conclusion that , as markets expand in complexity , they approach a phase transition point , as discussed above . the final section concludes with some perspectives and suggests some measures to prevent market instability .
we thank prof. l. huang for helpful discussions. this work was supported by aro under grant w911nf-14-1-0504, and the nsf of china under grants nos. 11575072, 11135001 and 11275003. zgh and ycl devised the research project. jqz, zgh, and rs performed numerical simulations. jqz, zgh, ycl, and zxw analyzed the results and wrote the paper. * competing financial interests * : the authors declare no competing financial interests.
[figure captions, condensed: system variance as a function of the pinning fraction for scale-free networks of different connection densities and of varying degrees of heterogeneity; stationary distributions obtained from the transition matrix of eq. ([eq:t20pin2]) for several pinning patterns; theoretical predictions compared with simulations for networks of different average degree and degree exponent; the optimal pinning fraction given by eq. ([eq:optimal_rho]) for scale-free and er random networks; simulated and theoretical results from eq. ([eq:kappa]) for different pinning patterns; data collapse related to eq. ([eq:pi]).]
* resource allocation takes place in various types of real - world complex systems such as urban traffic , social services institutions , economical and ecosystems . mathematically , the dynamical process of complex resource allocation can be modeled as _ minority games _ in which the number of resources is limited and agents tend to choose the less used resource based on available information . spontaneous evolution of the resource allocation dynamics , however , often leads to a harmful herding behavior accompanied by strong fluctuations in which a large majority of agents crowd temporarily for a few resources , leaving many others unused . developing effective control strategies to suppress and eliminate herding is an important but open problem . here we develop a pinning control method . that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables . a striking finding is the universal existence of an optimal pinning fraction to minimize the variance of the system , regardless of the pinning patterns and the network topology . we carry out a detailed theoretical analysis to understand the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology . our theory is generally applicable to systems with heterogeneous resource capacities as well as varying control and network topological parameters such as the average degree and the degree distribution exponent . our work represents a general framework to deal with the broader problem of controlling collective dynamics in complex systems with potential applications in social , economical and political systems . * resource allocation is an essential process in many real - world systems such as ecosystems of various sizes , transportation systems ( e.g. , internet , urban traffic grids , rail and flight networks ) , public service providers ( e.g. , marts , hospitals , and schools ) , and social and economic organizations ( e.g. , banks and financial markets ) . the underlying system that supports resource allocation often contains a large number of interacting components or agents on a hierarchy of scales , and there are multiple resources available for each agent . as a result , complex behaviors are expected to emerge ubiquitously in the dynamical evolution of resource allocation . in particular , in a typical situation , agents or individuals possess similar capabilities in information processing and decision making , and they share the common goal of pursuing as high payoffs as possible . the interactions among the agents and their desire to maximize payoffs in competing for limited resources can lead to vast complexity in the system dynamics . given resource - allocation system that exhibits complex dynamics , a defining virtue of optimal performance is that the available resources are exploited evenly or uniformly by all agents in the system . in contrast , an undesired or even catastrophic behavior is the emergence of herding , in which a vast majority of agents concentrate on a few resources , leaving many other resources idle or unused . if this behavior is not controlled , the few focused resources would be depleted , possibly directing agents to a different but still small set of resources . 
from a systems point of view , this can lead to a cascading type of failures as resources are being depleted one after another , eventually resulting in a catastrophic breakdown of the system on a global scale . in this paper , we analyze and test an effective strategy to control herding dynamics in complex resource - allocation systems . a universal paradigm to model and understand the interactions and dynamical evolutions in many real world systems is complex adaptive systems , among which minority game ( mg ) stands out as a particularly pertinent framework for resource allocation . mg dynamics was introduced by challet and zhang to address the classic el farol bar - attendance problem conceived by arthur . in an mg system , each agent makes choice ( e.g. , or , to attend a bar or to stay at home ) based on available global information from the previous round of interaction . the agents who pick the minority strategy are rewarded , and those belonging to the majority group are punished due to limited resources . the mg dynamics has been studied extensively in the past . to analyze , understand , and exploit the mg dynamics , there are two theoretical approaches : mean field approximation and boolean dynamics . the mean field approach was mainly developed by researchers from the statistical - physics community to cast the mg problem in the general framework of non - equilibrium phase transitions . in the boolean dynamics , for any agent , detailed information about the other agents that it interacts with is assumed to be available , and the agent responds accordingly . both approaches can lead to `` better than random '' performance in resource utilization . however , herding behavior in which many agents take identical action can also take place , which has been extensively studied and recognized as one important factor contributing to the origin of complexity that leads to enhanced fluctuations and , consequently , to significant degradation in efficiency . the control strategy we analyze in this paper is the pinning method that has been studied in controlling the collective dynamics , such as synchronization , in complex networks . for the general setting of pinning control , the two key parameters are the `` pinning fraction , '' the fraction of agents chosen to hold a fixed strategy , and the `` pinning pattern , '' the configuration of plus or minus strategies assigned to the pinned agents . our previous work treated the special case of two resources of identical capacities , where the pinning pattern was such that the probabilities of agents pinned to positive or negative strategies ( to be defined later ) are equal . here , we investigate a more realistic model setting and articulate a general mathematic control framework . a striking finding is that biased pinning control pattern can lead to an optimal pinning fraction for a variety of network topologies , so that the system efficiency can be improved remarkably . we develop a theoretical analysis based on the mean - field approximation to understand the non - monotonic behavior of the system efficiency about the optimal pinning fraction . we also study the dependence of the optimal fraction on the topological features of the system , such as the average degree and heterogeneity , and obtain a theoretical upper bound of the system efficiency . the theoretical predictions are validated with extensive numerical simulations . 
our work represents a general framework to optimally control the collective dynamics in complex mg systems with potential applications in social , economical and political systems . + + * results * [ [ boolean - dynamics . ] ] * boolean dynamics . * + + + + + + + + + + + + + + + + + + + in the original boolean system , a population of agents compete for two alternative resources , denoted as and , which have the same accommodating capacity . similar to the mg dynamics , only the agents belonging to the _ global minority _ group are rewarded by one unit of payoff . as a result , the profit of the system is equal to the number of agents selecting the resource with attendance less than the accommodating capacity , which constitute the global - minority group . the dynamical variable of the boolean system is denoted as , the number of agents in the system at time step . the variance of about the capacity characterizes the efficiency of the system . the densities of the and agents in the whole system are and , respectively . the state of the system can be conveniently specified by the column vector . a boolean system has two states ( a binary state system ) , in which agents make decision according to the local information from immediate neighbors . the neighborhood of an agent is determined by the connecting structure of the underlying network . each agent receives inputs from its neighboring agents and updates its state according to the boolean function , a function that generates either and from the inputs . realistically , for any agent , global information about the minority choice from all other agents at the preceding time step may not be available . under this circumstance , the agent attempts to decide the global minority choice based on neighbors previous strategies . to be concrete , we assume that agent with neighbors chooses at time step with the probability ,\ ] ] and chooses with the probability , where and , respectively , are the numbers of and neighbors of at time step , with . the expressions of probabilities , however , are valid only under the assumption that the two resources have the _ same _ accommodating capacity , i.e. , . in real - world resource allocation systems , typically we have . consider , for example , the extreme case of . suppose we have for agent . in this case , rationality demands a stronger preference to the resource ( i.e. , with a higher probability ) . to investigate the issues associated with the control of realistic boolean dynamics , we define where is the response function of each agent to its local environment , i.e. , the local neighbor s configuration with and . the quantity ( or ) characterizes the contribution of the -neighbors ( or -neighbors ) to the probability for to adopt . the quantity represents the strength of _ assimilation _ effect among the neighbors , while quantifies the _ dissimilation _ effect . intuitively , the resource with a larger accommodating capacity would have a stronger assimilation effect among agents . by definition , the elements in each column in the matrix satisfy , i.e. , the total probability for an agent to choose and is unity . using the mean - field assumption that the configuration of neighbors is uniform over the whole system , i.e. , , we have that the stable solution for eq . ( [ eq : interaction ] ) satisfies , which leads to the eigenstate of as the rational response ( ) of agents to nonidentical accommodation capacities of resources will lead to the equality , i.e. 
, the stable fraction of the agent densities in and is simply the ratio of the capacities . the elements of can then be defined accordingly using this ratio and the condition , which characterizes a stronger preference to the resource with a larger capacity . for the specific case of identical - capacity resources , we have , and the solution reduces to the result of the original boolean dynamics . the optimal solution for the resource allocation is . a general measure of boolean system s performance is the variance of with respect to the _ capacity _ : which characterizes , over a time interval , the statistical deviations from the optimal resource utilization . a smaller value of indicates that the resource allocation is more optimal . a general phenomenon associated with boolean dynamics is that , as agents strive to join the minority group , an undesired herding behavior can emerge , as characterized by large oscillations in . our goal is to understand , for the general setting of nonidentical resource capacities , the effect of pinning control on suppressing / eliminating the herding behavior . [ [ pinning - control - scheme . ] ] * pinning control scheme . * + + + + + + + + + + + + + + + + + + + + + + + + + our basic idea to control the herding behavior is to `` pin '' certain agents to freeze their states so as to realize optimal resource allocation , following the general principle of pinning control of complex dynamical networks . let be the fraction of agents to be pinned , so the fraction of unpinned ( or free ) nodes is . the numbers of the two different types of agents , respectively , are and . the free agents make choices according to local time - dependent information , for whom the inputs from the pinned agents are fixed . the two basic quantities characterizing a pinning control scheme are the order of pinning ( the way how certain agents are chosen to be pinned ) and the pinning pattern . we adopt the degree - preferential pinning ( dpp ) strategy in which the agents are selected to be pinned according to their connectivity or degrees in the underlying network . in particular , agents of higher degrees are more likely to be pinned . this pinning strategy originated from the classic control method to mitigate the effects of intentional attacks in complex networks . the selection of the pinning pattern can be characterized by the fractions and of the pinned agents that select and , respectively , where . the quantities and are thus the _ pinning pattern indicators_. different from the previous work that investigated the specific case of ( half - half pinning pattern ) , here we consider the more general case where is treated as a variable . the pinning schemes are implemented on random networks and scale - free networks with different values of the scaling exponent in the power - law degree distribution . as we will see below , one uniform optimal pinning fraction exists for various values of the pinning pattern indicator . [ [ simulation - results . ] ] * simulation results . * + + + + + + + + + + + + + + + + + + + + + to gain insight , we first study the original boolean dynamics with and for different values of the pinning pattern indicator . the game dynamics are implemented on scale - free networks of size and of the scaling exponent with the average degree ranging from to . the dpp scheme is performed with pinning fraction and values ranging from ( i.e. , half - half pinning ) to ( i.e. , all to pinning ) . 
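to make the update rule and the degree-preferential pinning scheme concrete, the following sketch simulates one realization of the pinned boolean dynamics on a given network. it is a minimal illustration rather than the authors' code: the choice probability of a free agent is taken to be the fraction of its neighbors that chose the opposite resource in the previous step (the identical-capacity case), and the network model, parameter names, and capacity handling are assumptions made for the example.

```python
import numpy as np
import networkx as nx

def simulate_pinned_mg(G, pin_fraction=0.1, eta_plus=0.5, steps=2000, seed=0):
    """Minimal sketch of Boolean minority-game dynamics with degree-preferential pinning.

    Free agents choose +1 with a probability given by the fraction of their
    neighbors that chose -1 in the previous step (identical-capacity assumption).
    Pinned agents keep a fixed strategy: a fraction eta_plus of them is frozen
    at +1, the rest at -1 (the pinning pattern indicator).
    """
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    nodes = np.array(sorted(G.nodes(), key=G.degree, reverse=True))  # highest degree first
    n_pinned = int(pin_fraction * n)
    pinned = nodes[:n_pinned]                       # degree-preferential pinning (DPP)
    state = rng.choice([+1, -1], size=n)
    fixed = np.zeros(n, dtype=bool)
    fixed[pinned] = True
    state[pinned[: int(eta_plus * n_pinned)]] = +1
    state[pinned[int(eta_plus * n_pinned):]] = -1

    neighbors = [list(G.neighbors(v)) for v in range(n)]
    attendance = []
    for _ in range(steps):
        new_state = state.copy()
        for v in range(n):
            if fixed[v] or not neighbors[v]:
                continue
            frac_minus = np.mean(state[neighbors[v]] == -1)  # local estimate of the minority side
            new_state[v] = +1 if rng.random() < frac_minus else -1
        state = new_state
        attendance.append(np.sum(state == +1))      # A(t): number of agents choosing +1
    attendance = np.array(attendance)
    capacity = n / 2.0                              # identical capacities assumed here
    variance = np.mean((attendance - capacity) ** 2)
    return attendance, variance

# usage sketch:
# G = nx.convert_node_labels_to_integers(nx.barabasi_albert_graph(1000, 3))
# _, var = simulate_pinned_mg(G, pin_fraction=0.1, eta_plus=0.7)
```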
the variance versus for different values of and different degree are shown in fig . [ fig:1 ] . we see that , in general , systems with larger values of exhibit larger variance , implying that a larger deviation of from the ratio of the capacity can lead to lower efficiency in resource allocation . surprisingly , there exists a universal optimal pinning fraction ( denoted by ) about , where the variance is minimized and exhibits an opposite trend for , i.e. , larger values of result in smaller values of . the implication is that , deviations of from provide an opportunity to achieve better performance ( with smaller variances ) , due to the non - monotonic behavior of with . to understand the emergence of the optimal pinning fraction , we see from fig . [ fig:1 ] that the values of are approximately identical for different values of , which decrease with the average degree . as we will see below , in the large degree limit , the value of can be predicted theoretically ( c.f . , fig . [ fig:4 ] ) . simulations using scale - free networks of different degrees of heterogeneity also indicate the existence of the universal optimal pinning control strategy , as can be seen from the behaviors of the variance calculated from scale - free networks of different degree exponents ( fig . [ fig:2 ] ) , where smaller values of point to a stronger degree of heterogeneity of the system . we see that an optimal value of exists for all cases , which decreases only slightly with , i.e. , more heterogeneous networks exhibit larger values of the optimal pinning fraction , a phenomenon that can also be predicated theoretically ( c.f . , fig . [ fig:5 ] ) . + + * theoretical analysis * + + the phenomenon of the existence of a universal optimal pinning fraction , independent of the specific values of pinning pattern indicator , is remarkable . here we develop a quantitative theory to explain this phenomenon . to begin , we note that mg is effectively a stochastic dynamical process due to the randomness in the selection of strategies by the agents . the variance of the system , a measure of the efficiency of the system , is determined by two separated factors . the first , denoted as , is the intrinsic fluctuations of about its expected value , defined as , which can be calculated once the stable distribution of attendance is known , where can be obtained either analytically ( c.f . , fig . [ fig:3 ] ) or numerically . the second factor , denoted as , is the difference of the expected value from the capacity of the system : , which leads to a constant contribution to the variance of the system . taking into account the two factors , we can write the system variance [ defined in eq . ( [ eq : sigma ] ) ] as a direct summation of the two factors and . in contrast to the simplified case discussed in the previous works , the expected value in the dynamical process is not necessarily equal to the capacity . nonzero values of are a result of biased pinning pattern ( ) or improper response to the capacities of resources . in addition , recent studies of flux - fluctuation law in complex systems also found that , the variance of the system is determined by the two factors : the intrinsic fluctuation and the systematic external drives . [ [ stable - distribution - of - attendance . ] ] * stable distribution of attendance . 
* + + + + + + + + + + + + + + + + to quantify the process of biased pinning control , we derive a discrete - time master equation and then discuss the effect of network topology on control . [ [ discrete - time - master - equation - for - biased - pinning - control . ] ] discrete - time master equation for biased pinning control . + + + + + + + + + + + + + + + + to understand the response of the boolean dynamics to pinning control with varied values of the pinning pattern indicator , we generalize our previously developed analysis . let the neighbor - pinning probability be the probability for a _ neighbor _ of a given _ free _ agent to be pinned , so that its complement is the probability of encountering a free neighbor . the transition probability of the system from one attendance value to another can be expressed in terms of these quantities . in particular , note that the state transition is due to updating of the free agents , as the remaining agents are fixed . to simplify the notation , we introduce shorthand for these probabilities ; the resulting expression is the probability for a free agent to choose , with the first and second terms representing the contributions of the pinned and free neighbors , respectively . in the boolean system , the attendance oscillates about its equilibrium value . denoting by $n_p$ and $n_f$ the numbers of pinned and free agents , by $p_p$ and $p_f$ the probabilities that a neighbor is pinned or free , and by $\eta_{\pm}$ the pinning pattern indicators , the transition probability from attendance $k$ at one time step to attendance $j$ at the next can be written as $$ t_{k\to j } = \binom{n_f}{\,j - n_p\eta_{+}\,}\left ( p_{p}\eta_{-}+p_{f}\,\frac{n_f - k+n_p\eta_{+}}{n_f}\right)^{j - n_{p}\eta_{+ } } \left ( p_{p}\eta_{+}+p_{f}\,\frac{k - n_p\eta_{+}}{n_f}\right)^{n_{f}-(j - n_{p}\eta_{+ } ) } . $$ equation ( [ eq : t20pin2 ] ) takes into account the effect of pinning patterns , which was ignored previously . the resulting balance equation governing the dynamics of the markov chain is the discrete - time master equation . the stable state that the system evolves into can be written in matrix form , where the transition matrix has the elements above and the state vector collects the probabilities of the attendance values . the probability distribution is a binomial function with various expectation values , as shown in fig . [ fig:3 ] . in addition , the probability is zero outside the attendance range that remains attainable once the pinned agents are fixed . together with a nonzero denominator , this requires that the corresponding function for scale - free networks , as in eq . ( [ eq : pfp_rho ] ) , increase monotonically . figure [ fig:6](a ) displays the two curves , i.e. , both sides of eq . ( [ eq : optimal_rho ] ) . the existence of a nonzero solution imposes a condition on these curves ; for scale - free networks the relevant quantity diverges , so equation ( [ eq : cont_net_conditon ] ) holds , implying that the dpp pinning scheme has a nonzero optimal pinning fraction . however , for homogeneous networks , eq . ( [ eq : cont_net_conditon ] ) may not hold . in this case , a more specific implicit condition can be obtained from eq . ( [ eq : cont_net_conditon ] ) through the following analysis . in particular , without an analytical expression , the relevant derivative can be obtained from eqs . ( [ eq : rhopk ] ) and ( [ eq : rho_k_pk ] ) : for degree - preferential pinning , in the appropriate limit , the maximum degree among the _ free _ agents enters the condition , which requires that the network be heterogeneous . when this condition is met , the existence of a nonzero optimal pinning fraction is ensured . the contour map of the optimal pinning fraction in the corresponding parameter space for scale - free networks is shown in fig . [ fig:6](b ) . the boundary associated with condition eq .
( [ eq : re_func_condition ] ) is represented by the white dashed line , where nonzero solutions of do not exist below the lower - left region . figures [ fig:6](c ) and [ fig:6](d ) show for as a function of for scale - free and random networks , respectively , where is varied and is fixed to . the theoretical prediction of [ red solid curve in ( c ) and red open circle in ( d ) ] is given by the intersections of the curves and in fig . [ fig:6](a ) . for scale - free networks , since eq . ( [ eq : cont_net_conditon ] ) holds , eq . ( [ eq : re_func_condition ] ) is the only constraint on the value of ( red dashed arrow ) , with the region at the right - hand side yielding nonzero solutions . the red solid curve in fig . [ fig:6](c ) represents the theoretical prediction , and the open squares denote the simulation results from scale - free networks of size , power - law exponent , and average degree . for random networks , the existence of nonzero solutions requires that eqs . ( [ eq : re_func_condition ] ) and ( [ eq : disc_net_condition ] ) or ( [ eq : disc_net_condition2 ] ) hold . for the poisson degree distribution , the maximum degree of the network can be calculated from we can obtain an estimate of the value of that satisfies eq . ( [ eq : disc_net_condition2 ] ) , as indicated by the blue arrow ( labeled as boundary 2 ) in fig . [ fig:6](d ) . the right - hand side of this point satisfies both eqs . ( [ eq : re_func_condition ] ) and ( [ eq : disc_net_condition2 ] ) , implying the existence of nonzero . comparison of the results from random and scale - free networks with different scaling exponents ( figs . [ fig:2 ] , [ fig:5 ] and [ fig:6 ] ) shows that , stronger heterogeneity tends to enhance the values of , which can also be seen from eq . ( [ eq : cont_net_conditon ] ) . to better understand the non - monotonic behavior of with , we provide a physical picture of the behavioral change for greater or less than . the effect of pinning control is determined by the number of edges between pinned and free agents , which are _ pinning - free edges_. for a small pinning fraction , the average effect per pinned agent on the system ( represented by the number of pinned - free edges per pinned agent ) is relatively large . however , as is increased , the average impact is reduced for two reasons : ( _ a _ ) an increase in the edges within the pinned agents community itself ( i.e. , two connected pinned agents ) , which has no effect on control , and ( _ b _ ) a decrease in the number of free agents , which directly reduces the number of pinned - free edges . consider the special case of and . for small , the pinned agents have a significant impact so that the free agents tend to overestimate the probability of winning by adopting . in this case , the expected value is smaller than , corresponding to . for highly heterogeneous systems , the average impact per pinned agent is larger for a given small value of . as is increased , the average influence per pinned agent reduces and , consequently , restores towards . for and , the system variance [ eq . ( [ eq : sigmaf1f2 ] ) ] is minimized due to , and the corresponding pinning fraction achieves the optimal value . for strongly heterogeneous systems , due to the large initial average impact caused by pinning the hub agents , the optimal pinning fraction appears in the larger region . further increase in with will lead to and , thereby introducing nonzero again and , consequently , generating an increasing trend in . 
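the role of network heterogeneity in the argument above can be made concrete with a small numerical sketch. under degree-preferential pinning, the highest-degree fraction of agents is frozen, so the probability that a randomly chosen neighbor of a free agent is pinned is the degree-weighted mass of the pinned set. the sketch below computes this quantity for a truncated power-law degree distribution; the particular distribution, cutoff, and function names are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def pinned_neighbor_prob(degrees, probs, rho_p):
    """Probability that a random neighbor of a free agent is pinned under DPP.

    degrees, probs : degree values k and their probabilities P(k)
    rho_p          : fraction of agents pinned, taken from the highest degrees down
    Returns the degree-weighted pinned mass  sum_{pinned k} k P(k) / <k>,
    splitting the threshold degree class fractionally when needed.
    """
    order = np.argsort(degrees)[::-1]          # pin from the largest degree down
    k, pk = np.asarray(degrees, float)[order], np.asarray(probs, float)[order]
    mean_k = np.sum(k * pk)
    remaining, weighted = rho_p, 0.0
    for ki, pi in zip(k, pk):
        take = min(pi, remaining)              # fraction of this degree class that is pinned
        weighted += ki * take
        remaining -= take
        if remaining <= 0:
            break
    return weighted / mean_k

# illustrative truncated power-law degree distribution P(k) ~ k^(-gamma)
gamma, k_min, k_max = 3.0, 4, 1000
k = np.arange(k_min, k_max + 1)
pk = k.astype(float) ** (-gamma)
pk /= pk.sum()

for rho_p in [0.01, 0.05, 0.1, 0.2]:
    print(rho_p, pinned_neighbor_prob(k, pk, rho_p))
# more heterogeneous networks (smaller gamma) give a much larger pinned-neighbor
# probability at the same rho_p, consistent with the hub-pinning argument above.
```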
[ [ collapse - of - variance . ] ] collapse of variance . + + + + + + + + + + + + + + + + for certain networks , the variance is determined by the values of the pinning pattern indicator and the pinning fraction . our analysis so far focuses on the contribution to the variance as the pinning fraction is increased while the pattern indicator is held fixed . it is thus useful to define a quantity related to the variance that can be expressed in the form of separated variables . for two different values of the pinning pattern indicator , $\eta_{+}$ and $\eta_{+}^{\prime}$ , and a given value of the pinning fraction , the relative weight of the systematic component can be obtained from eq . ( [ eq : deltap ] ) as $$ \left[\frac{\delta p(\eta_{+})}{\delta p(\eta_{+}^{\prime})}\right]^{2 } = \left[\frac{f_{+|-}(\eta_{+}-1)+f_{-|+}\,\eta_{+}}{f_{+|-}(\eta_{+}^{\prime}-1)+f_{-|+}\,\eta_{+}^{\prime}}\right]^{2 } , $$ where the ratio is a function of both indicator values . remarkably , the ratio depends on $\eta_{+}$ and $\eta_{+}^{\prime}$ but is independent of the pinning fraction , due to the separated - variable form of eq . ( [ eq : deltap ] ) . from this simple relationship , eq . ( [ eq : lambda ] ) , we can define the relative changes in these quantities due to an increase of the indicator from a _ reference value _ , and then obtain the associated change rate , which is independent of the pinning fraction . in the limit , the rate of change becomes a ratio of logarithmic derivatives with respect to $\eta_{+}^{\prime}$ , with denominator $d \ln [ x_{2 } ( \varepsilon(\eta_{+}^{\prime}))]/d \eta_{+}^{\prime}$ . figure [ fig:7 ] shows this quantity as a function of the pinning fraction for scale - free networks , where the value of the reference pinning pattern indicator is fixed . to obtain the values , we first calculate the indicator - dependent factor by substituting the relevant parameters and the elements of the response matrix into eqs . ( [ eq : lambda ] ) and ( [ eq : omega ] ) . we then obtain the plotted quantity from eq . ( [ eq : pi1 ] ) , with the variance taken either from simulation as in figs . [ fig:1 ] and [ fig:2 ] or from the theoretical analysis as in fig . [ fig:5 ] . we see that the values from the simulation results [ figs . [ fig:7](a - c ) , marked `` simulation results '' ] and from the theoretical prediction [ figs . [ fig:7](d - f ) , marked `` theoretical results '' ] show the same behavior : the curves for different values of the pattern indicator collapse onto a single one . this indicates that the quantity depends solely on the pinning fraction ; it is independent of the value of the pinning pattern indicator . extensive simulations and analysis of scale - free networks with different average degree or different degree exponent verify the generality of the collapsing behavior . from eq . ( [ eq : kappa2 ] ) , we see that the variance and this quantity are closely related . for example , a smaller value indicates that the systematic component contributes more to the variance as the indicator is changed , and vice versa . in fig . [ fig:7 ] , particular values correspond to the intersecting points of the variance curves for different indicator values shown in figs . [ fig:1 ] , [ fig:2 ] , and [ fig:5 ] . it can also be verified analytically that the minimal point coincides with the optimal pinning fraction at which the variance is minimized , which is supported by the simulation results in figs . [ fig:7 ] , [ fig:1 ] , [ fig:2 ] , and [ fig:5 ] . [ [ variance - in - the - form - of - separated - variables . ] ] variance in the form of separated variables . + + + + + + + + + + + + + + + + from eq . ( [ eq : kappa ] ) , for a given value of the reference pinning pattern indicator , we can obtain an expression in the form of separated variables , with one factor independent of the change in the pattern indicator and the other independent of the pinning fraction . the consequence of eq .
( [ eq : pi ] ) is remarkable , since it defines in the parameter space a function in the form of separated variables which , compared with the original quantity , not only simplifies the description but also gives a more intuitive picture of the system behavior . specifically , for the mg dynamics , the influences of the various factors on the variance can be classified into two parts : ( i ) a function that reflects the effects of the pinning fraction and of the network structure among agents ( in terms of the degree distribution , the average degree , and the scaling exponent ) , and ( ii ) a function that characterizes the impact of the pinning pattern indicator and of the agents' response to the resource capacities . figures [ fig:8](a ) and [ fig:8](b ) show the values of the first function versus the pinning fraction for two parameter settings , whereas fig . [ fig:8](c ) shows the second function for several parameter values . from eqs . ( [ eq : lambda ] ) and ( [ eq : omega ] ) , we see that the latter is a quadratic function of the pattern indicator , whose symmetry axis depends on the setting of the response function and whose second derivative depends on the remaining parameters . from the definition in eq . ( [ eq : pi1 ] ) , the variance of the system for arbitrary values of the pattern indicator and pinning fraction can be obtained as a multiplicative rescaling of a reference curve , $\sigma^{2}(\eta_{+},\rho_{p } ) = [ \,\cdots\ , ] \cdot\sigma^{2}(\eta_{+}^{\prime},\rho_{p})$ [ eq . ( [ eq : sigma2 ] ) ] , where $\eta_{+}^{\prime}$ specifies the reference pinning pattern . once we have the two respective curves for two specific pinning patterns , the variance in the whole parameter space can be calculated accordingly . in this sense , the two functions serve as a _ holographic _ representation of the dynamical behavior of the system in the whole parameter space : one first obtains the pattern - dependent factor from eqs . ( [ eq : deltap ] ) and ( [ eq : omega ] ) , then calculates the fraction - dependent factor , and finally obtains the variance by substituting both into eq . ( [ eq : sigma2 ] ) . + + * discussions * + + the phenomenon of herding is ubiquitous in social and economic systems . especially in systems that involve and/or rely on fair resource allocation , the emergence of herding behavior is undesirable , as in such a state a vast majority of the individuals in the system share only a few resources , a precursor of system collapse on a global scale . a generic manifestation of herding behavior is relatively large fluctuations in the dynamical variables of the system , such as the numbers of individuals sharing certain resources . it is thus desirable to develop effective control strategies to suppress herding . an existing and powerful mathematical framework to model and understand the herding behavior is the minority game . investigating control of herding in the mg framework may provide useful insights into developing more realistic control strategies for real - world systems . building on our previous work on mg systems , in this paper we articulate , test , and analyze a general pinning strategy to control herding behavior in mg systems . a striking finding is the universal existence of an optimal pinning fraction that minimizes the fluctuations in the system , regardless of system details such as the degree of homogeneity of the resource capacities , the topology and structure of the underlying network , and the pattern of pinning . this means that , generally , the efficiency of the system can be optimized for some relatively small pinning fraction .
employing the mean - field approach , we develop a detailed theory to understand and predict the dynamics of the mg system subject to pinning control , for various network topologies and pinning schemes . the key observation underlying our theory is that two factors contribute to the system fluctuations : intrinsic dynamical fluctuations and the systematic deviation of the agents' expected attendance from the resource capacity . the theoretically predicted fluctuations ( quantified by the system variance ) agree with those from direct simulation . in particular , in the large - degree limit , for a variety of combinations of the network and pinning parameters , the numerical results approach those predicted from our mean - field theory . our theory also correctly predicts the optimal pinning fraction for various system and control settings . in real - world systems in which resource allocation is an important component , resource capacities and agent interactions can be diverse and time dependent . developing an mg model to understand the effects of diversity and time dependence on herding dynamics , and exploiting this understanding to design optimal control strategies that suppress or eliminate herding , are open issues at present . furthermore , implementation of pinning control in real systems may be associated with incentive policies that provide compensations or rewards to the pinned agents . how to minimize the optimal pinning fraction then becomes an interesting issue . our results provide insights into these issues , and represent a step toward the goal of designing highly stable and efficient resource allocation systems in modern society and economy .
learning representations that are invariant to irrelevant transformations of the input is an important step towards building recognition systems automatically .invariance is a key property of some cells in the mammalian visual cortex .cells in high - level areas of the visual cortex respond to objects categories , and are invariant to a wide range of variations on the object ( pose , illumination , confirmation , instance , etc ) .the simplest known example of invariant representations in visual cortex are the complex cells of v1 that respond to edges of a given orientation but are activated by a wide range of positions of the edge .many artificial object recognition systems have built - in invariances , such as the translational invariance of convolutional network , or sift descriptors .an important question is how can useful invariant representations of the visual world be learned from unlabeled samples . in this paperwe introduce an algorithm for learning features that are invariant ( or robust ) to common image transformations that typically occur between successive frames of a video or statistically within a single frame . while the method is quite simple , it is also computationally efficient , and possesses provable bounds on the speed of inference .the first component of the model is a layer of sparse coding .sparse coding constructs a dictionary matrix so that input vectors can be represented by a linear combination of a small number of columns of the dictionary matrix .inference of the feature vector representing an input vector is performed by finding the that minimizes the following energy function where is a positive constant .the dictionary matrix is learned by minimizing averaged over a set of training samples , while constraining the columns of to have norm 1 .the first idea of the proposed method is to accumulate sparse feature vectors representing successive frames in a video , or versions of an image that are distorted by transformations that do not affect the nature of its content . where the sum runs over the distorted images .the second idea is to connect a second sparse coding layer on top of the first one that will capture dependencies between components of the accumulated sparse code vector .this second layer models vector using an _ invariant code _ , which is the minimum of the following energy function where denotes the norm of , is a matrix , and is a positive constant controlling the sparsity of . unlike with traditional sparse coding , in this methodthe dictionary matrix interacts _ multiplicatively _ with the input . as in traditional sparse coding , the matrix is trained by gradient descent to minimize the average energy for the optimal over a training set of vectors obtained as stated above .the columns of are constrained to be normalized to 1 .essentially , the matrix will connect a component of to a set of components of if these components of co - occur frequently .when a component of turns on , it has the effect of lowering the coefficients of the components of to which it is strongly connected through the matrix . to put it another way ,if a set of components of often turn on together , the matrix will connect them to a component of . 
turning on this component of the second - layer code will lower the overall energy ( if the corresponding penalty is small enough ) because the whole set of associated first - layer components will see their coefficients being lowered ( the exponential terms ) . hence , each unit of the second layer will connect units of the first layer that often turn on together within a sequence of images . these units will typically represent distorted versions of a feature . the energies ( [ eq_sparse_coding ] ) and ( [ eq_fulle ] ) can be naturally combined into a single model , as explained in section [ sec_model ] . there the second layer essentially modulates the sparsity of the first layer . a single model of the image is more natural . for the invariance properties we did not find much qualitative difference , and since the former has provable inference bounds we present the results for separate training . however , a two - layer model should capture the statistics of an image . to demonstrate this we compared the in - painting capability of one - and two - layer models and found that the two - layer model does a better job . for these experiments , the combined two - layer model is necessary . we also found that , even though the assumptions behind the fast inference are not satisfied for the two - layer model , empirically the inference is fast in this case as well . the first way to implement invariance is to take a known invariance , such as translational invariance in images , and put it directly into the architecture . this has been highly successful in convolutional neural networks and in sift descriptors and their derivatives . the major drawback of this approach is that it works for known invariances , but not for unknown invariances such as invariance to the instance of an object . a system that discovers invariance on its own would be preferable . a second type of invariance implementation is considered in the framework of sparse coding or independent component analysis . the idea is to change the cost function on the hidden units in a way that prefers co - occurring units to be close together in some space . this is achieved by pooling units that are close in space . this groups different inputs together , producing a form of invariance . the drawback of this approach is that it requires some sort of embedding in space and that the filters have to arrange themselves . in the third approach , rather than forcing units to arrange themselves , we let them learn whatever representations they want to learn and instead figure out which ones to pool together . in one such model , this was achieved by modulating the covariance of the simple units with complex units . the fourth approach to invariance uses the following idea : if the inputs follow one another in time , they are likely a consequence of the same cause . we would like to discover that cause and therefore look for representations that are common to all frames . this has been achieved in several ways . in slow feature analysis one forces the representation to change slowly . in the temporal product network one breaks the input into two representations , one that is common to all frames and one that is complementary . a related idea additionally lets the complementary representation specify movement . in the simplest instance of hierarchical temporal memory one forms groups based on the transition matrix between states . another related approach is a structured model of video . many of the approaches for learning invariance are inspired by the fact that the neo - cortex learns to create invariant representations . consequently these approaches are not focused on creating efficient algorithms .
in this paper , we give an efficient learning algorithm that falls into the framework of the third and fourth approaches . the basic idea is to modulate the sparsity of the sparse coding units using higher - level units that are also sparse . the fourth approach is implemented by using the same higher - level representation for several consecutive time frames . in its form our model is similar to earlier work but a little simpler . in a sense , comparing our model to that work is similar to comparing sparse coding to independent component analysis . independent component analysis is a probabilistic model , whereas sparse coding attempts to reconstruct the input in terms of a few active hidden units . the advantage of sparse coding is that it is simpler and easier to optimize . there exist several very efficient inference and learning algorithms , and sparse coding has been applied to a large number of problems . it is this simplicity that allows efficient training of our model . the inference algorithm is closely derived from the fast iterative shrinkage - thresholding algorithm ( fista ) and has a convergence rate that scales with the inverse square of the number of iterations . the model described above comprises two separately trained modules , whose inference is performed separately . however , one can devise a unified model with a single energy function that is conceptually simpler : given a set of inputs , the goal of training is to minimize the total energy . we do this by choosing one input at a time , minimizing ( [ eq_fulle ] ) over the codes with the dictionaries fixed , then fixing the resulting codes and taking a step in the negative gradient direction with respect to the dictionaries ( stochastic gradient descent ) . an algorithm for finding the codes is given in section [ sec_theoretical ] . it consists of taking steps in the two code layers separately , each of which lowers the energy . note : the functions in ( [ eq_fulle ] ) are different from those of the simple ( split ) model . the reason is that , in our experiments , either the invariant units lower the sparsity of the first layer too much , not resulting in a sparse code , or the units do not turn on at all . we now describe a toy example that illustrates the main idea of the model . the input is an image patch consisting of a subset of a set of parallel lines of four different orientations and ten different positions per orientation . however , for any given input , only lines with the same orientation can be present , figure [ fig_lines]a ( different orientations have equal probability , and for a given orientation a line of that orientation is present with probability 0.2 , independently of the others ) . this is a toy example of a texture . training sparse coding on this input results in filters similar to those in figure [ fig_lines]b . we see that a given simple unit responds to one particular line . the noisy filters correspond to simple units that are inactive - this happens because there are only 40 discrete inputs .
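the toy data set can be reproduced with a few lines of code. the sketch below generates small patches containing parallel lines of one of four orientations, each candidate line present independently with probability 0.2; the patch size and the choice of the two diagonals as the remaining orientations are assumptions, since the text only specifies four orientations with ten positions each.

```python
import numpy as np

def toy_line_patch(size=10, p_line=0.2, rng=None):
    """Generate one toy patch: parallel lines of a single, randomly chosen orientation.

    Orientations are horizontal, vertical, and the two diagonals (an assumption).
    Each of the ten candidate lines of the chosen orientation is drawn
    independently with probability p_line.
    """
    rng = rng or np.random.default_rng()
    patch = np.zeros((size, size))
    orientation = rng.integers(4)
    positions = np.nonzero(rng.random(size) < p_line)[0]
    for p in positions:
        if orientation == 0:        # horizontal line at row p
            patch[p, :] = 1.0
        elif orientation == 1:      # vertical line at column p
            patch[:, p] = 1.0
        elif orientation == 2:      # main-diagonal direction, offset from the center
            patch += np.eye(size, k=p - size // 2)
        else:                       # anti-diagonal direction
            patch += np.fliplr(np.eye(size, k=p - size // 2))
    return np.clip(patch, 0.0, 1.0)

# a training set for the toy experiment:
# X = np.stack([toy_line_patch().ravel() for _ in range(50000)])
```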
in realistic data such as natural images, we have a continuum and typically all units are used .clearly , sparse coding can not capture all the statistics present in the data .the simple units are not independent .we would like to learn that that units corresponding to lines of a given orientation usually turn on simultaneously .we trained ( [ eq_fulle ] ) on this data resulting in the filters in the figure [ fig_lines]b , c .the filters of the simple units of this full model are similar to those obtained by training just the sparse coding .the invariant units pool together simple units with filters corresponding to lines of the same orientation .this makes the invariant units invariant to the pattern of lines and dependent only on the orientation .only four invariant units were active corresponding to the four groups .as in sparse coding , on a realistic data such as natural images , all invariant units become active and distribute themselves with overlapping filters as we will se below . ) .a ) randomly selected input image patches .the input patches are generated as follows .pick one of the four orientations at random .consider all lines of this orientation .put any such line into the image independently with probability 0.2 .b ) learned sparse coding filters .a given active unit responds to a particular line .c ) learned filters of the invariant units .each row corresponds to an invariant unit .the sparse coding filters are ordered according to the strength of their connection to the invariant unit .there are only four active units ( arrows ) and each responds to a given orientation , invariant to which lines of a given orientation are present . ]let us now discuss the motivation behind introducing a sequence of inputs ( ) in ( [ eq_fulle ] ) .inputs that follow one another in time are usually a consequence of the same cause .we would like to discover that cause .this cause is something that is present at all frames and therefore we are looking for a single representation in ( [ eq_fulle ] ) that is common to all the frames .another interesting point about the model ( [ eq_fulle ] ) is a that nonzero lowers the sparsity coefficient of units of that belong to a group making them more likely to become activated .this means that the model can utilize higher level information ( which group is present ) to modulate the activity of the lower layer .this is a desirable property for multi - layer systems because different parts of the system should propagate their belief to other parts . in ourinvariance experiments the results for the unified model were very similar to the results of the simple ( split ) model .below we show the results of this simple model because it is simple and because we provably know an efficient inference algorithm .however in the section [ sec_theoretical ] we will revisit the full system , generalize it to an -layer system , give an algorithm for training it , and prove that under some assumptions of convexity , the algorithm again has a provably efficient inference . 
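as a concrete reading of the two-layer model just described, the sketch below evaluates one plausible form of the combined energy ( [ eq_fulle ] ), in which the invariant code multiplicatively modulates the first-layer sparsity penalties through exponential terms and a single invariant code is shared by all frames of a short sequence. the exact functional form, the l1 penalty on the invariant code, and all parameter names are assumptions reconstructed from the surrounding description, not the paper's verbatim equation.

```python
import numpy as np

def two_layer_energy(frames, codes, h, W, D, lam=0.5, beta=0.3):
    """Evaluate an assumed form of the combined two-layer energy (cf. eq. (eq_fulle)).

    frames : list of input vectors x_t from a short sequence
    codes  : list of first-layer sparse codes z_t (one per frame)
    h      : invariant code shared by all frames
    W, D   : first-layer dictionary and second-layer grouping matrix
    The invariant units lower the effective sparsity weight of the first-layer
    units they connect to, via the factor exp(-(D h)_i).
    """
    modulation = np.exp(-D @ h)              # per-unit sparsity modulation by the second layer
    energy = beta * np.sum(np.abs(h))        # sparsity penalty on the invariant code
    for x, z in zip(frames, codes):
        recon = x - W @ z                    # first-layer reconstruction error
        energy += 0.5 * np.dot(recon, recon)
        energy += lam * np.sum(np.abs(z) * modulation)
    return energy
```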
in the final section we use the full system for in - paining and show that it generalizes better then a single layer system .here we discuss how to find efficiently and give the numerical results of the paper .the results for the full model ( [ eq_fulle ] ) were similar .the advantage of ( [ eq_ei_gh ] ) compared to ( [ eq_fulle ] ) is that the fast iterative shrinkage - thresholding algorithm ( fista ) applies to it directly .fista applies to problems of the form where : * is continuously differentiable , convex and lipschitz , that is .the is the lipschitz constant of .* is continuous , convex and possibly non - smooth the problem is assumed to have solution . in our case and which satisfies these assumptions ( is initialized with nonnegative entries which stay nonnegative during the algorithm without a need to force it ) .this solution converges with bound where is the value of at the - iteration and is a constant .the cost of each iteration is where is the input size and is the output size .more precisely the cost is one matrix multiplications by and by plus cost .we used the back - tracking version of the algorithm to find which contains a fixed number of operations ( independent of desired error ) .it is a standard knowledge and easy to see that the algorithm applies to the sparse coding ( [ eq_sparse_coding ] ) as well .the input to the network was prepared as follows .we converted all the images of the berkeley data - set into gray - scale images .we locally removed the mean for each pixel by subtracting a gaussian - weighted average of the nearby pixels .the width of the gaussian was pixels .then , we locally normalized the contrast by dividing each pixel by gaussian - weighted standard deviation of the nearby pixels ( with a small cutoff to prevent blow - ups ) .the width of the gaussian was also pixels .then , we picked a window in the image and , for a randomly chosen direction and magnitude , we moved it for frames and extracted them .the magnitude of the displacement was random in the range of pixels .a very large collection of such triplets of frames was extracted .we trained the sparse coding algorithm with code units in on each individual frame ( not on the concatenated frames ) .after training we found the sparse codes for each frame .there were units in the layer of invariant units . for larger a system with simple units and invariant units ,see the supplementary material .are equivalent ) .we see that invariant units typically learn to group together units with similar orientation and frequency .there are few other types of filters as well .the units in the middle and right panel correspond to each other and correspond to the units in the left panel reading panels left to right and then down . see the supplementary material for all the filters the system : input patches , simple units , invariant units . ]the results are shown in the figure [ fig_rfu ] , see caption for description .we see that many invariant cells learn to group together filters of similar orientation and frequency but at several positions and thus learn invariance with respect to translations .however there are other types of filters as well .remember that the algorithm learns statistical co - occurrence between features , whether in time or in space . 
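the inference step used throughout these experiments is the standard fista iteration for a smooth term plus a non-smooth term. the sketch below is a generic implementation with a fixed step size (the backtracking search mentioned in the text is omitted for brevity); the function names and the constant-step assumption are ours, and the commented usage shows how it specializes to the first-layer sparse coding problem under the usual quadratic-plus-l1 form.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=100):
    """Generic FISTA loop for min_x f(x) + g(x), with f smooth and L-Lipschitz gradient.

    grad_f : callable returning the gradient of the smooth term f
    prox_g : callable prox_g(v, t) = argmin_u g(u) + (1/(2t))*||u - v||^2
    x0     : starting point; L : Lipschitz constant (fixed step 1/L)
    """
    x = y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation step
        x, t = x_new, t_new
    return x

# first-layer usage sketch (sparse coding, assuming the quadratic + l1 form):
# grad_f = lambda z: W.T @ (W @ z - x)
# prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
# z = fista(grad_f, prox_g, np.zeros(W.shape[1]), L=np.linalg.norm(W, 2) ** 2)
```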
) .left panel are responses of simple units trained with sparsity in ( [ eq_sparse_coding ] ) .the right four panels are responses of invariant units trained with sparsities in ( 3 ) on the values of simple units .the x - axis of each panel is the distance of the edge from the center of the image - the in ( [ eq_edge ] ) .the y - axis is the orientation of each edge - the in ( [ eq_edge ] ) .30 cells were chosen at random in each panel .different colors correspond to different cells .the color intensity is proportional to the response of the unit .we see that sparse coding inputs respond to a small range of frequencies and positions ( the elongated shape is due to the fact that an edge of orientation somewhat different from the edge detector orientation sweeps the detector at different positions ) . on the other hand invariant cellsrespond to edges of at similar range of frequencies but larger range of positions . at high sparsitiesthe response boundaries are sharp and response regions do nt overlap .as we lower the sparsity the boundaries become more blurry and regions start to overlap . was used in ( [ eq_edge ] ) .other frequencies produced similar effect . ]the values of the weights give us important information about the properties of the system .however ultimately we are interested in how the system responds to an input .we study the response of these units to a commonly occurring input - an edge .specifically the inputs are given by the following function . where is the position of a pixel from the center of a patch , a real number specifying distance of the edge from the center and is the orientation of the edge from the axis .this is not an edge function , but a function obtained on an edge after local mean subtraction .the responses of the simple units and the invariant units are shown in the figure .[ fig_responses ] , see caption for description . as expected the sparse coding units respond to edges in a narrow range of positions and relatively narrow range of orientations .invariant cells on the other hand are able to pool different sparse coding units together and become invariant to a larger range of positions .thus the invariant units do indeed have the desired invariance properties .note that for large sparsities the regions have clear boundaries and are quite sharp .this is similar to the standard implementation of convolutional net , where the pooling regions are squares ( with clear boundaries ) .it is probably more preferable to have regions that overlap as happens at lower sparsities since one would prefer smoother responses rather then jumps across boundaries .in this section we return to the full model ( [ eq_fulle ] ) .we generalize it to an layer system , give an inference algorithm and outline the proof that under certain assumptions of convexity the algorithm has the fast convergence of fista , there is the iteration number .the basic idea of minimizing over in ( [ eq_fulle ] ) is to alternate between taking energy - lowering step in while fixing and taking energy - lowering step in while fixing .note that both of the restricted problems ( problem in fixing and problem in fixing ) satisfy conditions of the fista algorithm .this will allow us to take steps of appropriate size that are guaranteed to lower the total energy .before that however , we generalize the problem , which will reveal its structure and which does not introduce any additional complexity . 
consider system consisting of layers with units in the - layer with define that is all the vectors concatenated .we define two sets of functions .let be continuously differentiable , convex and lipschitz functions .there can be several such functions per layer , which is denoted by index .let be continuous and convex functions , not necessarily smooth . for conveniencewe define , , and .we define the energy of the system to be where in the second equality we drop the from the notation for simplicity .we will omit writing the for the rest of the paper .the equation ( [ eq_fulle ] ) is a special case of ( [ eq_hierarchy ] ) with , , , , , the problem in keeping other variables fixed satisfies the conditions of the fista algorithm .we can define a step in the variable to be ( in analogy to eq .( 2.6 ) ) : where the later equality holds if . heresh is the shrinkage function . in the case where is restricted to benonnegative we need to use instead of the shrinkage function .let us describe the algorithm for minimizing ( [ eq_hierarchy ] ) with respect to ( we will write it explicitly below ) .in the order from to , take the step in ( ) .repeat until desired accuracy .the s have to be chosen so that this can be assured by taking where the later denotes the lipschitz constant of its argument .otherwise , as used in our simulations , it can be found by backtracking , see below .this will assure that each steps lowers the total energy ( [ eq_hierarchy ] ) and hence the overall procedure will keep lowering it . in factthe step with such chosen is in some sense a step with ideal step size .let us now write the algorithm explicitly : * hierarchical ( f)ista . *take , some and .set , , .+ step k. ( ) .+ loop a=1:n \ { backtracking \ { + .find smallest nonnegative integer such that with set } + .compute the algorithm described above is this algorithm with the choice in the second last line .let us discuss the s . for single layer system ( ) choice is called ista and has convergence bound of .the other choice of is the fista algorithm .the convergence rate of fista is much better than that of ista , having in the denominator . for hierarchical system , the choice guarantees that each step lowers the energy .the question is whether introducing the other choice of would speed up convergence to the fista convergence rate .the trouble is that the in general the product is non - convex , which is the case for ( [ eq_fulle ] ) .for example we can readily see that if the function has more then one local minima , this convergence would certainly not be guaranteed ( imagine starting at a non - minimal point with zero derivative ) .the effect of is that of momentum and this momentum increases towards one as increases .with such a large momentum the system is in a danger of `` running around '' .it might be effective to introduce this momentum but to regularize it ( say bound it by a number smaller then one ) . in any case one can always use the algorithm with . in the special cases when all s are convex however , we give an outline of the proof that the algorithm converges with the fista rate of convergence . 
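before turning to the convergence proof, a minimal rendering of the layer-by-layer update loop just described is sketched below, using the safe choice of taking plain (ista-style) steps in each layer. it treats each layer's smooth part through a user-supplied gradient and applies one shrinkage step per layer in order; the fixed per-layer step sizes stand in for the backtracking search in the text, and all names are illustrative rather than the paper's notation.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding; use np.maximum(v - t, 0.0) instead when the code is nonnegative."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hierarchical_ista(x_layers, grad_layers, l1_weights, step_sizes, n_sweeps=100):
    """Block-coordinate ISTA over layers 1..n (the gamma = 1 variant of the text).

    x_layers    : list of layer variables [x1, ..., xn]
    grad_layers : grad_layers[a](x_layers) -> gradient of the total smooth energy w.r.t. layer a
    l1_weights  : l1 penalty weight per layer; step_sizes : 1/L_a per layer
    Each sweep takes one energy-lowering shrinkage step in every layer, in order.
    """
    for _ in range(n_sweeps):
        for a in range(len(x_layers)):
            g = grad_layers[a](x_layers)                 # gradient with the other layers held fixed
            step = step_sizes[a]
            x_layers[a] = shrink(x_layers[a] - step * g, step * l1_weights[a])
    return x_layers
```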
for this purpose we define the full step in the layer variables to be the result of the sequence of steps in eq . ( [ eq_stepa ] ) taken over all layers in order . we assume that all the step - size constants are the same ( this is always possible by setting them all equal to the largest value ) . the core of the proof is to show the analog of lemma 2.3 of : * lemma 2.3 : * assume that the smooth part is continuously differentiable and lipschitz , the non - smooth part is continuous and convex , and the full step is defined by the sequence of steps of ( [ eq_stepa ] ) as described above ; then the corresponding inequality holds for any pair of points . the proof in shows that if the algorithm consists of applying a sequence of such steps and these steps satisfy lemma 2.3 , then the algorithm converges with the fista rate . thus we need to prove lemma 2.3 . we start with the analog of lemma 2.2 of : * lemma 2.2 : * for any point , the step takes a given value if and only if there exists an element of the subdifferential of the non - smooth part such that the corresponding optimality condition holds . this lemma follows trivially from the definition of the step , as in . * proof of lemma 2.3 : * from convexity we have the standard inequalities , and we also have the property ( [ eq_pproperty ] ) ; however , the later layers should be primed because they have already been updated . due to space limitations we will not write out all the calculations but only specify the sequence of operations ; the details are written out in the supplementary material . we take the first term on the left side of ( [ eq_lemma23 ] ) and express it in terms of ( [ eq_hierarchy ] ) . then we replace the terms using the convexity inequalities and substitute using lemma 2.2 . then we take the second term on the left side of ( [ eq_lemma23 ] ) , again express it using ( [ eq_hierarchy ] ) , and use the inequalities ( [ eq_pproperty ] ) . putting it all together , all the gradient terms cancel and the remaining terms combine to give lemma 2.3 . this completes the proof . we introduced a simple and efficient algorithm for learning invariant representations from unlabeled data . the method takes advantage of temporal consistency in sequential image data . in the future we plan to use the invariant features discovered by the method in hierarchical vision architectures and to apply them to recognition problems . supplementary material : 1 ) we give details of the proof of lemma 2.3 . 2 ) we show all of the invariant filters for a system with 20x20 input patches , 1000 simple units , and 400 invariant units . * lemma 2.3 : * assume that the smooth part is continuously differentiable and lipschitz , the non - smooth part is continuous and convex , and the full step is defined by the sequence of steps in the paper ; then the inequality holds for any pair of points . * proof of lemma 2.3 : * we first collect the inequalities that we will need : the convexity inequalities , the property for each step that guarantees that the energy is lowered , and lemma 2.2 . now we can put these equations together . the steps are : write out the left side of ( [ eq_lemma23 ] ) in terms of the definition of the energy ; use the inequalities ( [ eq_convex ] ) and ( [ eq_pproperty ] ) ; eliminate terms using ( [ eq_gamma ] ) ; simplify . note that in line 5 and in the first term of line 6 of the calculation the index is shifted by one . this completes the proof . simple unit and invariant unit filters ( alpha = 0.5 , beta = 0.3 ) .
we propose a simple and efficient algorithm for learning sparse invariant representations from unlabeled data with fast inference . when trained on short movies sequences , the learned features are selective to a range of orientations and spatial frequencies , but robust to a wide range of positions , similar to complex cells in the primary visual cortex . we give a hierarchical version of the algorithm , and give guarantees of fast convergence under certain conditions .
organizations including the census bureau , medical establishments , and internet companies collect and publish statistical information .the census bureau may , for instance , publish the result of a query such as : `` how many individuals have incomes that exceed ] , a randomized mechanism is -differentially private if changing a row of the underlying database the data of a single individual changes the probability of every mechanism output by at most an factor .larger values of correspond to greater levels of privacy .differential privacy is typically achieved by adding noise that scales with .while it is trivially possible to achieve any level of differential privacy , for instance by always returning random noise , this completely defeats the original purpose of providing useful information . on the other hand ,returning fully accurate results can lead to privacy disclosures . _the goal of this paper is to identify , for each ], a mechanism is _ -differentially private _ if the ratio lies in the interval ] for every possible output and query result .the _ - geometric mechanism _ is defined as follows .when the true query result is , the mechanism outputs . is a random variable distributed as a two - sided geometric distribution : =\frac{1-\alpha}{1+\alpha } \alpha^{|z|} ] , _ the -geometric mechanism is simultaneously optimal for every rational user . _[ thm : main ] let denote the - geometric mechanism for some database size and privacy level $ ] , and let denote an optimal remap of for the user with prior and ( monotone ) loss function . then minimizes s expected loss over all oblivious , -differentially private mechanisms with range .this is an extremely strong utility - maximization guarantee : _ every _ potential user , no matter what its side information and preferences , derives as much utility from the geometric mechanism as it does from interacting with a differentially private mechanism that is optimally tailored to .we reiterate that the prior from the utility model plays no role in the definition of privacy , which is the standard , worst - case ( over adversaries with arbitrary side - information and intent ) guarantee provided by differential privacy .we emphasize that while the geometric mechanism is user - independent ( all users see the same distribution over responses ) , different users remap its responses in different ways , as informed by their individual prior distributions and loss functions .rephrasing theorem [ thm : main ] , for every user there is an optimal mechanism for it that factors into a user - independent part the -geometric mechanism and a user - specific computation that can be delegated to the user .( see figure [ fig : factor ] . )theorem [ thm : main ] shows how to achieve the same utility as a user - specific optimal mechanism without directly implementing one .direct user - specific optimization would clearly involve several challenges .first , it would require advance knowledge or elicitation of user preferences , which we expect is impractical in most applications . andeven if a mechanism was privy to the various preferences of its users , it would effectively need to answer the same query in different ways for different users , which in turn degrades its differential privacy guarantee . in theorem [ thm : main ], the restriction to oblivious mechanisms is , in a precise sense , without loss of generality .( see section [ sec : oblivious ] . 
)the restriction to the range effectively requires that the mechanism output is a legitimate query result for some database ; this type of property is called `` consistency '' in the literature ( e.g. ) .differential privacy is motivated in part by the provable impossibility of absolute privacy against attackers with arbitrary side information .one interpretation of differential privacy is : no matter what prior distribution over databases a potential attacker has , its posterior after interacting with a differentially private mechanism is almost independent of whether a given user `` opted in '' or `` opted out '' of the database .below we discuss the papers in the differential privacy literature closest to the present work ; see for a recent , thorough survey of the state of the field .dinur and nissim showed that for a database with rows , answering randomly chosen subset count queries with error allows an adversary to reconstruct most of the rows of the database ( a blatant privacy breach ) ; see dwork et al . for a more robust impossibility result of the same type .most of the differential privacy literature circumvents these impossibility results by focusing on interactive models where a mechanism supplies answers to only a sub - linear ( in ) number of queries .count queries ( e.g. ) and more general queries ( e.g. ) have been studied from this perspective .blum et al . take a different approach by restricting attention to count queries that lie in a restricted class ; they obtain non - interactive mechanisms that provide simultaneous good accuracy ( in terms of worst - case error ) for all count queries from a class with polynomial vc dimension .kasiviswanathan et al . give further results for privately learning hypotheses from a given class .the use of abstract `` utility functions '' in mcsherry and talwar has a similar flavor to our use of loss functions , though the motivations and goals of their work and ours are unrelated .motivated by pricing problems , mcsherry and talwar design differentially private mechanisms for queries that can have very different values on neighboring databases ( unlike count queries ) ; they do not consider users with side information ( i.e. , priors ) and do not formulate a notion of mechanism optimality ( simultaneous or otherwise ) .finally , in recent and independent work , mcsherry and talwar ( personal communication , october 2008 ) also apply linear programming theory in the analysis of privacy mechanisms . again, their goal is different : they do not consider a general utility model , but instead ask how expected error must scale with the number of queries answered by a differentially private mechanism .this section proves theorem [ thm : main ] .the proof has three high - level steps . 1 . for a given user , we formulate the problem of determining the differentially private mechanism that minimizes expected loss as a solution to a linear program ( lp ) .the objective function of this lp is user - specific , but the feasible region is not .2 . we identify several necessary conditions met by every privacy mechanism that is optimal for some user .3 . for every privacy mechanism that satisfies these conditions ,we construct a remap such that . by assumption, a rational user employs an `` optimal remap '' of , so the mechanism induced by this map must be optimal for the user . fix a database size and a privacy level .theorem [ thm : main ] is trivially true for the degenerate cases of .so , we assume that . 
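as a side note before the proof, the geometric mechanism defined earlier is easy to simulate: two-sided geometric noise with mass proportional to α^{|z|} is the difference of two independent one-sided geometric variables. the sketch below is only an illustration (the function names are ours, and the common convention α = e^{-ε} for a sensitivity-one count query is an assumption, not something stated in this excerpt).

```python
import numpy as np

def two_sided_geometric(alpha, size=None, rng=None):
    """Sample Z with P[Z = z] = (1 - alpha)/(1 + alpha) * alpha**|z|.

    Uses the fact that the difference of two i.i.d. geometric variables
    (success probability 1 - alpha, support {0, 1, 2, ...}) has exactly
    this two-sided geometric distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 - alpha
    g1 = rng.geometric(p, size=size) - 1   # shift support from {1,2,...} to {0,1,...}
    g2 = rng.geometric(p, size=size) - 1
    return g1 - g2

def geometric_mechanism(true_count, alpha, rng=None):
    """Release a count query result under the alpha-geometric mechanism:
    output = true result + two-sided geometric noise."""
    return true_count + two_sided_geometric(alpha, rng=rng)

# toy usage; alpha = exp(-epsilon) is one common convention for sensitivity-one counts
alpha = np.exp(-0.5)
print(geometric_mechanism(42, alpha))
```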
for every fixed user with loss function and prior ,the formulation of privacy constraints in section [ sec : dp ] together with the objective function yields the following lp whose solution is an optimal mechanism for this user . since the lp is bounded and feasible , we have the following ( e.g. ) .[ lem : lp ] every user - specific lp has an optimal solution that is a vertex .for the rest of this section , fix a user with prior and a loss function that is monotone in for every .fix a mechanism that is optimal for this user , and also a vertex of the polytope of the user - specific lp .vertices can be uniquely identified by the set of all constraints that are tight at the vertex .this motivates us to characterize the _ state of constraints _ ( slack or tight ) of mechanisms that are optimal for some user .we now begin the second step of the proof .we will view as a -matrix where rows correspond to query results ( inputs ) and columns correspond to query responses ( outputs ) .we state our necessary conditions in terms of an _ constraint _ matrix associated with the mechanism .row of the constraint matrix corresponds to rows and of the corresponding mechanism .every entry of , for , takes on exactly one of four values . if then . if , then there are three possibilities . if then . if then .otherwise . [ cols="^,^,^,^,^,^,^",options="header " , ] [ f : simmech ] we are now ready to prove theorem [ thm : imposs ] .theoremthm : imposs by the previous lemma and the definitions of and , the mechanism must have the form in figure [ f : simmech ] . because is -differentially private and as and differ in exactly two rows , columns and yield the following inequalities : , and . adding the two inequalities, we have : . because all the entries of are probabilities , we have .so it must be that . by a similar argument applied to the databases } and , we can show that .thus , the probability masses on when the underlying databases are and are and respectively . butthis violates privacy because , the two databases differ in exactly one row but the two probabilities are not within a factor of each other .we proposed a model of user utility , where users are parametrized by a prior ( modeling side information ) and a loss function ( modeling preferences ) .theorem [ thm : main ] shows that for every fixed count query , database size , and level of privacy , there is a single simple mechanism that is simultaneously optimal for all rational users .are analogous results possible for other definitions of privacy , such as the additive variant of differential privacy ( see ) ?is an analogous result possible for other types of queries or for multiple queries at once ? when users have priors over databases ( theorem [ thm : imposs ] ) , are any positive results ( such as simultaneous _ approximation _ ) achievable via a single mechanism ?we thank preston mcafee , john c. mitchell , rajeev motwani , david pennock and the anonymous referees .l. backstrom , c. dwork , and j. kleinberg .wherefore art thou r3579x ? : anonymized social networks , hidden patterns , and structural steganography . in_ proceedings of the 16th international conference on world wide web ( www ) _ , pages 181190 , 2007 .b. barak , k. chaudhuri , c. dwork , s. kale , f. mcsherry , and k. talwar .privacy , accuracy , and consistency too : a holistic solution to contingency table release . in _ proceedings of the 26th acm sigact - sigmod - sigart symposium on principles of database systems ( pods ) _ , pages 273282 , 2007 .a. blum , c. 
dwork , f. mcsherry , and k. nissim .practical privacy : the sulq framework . in _ proceedings of the 24th acm sigact - sigmod - sigart symposium on principles of database systems ( pods ) _ , pages 128138 , 2005 . c. dwork .differential privacy . in _ proceedings of the 33rd annual international colloquium on automata , languages , and programming ( icalp )_ , volume 4051 of _ lecture notes in computer science _ ,pages 112 , 2006 . c. dwork .differential privacy : a survey of results . in _5th international conference on theory and applications of models of computation ( tamc ) _ , volume 4978 of _ lecture notes in computer science _ , pages 119 , 2008 . c. dwork , f. mcsherry , k. nissim , and a. smith . calibrating noise to sensitivity in private data analysis . in _third theory of cryptography conference ( tcc ) _ , volume 3876 of _ lecture notes in computer science _ , pages 265284 , 2006 . c. dwork and k. nissim .privacy - preserving datamining on vertically partitioned databases . in _24th annual international cryptology conference ( crypto ) _ , volume 3152 of _ lecture notes in computer science _ , pages 528544 , 2004 .s. p. kasiviswanathan , h. k. lee , k. nissim , s. raskhodnikova , and a. smith .what can we learn privately ?in _ proceedings of the 49th annual ieee symposium on foundations of computer science ( focs ) _ , pages 531540 , 2008 .
a mechanism for releasing information about a statistical database with sensitive data must resolve a trade - off between utility and privacy . publishing fully accurate information maximizes utility while minimizing privacy , while publishing random noise accomplishes the opposite . privacy can be rigorously quantified using the framework of _ differential privacy _ , which requires that a mechanism s output distribution is nearly the same whether or not a given database row is included or excluded . the goal of this paper is strong and general utility guarantees , subject to differential privacy . we pursue mechanisms that guarantee near - optimal utility to every potential user , independent of its side information ( modeled as a prior distribution over query results ) and preferences ( modeled via a loss function ) . our main result is : for each fixed count query and differential privacy level , there is a _ geometric mechanism _ a discrete variant of the simple and well - studied laplace mechanism that is _ simultaneously expected loss - minimizing _ for every possible user , subject to the differential privacy constraint . this is an extremely strong utility guarantee : _ every _ potential user , no matter what its side information and preferences , derives as much utility from as from interacting with a differentially private mechanism that is optimally tailored to . more precisely , for every user there is an optimal mechanism for it that factors into a user - independent part ( the geometric mechanism ) followed by user - specific post - processing that can be delegated to the user itself . the first part of our proof of this result characterizes the optimal differentially private mechanism for a fixed but arbitrary user in terms of a certain basic feasible solution to a linear program with constraints that encode differential privacy . the second part shows that all of the relevant vertices of this polytope ( ranging over all possible users ) are derivable from the geometric mechanism via suitable remappings of its range .
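as a complement to the linear-programming characterization referred to above, the following sketch shows one plausible way to write down and solve the user-specific lp numerically. the exact constraint structure of the paper's lp is not visible in this excerpt, so the coupling of adjacent query results (as for a sensitivity-one count query) and all names below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_private_mechanism(prior, loss, alpha):
    """Sketch of the user-specific LP: find a row-stochastic matrix
    x[i, r] = P[output r | true count i] minimizing expected loss
    sum_i prior[i] * sum_r x[i, r] * loss[i, r], subject to ratio
    constraints between adjacent counts (our reading of the privacy
    constraints for a sensitivity-one count query)."""
    n_in, n_out = loss.shape
    idx = lambda i, r: i * n_out + r          # flatten (i, r) -> variable index
    c = (prior[:, None] * loss).ravel()       # objective coefficients

    A_ub, b_ub = [], []
    for i in range(n_in - 1):                 # adjacent counts i and i+1
        for r in range(n_out):
            row = np.zeros(n_in * n_out)
            row[idx(i, r)], row[idx(i + 1, r)] = 1.0, -1.0 / alpha
            A_ub.append(row); b_ub.append(0.0)
            row = np.zeros(n_in * n_out)
            row[idx(i + 1, r)], row[idx(i, r)] = 1.0, -1.0 / alpha
            A_ub.append(row); b_ub.append(0.0)

    A_eq = np.zeros((n_in, n_in * n_out))     # each row of x sums to one
    for i in range(n_in):
        A_eq[i, idx(i, 0):idx(i, n_out)] = 1.0
    b_eq = np.ones(n_in)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.x.reshape(n_in, n_out)

# toy user: uniform prior, absolute-value loss, n = 5, alpha = 1/2
n = 5
prior = np.full(n + 1, 1.0 / (n + 1))
loss = np.abs(np.subtract.outer(np.arange(n + 1), np.arange(n + 1))).astype(float)
mech = optimal_private_mechanism(prior, loss, alpha=0.5)
```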
ever since the 1920s , every wireless system has been required to have an exclusive license from the government in order not to interfere with other users of the radio spectrum . today ,with the emergence of new technologies which enable new wireless services , virtually all usable radio frequencies are already licensed to commercial operators and government entities . according to former u.s .federal communications commission ( fcc ) chair william kennard , we are facing with a spectrum drought " . on the other hand ,not every channel in every band is in use all the time ; even for premium frequencies below 3 ghz in dense , revenue - rich urban areas , most bands are quiet most of the time .the fcc in the united states and the ofcom in the united kingdom , as well as regulatory bodies in other countries , have found that most of the precious , licensed radio frequency spectrum resources are inefficiently utilized . in order to increase the efficiency of spectrum utilization ,diverse types of technologies have been deployed .cognitive radio is one of those that leads to the greatest technological gain in wireless capacity . through the detection and utilization of the spectra that are assigned to the licensed users but standing idle at certain times , cognitive radio acts as a key enabler for spectrum sharing .spectrum sensing , aiming at detecting spectrum holes ( i.e. , channels not used by any primary users ) , is the precondition for the implementation of cognitive radio . the cognitive radio ( cr )nodes must constantly sense the spectrum in order to detect the presence of the primary radio ( pr ) nodes and use the spectrum holes without causing harmful interference to the prs .hence , sensing the spectrum in a reliable manner is of vital importance and constitutes a major challenge in cr networks .however , detection is compromised when a user experiences shadowing or fading effects or fails in an unknown way . to get a better understanding of the problem ,consider the following example : a typical digital tv receiver operating in a 6 mhz band must be able to decode a signal level of at least -83 dbm without significant errors .the typical thermal noise in such bands is -106 dbm .hence a cr which is 30 dbm more sensitive has to detect a signal level of -113 dbm , which is below the noise floor . in such cases, one cr user can not distinguish between an unused band and a deep fade . in order to combat such effects, recent studies suggest collaboration among multiple cr nodes for improving spectrum sensing performance .collaborative spectrum sensing ( css ) techniques are introduced to improve the performance of spectrum sensing . by allowing different secondary users to collaborate and share their information ,pr detection probability can be greatly increased .css can be classified into two categories .the first category involves multiple users exchanging information , and the second category uses relay transmission . 
some recent studies on collaborative spectrum sensinginclude cooperative scheme design guided by game theory and random matrix theory , cluster - based cooperative css , and distributed rule - regulated css ; studies concentrating on css performance improvement include introducing spatial diversity techniques to combat the error probability due to fading on the reporting channel between the cr nodes and the central fusion center .there are also studies concerning other interesting aspects of css performance under different constraints .very recently , there are emerging applications of the compressive sensing concept for css . existing literature mostly focuses on the css performance examination when the centralized fusion center receives and combines _ all _ cr reports . in an channel cognitive radio network with cr nodes, the fusion center has to deal with reports and combine them wisely to form a channel sensing result .however , it is known that wireless channels are subject to fading and shadowing . when secondary users experience multi - path fading or happen to be shadowed , the reports transmitted by cr users are subject to transmission loss . as a result , in practice ,no entire report data set is available at the fusion center .besides , due to the fact that each cognitive radio can only sense a small proportion of the spectrum with limited hardware , each cr user gathers only very limited information about the entire spectrum .* contributions : * we seek to release crs from sending , and the central control unit from gathering , an excessively large number of reports , also target at the situations where there are only a few cr nodes in a large network and thus unable to gather enough sensing information for the traditional css .we propose to equip each cognitive radio node with a frequency selective filter , which linearly combines multiple channel information .the linear combinations are sent as reports to the fusion center , where the occupied channels are decoded from the reports by compressive sensing algorithms . as a consequence ,the amount of channel sensing at crs and the number of reports sent from the crs to the fusion center are both significantly reduced .following our previous work on compressive sensing , we propose two approaches to collaborative spectral sensing .the first approach is based on solving a matrix completion problem , which seeks to efficiently reconstruct a matrix ( typically low - rank ) from a relatively small number of revealed entries . in this approach ,the entries of the underlying matrix are linear combinations of channel powers .each cr node takes its local spectrum measurements , but instead of directly recording channel powers , it uses its frequency - selective filters to take _ linear combinations _ of channel powers and reports them to the fusion center .the total linear combinations taken by crs form a matrix at the fusion center . considering transmission loss , we allow the the matrix to be incomplete . 
we show that this matrix is low - rank and has the properties enabling its reconstruction from only a small number of its entries , and therefore , information about the complete spectrum usage can be recovered from a small number of reports from the cr nodes .this approach significantly reduces the amount of sensing and communication workload .the second approach is based on joint sparsity recovery , which is motivated by the observation that the spectrum usage information the cr nodes collect has a common sparsity pattern : each of the few occupied channels is typically observed by multiple crs .we develop a novel algorithm for joint sparsity signal recovery , which is more effective than existing algorithms in the compressive sensing literature since it can accommodate a large dynamic range of channel gains . in both approaches, every cr senses all channels ( by taking random linear projections of the powers of all channels ) , and the crs do not communicate . while they work independently , their measurements are analyzed jointly by the detection algorithms running at the fusion center .therefore , our approaches are very different from the existing collaborative spectrum sensing schemes in which different crs are assigned to different channels .our approaches move from collaborative sensing to collaborative " computation and shift coordination from the sensing phase to the post - sensing phase .our work is among the first that applies matrix completion or joint sparsity recovery to collaborative spectrum sensing in cognitive radio networks .matrix completion and joint sparsity recovery are both being intensively studied in the compressive sensing community .we present them both because it is too early at this time to make a verdict of an eventual winner .the rest of this paper is organized as follows : in section [ sec : model ] , the system model is given .the matrix completion - based algorithm for collaborative sensing is described in section [ sec : algorithm1 ] , and the joint sparsity based algorithm is described in section [ sec : algorithm2 ] .after that , in section [ sec : discussion ] we compare the two proposed approaches , discuss their computational complexity as well as filter design and dynamic update .simulation results are presented in section [ sec : simulation ] , and conclusions are drawn in section [ sec : conclusion ] .we consider a cognitive radio network with cr nodes that locally monitor a subset of channels .a channel is either occupied by a pr or unoccupied , corresponding to the states and , respectively .we assume that the number of occupied channels is much smaller than .the goal is to recover the occupied channels from the cr nodes observations .since each cr node can only sense limited spectrum at a time , it is impossible for limited crs to observe channels simultaneously . to overcome this problem, we propose the scheme depicted in fig .[ f : system_model ] .instead of scanning all channels and sending each channel s status to the fusion center , using its frequency - selective filters , a cr takes a small number of measurements that are linear combinations of multiple channels .the filter coefficients can be designed and implemented easily . 
in order to mix the different channel sensing information ,the filter coefficients are designed to be random numbers .then , these filter outputs are sent to the fusion center .suppose that there are frequency selective filters in each cr node sending out reports regarding the channels .for the non - ideal cases , where we have relatively less measurements , i.e. , the number of reports sent from all crs is less than the total number of channels .the sensing process at each cr can be represented by a filter coefficient matrix .let an _ diagonal _ matrix represent the states of all the channel sources using and as diagonal entries , indicating the unoccupied or occupied states , respectively .there are nonzero entries in .in addition , channel gains between the crs and channels are described in an channel gain matrix given by : where is the primary user s transmitted power , is the distance between the primary transmitter using channel and the cr node , is the propagation loss factor , and is the channel fading gain . for awgn channel , ; for rayleigh channel , follows independent rayleigh distribution ; and for shadowing fading , follows log - normal distribution . without loss of generality , we assume that all prs use unit transmit power ( otherwise , we can compensate by altering the corresponding channel gains ) . the measurement reports sent to the fusion center can be written as a matrix note that due to loss or errors , some of the entries of are possibly missing .the binary numbers on the diagonal of are the states that we shall estimate from the available entries of .it is typically difficult for the fusion center to acquire all entries of due to transmission failure , which means that our observation is a subset \times[m] ] , ] of s entries .we assume that the received entries are uniformly distributed with high probability .hence , we work with a model in which each entry shows up in identically and independently with probability .given , the partial observation of is defined as a matrix given by we shall first recover the unobserved elements of from .then , we reconstruct from the given and using the fact that all but rows of are zero .these nonzero rows correspond to the occupied channels .since and are much smaller than , our approach requires a much less amount of sensing and transmission , compared to traditional spectrum sensing in which each channel is monitored separatively . in previous research on matrix completion , it was proved that under some suitable conditions , a low - rank matrix can be recovered from a random , yet small subset of its entries by nuclear norm minimization : where denotes the nuclear norm of matrix and is a parameter discussed in section [ sec : stop ] below . for notational simplicity ,we introduce the linear operator that selects the components out of a matrix and form them into a vector such that .the adjoint of is denoted by .recent algorithms developed for ( [ mtx_cmp ] ) include , but not limited to , the singular value thresholding ( svt ) algorithm and the fixed - point continuation iterative algorithm ( fpca ) for fast completion of large - scale matrices ( e.g. , more than ) , a special trimming step introduced by keshavan et al . in . for our problem , we adopt fpca , which appears to run very well for our small dimensional tests . 
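to make the measurement model above concrete, here is a toy construction of the report matrix and its partially observed version. the dimensions, the rayleigh gains, and the reading "reports = filters × (states × gains)" are illustrative assumptions, since the exact matrix products are garbled in this excerpt; the point is only that the full report matrix has rank at most the number of occupied channels, which is what makes matrix completion applicable.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_crs, n_filters = 35, 20, 14    # illustrative sizes only
n_occupied = 3                                # sparse PR activity
sampling_rate = 0.5                           # fraction of report entries received

# channel states: 0/1 vector with a few occupied channels (diagonal of the state matrix)
s = np.zeros(n_channels)
s[rng.choice(n_channels, n_occupied, replace=False)] = 1.0

# channel gains from each channel (PR) to each CR, e.g. Rayleigh fading
G = rng.rayleigh(scale=1.0, size=(n_channels, n_crs))

# per-channel powers actually seen at the CRs; only occupied rows are nonzero
X = s[:, None] * G

# frequency-selective filter coefficients (random linear combinations)
F = rng.standard_normal((n_filters, n_channels))

# full report matrix (rank <= n_occupied) and the partially observed version
R = F @ X
mask = rng.random(R.shape) < sampling_rate    # entries that survive transmission
R_obs = np.where(mask, R, 0.0)
```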
in the following subsections ,we describe this algorithm and the steps we take for nuclear norm minimization .also , we study how to use the approximate singular value decomposition ( svd)-based iterative algorithm introduced in for fast execution .we further discuss the stopping criteria for iterations to acquire optimal recovery .finally we show how to obtain from the estimation of .fpca is based on the following fixed point iteration : where is step size and is the matrix shrinkage operator defined as follows : [ def : shrinkage ] * matrix shrinkage operator * : assume and its svd is given by , where , , and . given , is defined as with the vector defined as : simply speaking , reduces every singular values ( which is nonnegative ) of by ; if one is smaller than , it is reduced to zero . in addition, is the solution of where is the frobenius norm .to understand , observe that the first step of is a gradient - descent applied to the second term in and thus reduces its value . because the previous gradient - descent generally increases the nuclear norm ,the second step of involves solving to reduce the nuclear norm of .iterations based on converge when the step sizes are properly chosen ( e.g. , less than 2 , or select by line search ) so that the first step of is not `` expansive '' ( the other step is always non - expansive ) .as stated in , the second step of requires computing the svd decomposition of , which is the main computational cost of .however , if one can predetermine the rank of the matrix , or have the knowledge of the approximate range of its rank , a full svd can be simplified to computing only a rank- approximation to .combined with the above fixed point iteration , the resulting algorithm is called fixed - point continuation algorithm with approximate svd ( fpca ) .specifically , the approximate svd is computed by a fast monte carlo algorithm developed by drineas et al. . for a given matrix and parameters , this algorithm returns an approximations to the largest singular values corresponding left singular vectors of the matrix in a linear time .we tune the parameters in fpca for a better overall performance .continuation is adopted by fpca , which solves a sequence of instances of , easy to difficult , corresponding to a sequence of large to small values of .the final is the given one but solving the easier instances of gives intermediate solutions that warm start the more difficult ones so that the entire solution time is reduced .solving each instance of requires proper stopping . because our ultimate goal is to recover 0/1 values on the diagonal of , accurate solutions of are not required .therefore , we use the criterion : where is a small positive scalar .experiments shows that is good enough for obtaining optimal .since has more columns than rows , directly solving in from given is under - determined .however , each row of corresponds to the occupancy status of channel . ignoring noise in for now, contains a positive entry if and only if channel is used .hence , most rows of are completely zero , so every column of is sparse and all s are jointly sparse. such sparsity allows us to reconstruct from and identify the occupied channels , which are the nonzero rows of .since the channel fading decays fast , the entries of have a large dynamic range , which none of the existing algorithms can deal with well enough . 
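before moving on to the joint-sparsity approach, the fixed-point iteration described above can be sketched in a few lines. this is a bare-bones illustration, not fpca itself: it uses full svds instead of the approximate svd, omits continuation, and the parameter values are placeholders.

```python
import numpy as np

def shrink(Y, nu):
    """Matrix shrinkage operator: subtract nu from every singular value,
    clipping at zero (the proximal operator of nu times the nuclear norm)."""
    U, sig, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(sig - nu, 0.0)) @ Vt

def complete_matrix(R_obs, mask, mu=1e-2, tau=1.0, n_iter=500, tol=1e-4):
    """Bare-bones fixed-point (proximal gradient) iteration for nuclear-norm
    regularized matrix completion, in the spirit of the iteration described
    above. `mask` is a 0/1 array marking the observed entries of R_obs."""
    X = np.zeros_like(R_obs)
    for _ in range(n_iter):
        # gradient step on the data-fit term over the observed entries
        Y = X - tau * mask * (X - R_obs)
        X_new = shrink(Y, tau * mu)
        # stopping rule of the form ||X_{k+1} - X_k|| / max(1, ||X_k||) < tol
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1.0):
            return X_new
        X = X_new
    return X

# usage with the toy model sketched earlier (hypothetical names R_obs, mask):
# R_hat = complete_matrix(R_obs, mask.astype(float))
```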
hence , we develop a novel joint - sparsity algorithm briefly described as follows .the algorithm is much faster than matrix completion and typically needs 1 - 5 iterations . at each iteration, every column of is independently reconstructed using the model , where is the column of .for noisy , we instead use the constraint .the same set of weights is shared by all at each iteration . is set to 1 uniformly at iteration 1 .after channel is detected in an iteration , is set to 0 . through ,joint sparsity information is passed to all .channel detection is performed on the reconstructed s at each iteration .it is possible that some reconstructed is wrong , so we let larger and sparser s have more say . if there is a relatively large in a sparse , then is detected .we have found this algorithm to be very reliable .the detection accuracy is determined by the accuracy of provided .in this section , we describe a new , highly effective algorithm for recovering and thus by thresholding . the algorithm allows but does not require the same for all crs , i.e., each cr can use a different sensing matrix .the design of is discussed in section [ sec : discussion_design ] below . in , each column ( denoted by ) corresponds to the channel occupancy status received by cr , and each row corresponds to the occupancy status of channel . _ignoring noise _ for now , a row has a positive value ( i.e. , ) if and only if channel is used .since there are only a small number of used channels , is sparse in terms of the number of rows containing nonzero . in each nonzero row , there is typically more than one nonzero entry ; in other words , if , other entries in the same row are likely nonzero . therefore , is _jointly sparse_. in the case that the true contains noise , it is approximately , rather than exactly , jointly sparse .joint sparsity is utilized in our algorithm to recover .while there are existing algorithms for recovering jointly sparse signals in the literature ( e.g. , in ) , our algorithm is very different and more effective for our underlying problem .none of the existing algorithms works well to recover because the entries of have a very large dynamic range because , in any channel fading model , channel gains decay rapidly with distance between crs and prs .most existing algorithms are based on minimizing for and .if , it is the same as minimizing the 1-norm of each column independently , so joint sparsity is not used for recovery . if or , joint sparsity is considered , but it penalizes a large dynamic range since the large values in a nonzero row of contribute superlinearly , more than the small values in that row , to the minimizing objective . in short , close 1 loses joint sparsity and bigger than 1 penalizes large dynamic ranges .our new algorithm not only utilizes joint sparsity but also takes advantages of the large dynamic range of .the large dynamic range has its pros and cons in cs recovery .it makes it easy to recover the locations of large entries , which can be achieved even without recovering the locations of smaller ones . on the other hand, it makes difficult to recover both the locations and values of the smaller entries .this difficulty has been studied in our previous work , where we proposed a fast and accurate algorithm for recovering 1d signals by solving several ( about 5 - 10 ) subproblems in the form of where the index set is formed iteratively as excluding the identified locations of large entries of . 
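a subproblem of this truncated/weighted l1 form can be posed as a small linear program. the sketch below uses scipy's lp solver rather than the solvers named in the paper, and the names F, r, weights are illustrative; channels that have already been detected would be given weight zero, which is how the shared index set enters the per-cr subproblems.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_recover(F, r, weights):
    """Recover one CR's channel-power vector x from its reports r = F x by
    weighted l1 minimization, written as an LP:
        minimize  sum_i weights[i] * t_i
        s.t.      -t <= x <= t,   F x = r.
    Setting weights[i] = 0 for already-detected channels reproduces the
    truncated-l1 subproblem (minimize only over the remaining index set)."""
    p, n = F.shape
    c = np.concatenate([np.zeros(n), weights])          # objective acts on t only
    A_eq = np.hstack([F, np.zeros((p, n))])             # F x = r
    A_ub = np.block([[ np.eye(n), -np.eye(n)],          #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])         # -x - t <= 0
    b_ub = np.zeros(2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=r,
                  bounds=(None, None), method="highs")
    return res.x[:n]
```

the noisy variant mentioned in the text, with an inequality constraint on the residual instead of F x = r, is a second-order cone program rather than an lp and is not covered by this sketch.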
with techniques such as early detections and warm starts ,it achieves both the state of the art speed and least requirement on the number of measurements .we integrate the idea of this algorithm with joint sparsity into the new algorithm below . for every cr with enough measurements ( in presence of measurement noise , is replaced by ) report , and by thresholding the framework of the proposed algorithm is shown in table [ t : algorithm ] . at each iteration , every channel is first subject to independent recovery . unlike minimizing , which ties all crs together, independent recovery allows large entries of to be quickly recovered .joint sparsity information is passed among the crs through a shared index set , which is updated iteratively to exclude the used channels that are already discovered .below , we describe each step of the above algorithm in more details . in the * independence recovery * step , for every qualified cr , a constrained problem in the form of with constraints in the noiseless case , or in the noisy case ,is considered , where is an estimated noise level . asproblem dimensions are small in our application , solvers are easily chosen : matlab s ` linprog ' for noiseless cases and mosek for noisy cases .both of these solvers run in polynomial times .this step dominates the total running time of algorithm [ t : algorithm ] , but up to optimization problems can be solved in parallel .parallelization is simple for the joint - sparsity approach . at each outer iteration ,all lps are solved independently , and they have small scales relative to today s lp solvers , like gurobi and its matlab interface gurobi mex , where gurobi automatically detects and uses all cpu and cores for solving lps .crs without enough measurements ( e.g. , most of their reports are missing due to transmission losses or errors ) are not qualified for independent recovery because cs recovery is known unstable in such a case .specifically , we require the number of the available measurements from each qualified cr to exceed twice as many as used channels or .when measurements are ample , the first iteration will yield exact or nearly exact s . otherwise , insufficient measurements can cause a completely wrong that misleads channel detection ; neither the locations nor the values of the nonzero entries are correct . the algorithm , therefore , filters trusted s that must be either sparse or compressible .large entries in such s likely indicate correct locations .a theoretical explanation of this argument based on stability analysis for is given in .used channels are detected among the set of trusted s . to further reduce the risk of false detections , we compute a percentage for every channel in a way that those channels corresponding to larger values in and whose valuesare located in relatively sparser s are given higher percentages . here, relative sparsity is defined proportionally to the number of measurements ; for fixed number of non - zeros or degree of compressibility , the more the measurements , the higher the relative sparsity .hence , corresponding to more reported cr also tends to have a higher percentage . 
in short , larger and sparse solutionshave more say .the channels receiving higher percentages are detected as used channels .the index set is set as excluding the used channels that are already detected .obviously , needs to change from one iteration to the next ; otherwise , two iterations will result in an identical and thus the stagnation of algorithm .therefore , if the last iteration posts no change in the set of used channels yet the stopping criterion ( see next paragraph ) is not met , the channels corresponding to the larger are also excluded from , and such exclusion becomes more aggressive as iteration number increases .this is not an ad hoc but a rigorous treatment .it is shown in that larger entries in an inexact cs recovery tend to be the true nonzero entries , and furthermore , as long as the new excludes more true than false nonzero entries by a certain fraction , will yield a better solution in terms of a certain norm . in short ,used channels leave , and in case of no leaves , channels with larger joint values leave .finally , the iteration is terminated when the tail of is small enough .one way to define the tail size of is the fraction , i.e. , the thought unused divided by the thought used .suppose that precisely contains the unused channels and measurements are noiseless , then every recovered in channel detection is exact , so the fraction is zero ; with noise , the fraction depends on noise magnitude and is small as long as noise is small .if includes any used channel , the numerator will be large whether or not s are ( nearly ) exact . in a sense , the tail size measures how well and match the measurements and expected sparseness .unless the true number of used channels is known , the tail size appears to be an effective stopping indicator .in the worst case , algorithm [ t : algorithm ] reduces the cardinality of by 1 per iteration , corresponding to recovering at least 1 additional used channel . therefore , the number of iterations can not exceed the number of total channels .however , the first couple of iterations typically recover most of the used channels . at each iteration, the independence recovery step solves up to optimization problems , which can be independently solved in parallel , so the complexity equals a linear program ( or second - order cone program ) whose size is no more than .the worst case complexity is but it is almost never observed in sparse optimization thanks to solution sparsity .the two other steps are based on basic arithmetic and logical operations , and they run in . in practice ,algorithm [ t : algorithm ] is implemented and run on a workstation at the fusion center .computational complexity will not be a bottleneck of the system . as to the matrix completion algorithm, according to , fpca can recover matrices of rank 50 with a relative error of in about 3 minutes by sampling only 20 percent of the elements .the matrix completion ( section [ sec : algorithm1 ] ) and joint sparsity recovery ( section [ sec : algorithm2 ] ) approaches both take linear channel measurements as input and both return the estimates of used channels . on the other hand, the joint sparsity approach takes the full advantage of , so it is expected to work with smaller numbers of measurements . 
in addition , even though only one matrix completion problem needs to be solved in the matrix completion approach , it still takes much longer than running the entire joint sparsity recovery , and it is not easy to parallelize any of the existing matrix completion algorithms .however , in the small - scale networks , in cases where too much sensing information is lost during transmission or there are too many active prs in the network , which increase the signal sparsity level , joint sparsity recovery algorithm with our current settings will experience degradation in performance . we , however , can not verdict an eventual winner between the two approaches as they are both being studied and improved in the literature .for example , if a much faster matrix completion algorithm is developed which takes advantage of , the disadvantages of the approach may no longer exist .the proposed method senses the channels , not by measuring the responses of individual channels one by one , but rather measures a few incoherent linear combinations of all channels responses through onboard frequency - selective filter set .the filter coefficients which perform as the sensing matrix should have entries independently sampled from a sub - gaussian distribution , since this is known to be best for compressive sensing in terms of the number of measurements ( given in order of magnitude ) required for exact recovery . in other words ,up to a constant , which is independent of problem dimensions , no other type of matrix is yet known to perform consistently better . however , other types of matrices ( such as partial fourier / dct matrices and other random circulant matrices ) have been either theoretically and/or numerically demonstrated to work as effectively in many cases .these latter sensing matrices are often easier to realize physically or electrically .for example , applying a random circulant matrix performs sub - sampled convolution with a random vector . frequency - selective surfaces ( fsss ) can be used to realize frequency filtering .this can be done by designing a planar periodic structure with a unit element size around half wavelength of the frequency of interests .both the metallic and dielectric materials can be used . to deal with the bandwidth , unit elements in different shapes will be tested .channel occupancy evolves over time as prs start and stop using their channels .channel gains can also change when the prs move .however , the cs research has so far focused on static signal sensing except the very recent path following algorithms in . in the future work, we can investigate recovery methods for a dynamic wireless environment where based on existing channel occupancy information , an insignificant change of channel states can be quickly and reliably discovered . given existing channel occupancy , each new report , which is an entry of ,is compared with .if a significant number of such comparisons show differences , then there is a change in the true . since , either or , or both , have changed .a change in means new channel occupation or release . if is unchanged , then those channel gains in corresponding to occupied channels have changed .it is easy to deal with the latter case ( i.e. 
, changed , but did nt ) and update the gains of occupied channels because it boils down to solving a small linear system .let and denote the sub - matrices of and , respectively , formed by their columns and rows corresponding to the occupied channels .then , the new gains are given in the least - squares solution of , where shall include new reports arrived after the previous recovery / update but may still have missing entries . this system is easy to solve since the number of occupied channels is small . in a similar way it is easyto discover released channels as long as there is no introduction of new occupied channels .the release of channel means row of turns into 0 , or small numbers .therefore , one can solve the system and find the released channels , which correspond to the rows of with all zero ( or small ) entries .when the system is inconsistent , it means that the received reports can not be explained by the previously occupied channels , so there must be new channel occupation . discovering new channel occupation is more difficult since it is to find changes in the previously unoccupied ones , which are much more than the occupied channels .however , it is computationally much easier than starting from scratch .let and denote the previous and current channel information , respectively .arguably , is highly sparse in the joint sense because only its rows corresponding to newly occupied or released channels can have large nonzero entries .hence , can be quickly recovered by performing joint sparsity recovery on over the constraints ( or a relax version in the noisy case ) , a task that can be done by the algorithms for stationary recovery .the probability of detection ( pod ) and false alarm rate ( far ) are the two most important indices associated with spectrum sensing .we also consider the miss detection rate ( mdr ) of the proposed system .the higher the pod , the less interference will the crs bring to the prs , while from the crs perspective , lower far will increase their chance of transmission .there is a tradeoff between pod and far .while designing the algorithms , we try to balance the cr nodes capability of transmission and their interferences to the pr nodes .performance is evaluated in terms of pod , far and mdr defined as follows : where _ no . false _ is the number of false alarms , _ no . miss _ is the number of miss detections , _ no .hit _ is the number of successful detections of primary users , and _ no .correct _ is the number of correct reports of no appearance of pr . we define sampling rate as where is the amount of total sensing workload in traditional spectrum sensing . + according to fcc and defense advance research projects agency ( darpa )reports data , we chose to test the proposed matrix completion recovery algorithm for spectrum utilization efficiency over a range from 3% to 12% , which is large enough in practice .specifically , the number of active primary users is 1 to 4 on a given set of 35 channels with 20 cr nodes .[ f : far_mdr_sr ] shows the false alarm and miss detection rates at different sampling rates for different numbers of pr nodes . among all cases ,the highest miss detection rate is no more than 5% , and this is from only 20% samples which are supposed to be gathered from the cr nodes regarding all the channels .when the sampling rate is increased to 50% and even when the channel occupancy is relatively high , i.e. , 12% of the channels are occupied by the prs , the miss detection rates can be as low as no more than 2% . 
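for reference, the detection metrics quoted in these simulation results can be computed as follows. the exact normalizations in the paper's definitions are not visible in this excerpt, so the conventional choices below (pod and mdr normalized over occupied channels, far over idle ones, and sampling rate relative to one report per channel per cr) are assumptions.

```python
def sensing_metrics(n_hit, n_miss, n_false, n_correct):
    """Probability of detection, false alarm rate, and miss detection rate.

    n_hit:     successful detections of primary users
    n_miss:    missed detections
    n_false:   false alarms
    n_correct: correct reports of no primary-user activity
    """
    pod = n_hit / (n_hit + n_miss)
    far = n_false / (n_false + n_correct)
    mdr = n_miss / (n_hit + n_miss)
    return pod, far, mdr

def sampling_rate(n_reports, n_channels, n_crs):
    """Reports actually used, divided by the workload of traditional
    per-channel sensing (assumed here to be one report per channel per CR)."""
    return n_reports / (n_channels * n_crs)
```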
from our simulation results , with a moderate channel occupancy at 9% ,the false alarm rates are around 3% to 5% .[ f : pod_sr ] shows the probability of detection at different sampling rates .when the spectrum is lightly occupied by the licensed user at 3% channels being occupied , only 20% samples offer a pod close to 100% , and when there is a slightly raise in sampling rate , pod can reach 100% . in the worst case of 12% spectrum occupancy , 20% sampling rate still can offer a pod of higher than 95% , and as the sampling rate reaches 50% , pod can reach 98% .joint sparsity recovery is designed for large scale application , and simulations carried out for a larger dimensional applications with the following settings : we consider a -node cognitive radio network within a meter square area centered at the fusion center .the cr nodes are uniformly randomly located .these cognitive radio nodes collaboratively sense the existence of primary users within a meter square area on channels , which are centered also at the fusion center .we chose to test the proposed algorithm for the number of active pr nodes ranging from to on the given set of 500 channels .since the fading environments of the cognitive radio networks vary , we evaluate the algorithm performance under three typical channel fading models : awgn channel , rayleigh fading channel , and lognormal shadowing channel .+ + we first evaluate the pod , far , and mdr performance of the proposed joint sparsity recovery performance in the noiseless environment .[ f : nn_t0_5 ] , fig .[ f : nn_t1_5 ] , and fig .[ f : nn_t2_5 ] show the pod , far and mdr performance at different sampling rate , for awgn channel , rayleigh fading channel , and lognormal shadowing channel , respectively , when small number of cr nodes sense the spectrum collaboratively . fig .[ f : nn_t0_10 ] , fig .[ f : nn_t1_10 ] , and fig .[ f : nn_t2_10 ] show the pod , far and mdr performance at different sampling rate , for the aforementioned three types of channel models , when there are more cr nodes involved in the collaborative sensing of the spectrum .we observe that , log - normal shadowing channel model shows the best pod , far , and mdr performance no matter how many cr nodes are involved in the spectrum sensing . while the agwn channel model shows the worst pod , far , and mdr performance .with respect to pod , the performance gap between these two models is at most 10% , which happens when the sampling rate is extremely low . for the rayleigh fading channel model , when the number of samples is of the total number of channels , for all tested cases we achieve pod .if there are less active pr nodes in the network , smaller number of samples are required for exact detection .in essence , the proposed ccs system is robust to severe or poorly modeled fading environments .cooperation among the cr nodes and robust recovery algorithm allow us to achieve this robustness without imposing stringent requirements on individual radios .+ we then evaluate the pod , far , and mdr performance of the proposed joint sparsity recovery performance in noisy environments .for all the simulations considering noise , we adopt the rayleigh fading channel model .[ f : n_performance ] and fig .[ f : n_per_snr ] show the corresponding results .we observe that noise does degrade the performance .however , as shown in fig .[ f : n_performance ] , when the number of active prs is small enough ( e.g. , no . 
of pr = 1 ) , even with signal to noise ratio as low as 15 db , we still can achieve pod with a sampling rate of merely . then with an increase in the signal to noise ratio, lower sampling rate enables more pr nodes to be detected exactly .[ f : n_per_snr ] shows the pod , far and mdr performance vs. sampling rate at different noise level , each curve for a specific noise level is relatively flat ( i.e. , performance varies a little as sampling rate changes ) .this shows that the noise level has greater impact on the spectrum sensing performance rather than the sampling rate . at low noise level ,e.g. , snr = 45 db , sampling rate enables pod for 4 pr nodes .as snr reduces to 15 db , no more than pod will be achieved even when the number of samples equals to the number of channels in the network . + for comparison , we applied joint sparsity recovery algorithm on a small - scale network with the same settings as we have used to test the matrix completion recovery . instead of using a 500-channel network, we use a network with only 35 channels .simulation results show that joint sparsity recovery algorithm performs better than the matrix completion algorithm in the following aspects : 1 .faster computation due to lower computational complexity ; 2 . higher pod for the spectrum utilization rate between 3% and 12% in the noise free simulations ; to conclude , matrix completion algorithm is good for small - scale networks , with relatively high spectrum utilization , while joint sparsity recovery algorithm has the advantage of low computational complexity which enables fast computation in large - scale networks .+ + +in order to reduce the amount of sensing and transmission overhead of cognitive radio ( cr ) nodes , we have applied compressive sensing for collaborative spectrum detection in cognitive radio networks . we propose to equip each cr node with a frequency - selective filter , which linearly combines multiple channel information , and let it send a small number of such linear combinations to the fusion center , where the channel occupancy information is then decoded .consequently , the amount of channel sensing at the crs and the number of reports sent from the crs to the fusion center reduce significantly .two novel decoding approaches have been proposed one based on matrix completion and the other based on joint sparsity recovery .the novel matrix completion approach recovers the complete cr to center reports from a small number of valid reports and then reconstructs the channel occupancy information .the joint sparsity approach , on the other hand , skips recovering the reports and directly reconstructs channel occupancy information by exploiting the fact that each occupied channel is observable by multiple cr nodes .our algorithm enables faster recovery for large - scale cognitive radio networks .the primary user detection performance of the proposed approaches has been evaluated by simulations .the results of random tests show that , in noiseless cases , the number of samples required are no more than 50% of the number of channels in the network to guarantee exact primary user detection for both approaches ; while in noisy environments , at low channel occupancy rate , we can still have high probability of detection .the work of w. yin was supported in part by nsf career award dms-07 - 48839 , onr grant n00014 - 08 - 1 - 1101 , and an alfred p. sloan research fellowship .the work of h. li was supported in part by nsf grants 0831451 and 0901425 .the work of z. 
han was supported in part by nsf cns-0910461 , cns-0901425 , nsf career award cns-0953377 , and air force office of scientific research .the work of e. hossain was supported by the nserc , canada , strategic project grant stpgp 380888 . c. h. lee and w. wolf , energy efficient techniques for cooperative spectrum sensing in cognitive radios , " in _ proc .ieee consumer communications and networking conference ( ccnc08 ) _ , las vegas , nevada , january 2008 .a. ghasemi and e. sousa , collaborative spectrum sensing for opportunistic access in fading environments , " in _ proc .ieee international symposium on new frontiers in dynamic spectrum access networks ( dyspan05 ) _ , baltimore , md , november 2005 .w. saad , z. han , m. debbah , a. hjrungnes , and t. basar , coalitional games for distributed collaborative spectrum sensing in cognitive radio networks , " in _ proc .ieee conference on computer communications ( infocom09 ) _ , rio de janeiro , brazil , april 2009 . l. s. cardoso , m. debbah , p. bianchi , and j. najim , cooperative spectrum sensing using random matrix theory , " in _ proc . international symposium on wireless pervasive computing ( iswpc08 ) _ , santorini , greece , may 2008 . c. sun , w. zhang , and k. b. letaief , cluster - based cooperative spectrum sensing in cognitive radio systems , " in _ proc .ieee international conference on communications ( icc07 ) _ , glasgow , scottland , june 2007 .l. cao and h. zheng , distributed rule - regulated spectrum sharing , " _ ieee journal on selected areas in communications : special issue on cognitive radio : theoryand applications _ , vol .1 , pp . 130145 , january 2008 . c. sun , w. zhang , and k. b. letaief , cooperative spectrum sensing for cognitive radios under bandwidth constraint , " in _ proc .ieee wireless communications and networking conference ( wcnc07 ) _ , hong kong , march 2007 .j. meng , j. ahmadi - shokouh , h. li , z. han , s. noghanian , and e. hossain , sampling rate reduction for 60 ghz uwb communication using compressive sensing , " in _ proc .asilomar conference on signals , systems computers _ ,monterey , ca , november 2009 .j. meng , h. li , and z. han , sparse event detection in wireless sensor networks using compressive sensing , " in _ proc .43rd annual conference on information sciences and systems ( ciss ) _ , baltimore , md , march 2009 .e. j. cands and b. recht , exact low - rank matrix completion via convex optimization , " in _ proc .communication , control , and computing , 46th annual allerton conference _ , monticello , il , september 2008 .m. f. duarte , s. sarvotham , m. b. wakin , d. baron , and r. g. baraniuk , joint sparsity models for distributed compressed sensing , " in _ proc .workshop on signal processing with adaptive sparse structured representations _ ,rennes , france , november 2005 .j. tropp , a. c. gilbert , and m. j. strauss , simulataneous sparse approximation via greedy pursuit , " in _ proc .ieee 2005 international conference on acoustics , speech , and signal processing ( icassp ) _ , philadelphia , pa , march 2005 .j. meng , w. yin , h. li , e. hossain , and z. han , collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks , " in _ proc .ieee 2010 international conference on acoustics , speech , and signal processing ( icassp ) _ , march 2010 .e. j. cands , j. romberg , and t. tao , robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , " _ ieee trans . 
on information theory _ , 52(2 ) , pp . 489 - 509 , february 2006 . w. yin , s. p. morgan , j. yang , and y. zhang , fast sensing and signal reconstruction : practical compressive sensing with toeplitz and circulant matrices , " _ rice university caam technical report tr10 - 01 _ , _ submitted to vcip 2010 _ , 2010 .
spectrum sensing , which aims at detecting spectrum holes , is the precondition for the implementation of cognitive radio ( cr ) . collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability of checking complete spectrum usage . due to hardware limitations , each cognitive radio node can only sense a relatively narrow band of radio spectrum . consequently , the available channel sensing information is far from being sufficient for precisely recognizing the wide range of unoccupied channels . aiming at breaking this bottleneck , we propose to apply matrix completion and joint sparsity recovery to reduce sensing and transmitting requirements and improve sensing results . specifically , equipped with a frequency selective filter , each cognitive radio node senses linear combinations of multiple channel information and reports them to the fusion center , where occupied channels are then decoded from the reports by using novel matrix completion and joint sparsity recovery algorithms . as a result , the number of reports sent from the crs to the fusion center is significantly reduced . we propose two decoding approaches , one based on matrix completion and the other based on joint sparsity recovery , both of which allow exact recovery from incomplete reports . the numerical results validate the effectiveness and robustness of our approaches . in particular , in small - scale networks , the matrix completion approach achieves exact channel detection with a number of samples no more than of the number of channels in the network , while joint sparsity recovery achieves similar performance in large - scale networks . _ keywords _ : collaborative spectrum sensing , matrix completion , compressive sensing , joint sparsity recovery .
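as an illustrative aid (not the authors' implementation), the sketch below mimics the joint sparsity decoding described above on a toy network: the columns of the channel-power matrix share a common sparse support (the occupied channels), each cr node's observations are compressed by a random frequency-selective filter, and the fusion center recovers the support with a simultaneous orthogonal matching pursuit step. all sizes, the gaussian filter model and the greedy recovery rule are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_chan, n_nodes, n_meas, n_occupied = 35, 10, 12, 3   # toy sizes (assumed)

# jointly sparse channel-power matrix: columns (one per CR node) share a support
support_true = rng.choice(n_chan, size=n_occupied, replace=False)
X = np.zeros((n_chan, n_nodes))
X[support_true, :] = rng.uniform(0.5, 1.5, size=(n_occupied, n_nodes))

# each node reports a few linear combinations of its channel powers
A = rng.standard_normal((n_meas, n_chan)) / np.sqrt(n_meas)   # assumed Gaussian filter
Y = A @ X + 0.01 * rng.standard_normal((n_meas, n_nodes))     # noisy reports at the fusion center

def somp(Y, A, k):
    """simultaneous OMP: pick the channel most correlated with the residual across all nodes."""
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)       # aggregate correlation per channel
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, Y, rcond=None)  # refit on the current support
        R = Y - As @ coef
    return sorted(support)

est = somp(Y, A, n_occupied)
print("true occupied channels:", sorted(support_true.tolist()))
print("estimated channels:    ", est)
```

a convex mixed-norm or nuclear-norm solver could play the same role for the matrix-completion variant at higher computational cost, which is consistent with the comparison drawn above.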
the primary concern of this paper is the erasure channel , which is a common digital communication channel model that plays a fundamental role in coding and information theory . throughout the paper, we assume that time is discrete and indexed by the integers . at time , the erasure channel of interest can be described by the following equation : where the channel input , supported on an irreducible finite - type constraint , is a stationary process taking values from the input alphabet , and the erasure process , independent of , is a binary stationary and ergodic process with _ erasure rate _ , and is the channel output process over the output alphabet .the word `` erasure '' as in the name of our channel naturally arises if a `` '' is interpreted as an erasure at the receiving end of the channel ; so , at time , the channel output is nothing but the channel input if , but an erasure if . let denote the set of all the finite length words over .let be a finite subset of , and let be the _ finite - type constraint _ with respect to , which is a subset of consisting of all the finite length words over , each of which does not contain any element in as a contiguous subsequence ( or , roughly , elements in are `` forbidden '' in ) .the most well known example is the -run - length - limited ( rll ) constraint over the alphabet , which forbids any sequence with fewer than or more than consecutive s in between two successive s ; in particular , a prominent example is the -rll constraint , a widely used constraint in magnetic recording and data storage ; see . for the -rll constraint with ,a forbidden set is when , one can choose to be in particular when , can be chosen to be .the _ length _ of is defined to be that of the longest words in .generally speaking , there may be many such s with different lengths that give rise to the same constraint ; the length of the shortest such s minus gives the _ topological order _ of .for example , the topological order of the -rll constraint , whose shortest proves to be , is .a finite - type constraint is said to be _ irreducible _if for any , there is a such that . as mentioned before , the input process of our channel ( [ mec ] )is assumed to be _ supported _ on an irreducible finite - type constraint , namely , , where the capacity of the channel ( [ mec ] ) , denoted by , can be computed by where the supremum is taken over all stationary processes supported on . here , we note that input - constraints are widely used in various real - life applications such as magnetic and optical recording and communications over band - limited channels with inter - symbol interference .particularly , we will pay special attention in this paper to a binary erasure channel with erasure rate ( bec( ) ) with the input supported on the -rll constraint , denoted by throughout the paper .when there is no constraint imposed on the input process , that is , , it is well known that ; see theorem [ fbnot ] . when , that is , when the channel is perfect with no erasures , proves to be the _ noiseless capacity _ of the constraint , which can be achieved by a unique -th order markov chain with . on the other hand , other than these two above - mentioned `` degenerated '' cases , `` explicit '' analytic formulas of capacity for `` non - degenerated '' cases have remained evasive , and the problem of analytically characterizing the noisy constrained capacity is widely believed to be intractable . 
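the noiseless capacity mentioned above is concrete enough to compute: for a (d,k)-rll constraint it is the logarithm of the perron eigenvalue of the adjacency matrix of the usual state graph, whose states record the number of 0s emitted since the last 1. the following sketch is an illustrative aid, not part of the paper; the state encoding is a standard choice and the examples are arbitrary.

```python
import numpy as np

def rll_noiseless_capacity(d, k=None, base=2.0):
    """log (given base) of the Perron eigenvalue of the (d,k)-RLL state graph.

    states 0..K record the number of 0s since the last 1;
    k=None means k = infinity (state d then absorbs further 0s)."""
    K = d if k is None else k
    A = np.zeros((K + 1, K + 1))
    for s in range(K + 1):
        if k is None and s == K:
            A[s, s] += 1.0            # may keep emitting 0s
        elif s < K:
            A[s, s + 1] += 1.0        # emit a 0
        if s >= d:
            A[s, 0] += 1.0            # emit a 1 (allowed only after >= d zeros)
    lam = np.abs(np.linalg.eigvals(A)).max()
    return np.log(lam) / np.log(base)

print(rll_noiseless_capacity(1))        # (1,inf)-RLL: log2 of the golden ratio, ~ 0.6942 bits
print(rll_noiseless_capacity(2, 7))     # a finite (d,k) example
```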
the problem of numerically computing the capacity seems to be as challenging : the computation of the capacity of a general channel with memory or input constraints is notoriously difficult and has been open for decades ; and the fact that our erasure channel is only a special class of such ones does not appear to make the problem easier . here , we note that for a discrete memoryless channel , shannon gave a closed - form formula of the capacity in his celebrated paper , and blahut and arimoto , independently proposed an algorithm which can efficiently compute the capacity and the capacity - achieving distribution simultaneously .however , unlike the discrete memoryless channels , the capacity of a channel with memory or input constraints in general admits no single - letter characterization and very little is known about the efficient computation of the channel capacity . to date , most known results in this regard have been in the forms of numerically computed bounds : for instance , numerically computed lower bounds by arnold and loeliger , a. kavcic , pfister , soriaga and siegel , vontobel and arnold .one of the most effective strategies to compute the capacity of channels with memory or input constraints is the so - called _ markov approximation _ scheme .the idea is that instead of maximizing the mutual information rate over all stationary processes , one can maximize the mutual information rate over all -th order markov processes to obtain the -th order markov capacity . under suitable assumptions ( see , e.g. , ) , when tends to infinity , the corresponding sequence of markov capacities will tend to the channel capacity . for our erasure channel , the _-th order markov capacity _ is defined as where the supremum is taken over all -th order markov chains supported on .the main contributions of this work are the characterization of the asymptotics of the above - mentioned input - constrained erasure channel capacity . of great relevance to this workare results by han and marcus , jacquet and szpankowski , which have characterized asymptotics of the capacity of the a binary symmetric channel with crossover probability ( bsc( ) ) with the input supported on the -rll constraint .the approach in the above - mentioned work is to obtain the asymptotics of the mutual information rate first , and then apply some bounding argument to obtain that of the capacity .the approach in this work roughly follows the same strategy , however , as elaborated below , our approach differs from theirs to a great extent in terms of technical implementations . throughout the paper, we use the logarithm with base in the proofs and we use the logarithms with base in the numerical computations of the channel capacity . 
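for the discrete memoryless case mentioned above, the blahut-arimoto iteration is short enough to state in full. the sketch below is an illustrative aid rather than anything from the paper: it computes the unconstrained capacity of a memoryless erasure channel and of a binary symmetric channel with the standard multiplicative update, using the standard lower and upper capacity bounds as a stopping rule; the example parameters and tolerance are arbitrary.

```python
import numpy as np

def blahut_arimoto(W, tol=1e-12, max_iter=10_000):
    """capacity (in bits) of a DMC with transition matrix W[x, y] = P(y|x)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)
    for _ in range(max_iter):
        q = p @ W                                   # output distribution induced by p
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(W > 0, np.log(W / q), 0.0)
        c = np.exp((W * log_ratio).sum(axis=1))     # c_x = exp D( W(.|x) || q )
        lower, upper = np.log(p @ c), np.log(c.max())
        if upper - lower < tol:
            break
        p = p * c / (p @ c)                         # multiplicative update
    return lower / np.log(2.0), p

eps = 0.3
W_bec = np.array([[1 - eps, 0.0, eps],              # inputs 0,1 ; outputs 0,1,erasure
                  [0.0, 1 - eps, eps]])
print(blahut_arimoto(W_bec))                        # C = 1 - eps = 0.7 bits, uniform input

delta = 0.1
W_bsc = np.array([[1 - delta, delta],
                  [delta, 1 - delta]])
print(blahut_arimoto(W_bsc))                        # C = 1 - h2(0.1) ~ 0.531 bits
```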
below is a brief account of our results and methodology employed in this work .the starting point of our approach is lemma [ nformula ] in section [ mutual - information - rate - section ] , a key lemma that expresses the conditional entropy in a form that is particularly effective for analyzing asymptotics of when is close to .as elaborated in theorem [ wolfsconjecture ] , lemma [ nformula ] naturally gives a lower and upper bound on , where the lower bound gives a counterpart result of wolf s conjecture for a bec( ) .moreover , when applied to the case when is a markov chain , lemma [ nformula ] yields some explicit series expansion type formulas in theorem [ entropyformula ] and corollary [ memorylessec ] , which aptly pave the way for characterizing the asymptotics of the input - constrained erasure channel capacity . herewe remark that the method in have been further developed for more general families of memory channels in via examining the contractiveness of an associated random dynamical system .however , the methodology to derive asymptotics of the mutual information rate in this work capitalizes on certain characteristics that are in a sense unique to erasure channels . in section [ general - asymptotics ], we consider a memoryless erasure channel with the input supported on an irreducible finite - type constraint , and in theorem [ asyerasurec ] , we derive partial asymptotics of its capacity in the vicinity of where is written as the sum of a constant term , a linear term in and an -term .the lower bound part in the proof of this theorem follows from an easy application of theorem [ entropyformula ] , and the upper bound part hings on an adapted argument in . in section [ binary - asymptotics ], we consider a bec( ) with the input being a first - order markov process supported on the -rll constraint .within this special setup , we show in theorem [ concavitybec ] that the is strictly concave with respect to some parameterization of . and in section [ sub-2 ] , we numerically evaluate and the corresponding capacity - achieving distribution using the randomized algorithm proposed in which proves to be convergent given the concavity of . moreover , the concavity of guarantees the uniqueness of the capacity achieving distribution , based on which we derive full asymptotics of the above input - constrained bec( ) around in theorem [ foc ] , where is expressed as an infinite sum of all -terms . 
in section [ feedback - section ], we turn to the scenarios when there might be feedback in our erasure channel .we first prove in theorem [ fbnot ] that when there is no input constraint , the feedback does not increase the capacity of the erasure channel even with the presence of the channel memory .when the input constraint is not trivial , however , we show in theorem [ yonglong - feedback ] that feedback does increase the capacity using the example of a bec( ) with the ( )-rll input constraint , and so feedback may increase the capacity of input - constrained erasure channels even if there is no channel memory .the results obtained in this section suggest the intricacy of the interplay between feedback , memory and input constraints .in this section , we focus on the mutual information of the erasure channel ( [ mec ] ) introduced in section [ introduction - section ] .the starting point of our approach is the following key lemma , which is particularly effective for analysis of input - constrained erasure channels .[ nformula ] for any , we have }h(x_{0}|x_{d})p(e_0=1,e_{d}=1,e_{d^c}=0),\ ] ] where \triangleq \{-n,\cdots,-1\} ] , also , we have }p(e_0=1,e_{d}=1,e_{d^c}=0)\log p(e_0=1|e_{d}=1,e_{d^c}=0)\label{1erasure},\end{aligned}\ ] ] where follows from a similar argument as in the proof of ( [ 0erasure ] ) and from ( [ pformula ] ) , it then follows that }\sum_{y_{-n}^{0 } : \mathcal{i}(y_{-n}^{0})=d\cup\{0\}}p_{x}(y_{d},y_0)p(e_0=1,e_{d}=1,e_{d^c}=0)\log p_{x}(y_{0}|y_{d})\notag\\ & = \sum_{d \subseteq [ -n , -1]}h(x_0|x_{d})p(e_0=1,e_{d}=1,e_{d^c}=0).\label{eerasure}\end{aligned}\ ] ] the desired formula for then follows from ( [ 0erasure ] ) , ( [ 1erasure ] ) and ( [ eerasure ] ) .one of the immediate applications of lemma [ nformula ] is the following lower and upper bounds on .[ wolfsconjecture ] for the upper bound , it follows from lemma [ nformula ] that }h(x_{0}|x_{d})p(e_0=1,e_{d}=1,e_{d^c}=0)\notag\\ & \stackrel{(a)}{\leq } \lim_{n\to \infty}\sum_{d \subseteq [ -n,-1]}h(x_{0 } ) p(e_0=1,e_{d}=1,e_{d^c}=0)\notag\\ & \leq \lim_{n\to \infty}\sum_{d \subseteq [ -n,-1 ] } p(e_0=1,e_{d}=1,e_{d^c}=0 ) \log k \notag\\ & = p(e_0=1 ) \log k \notag\\ & = ( 1-{\varepsilon } ) \log k , \notag\end{aligned}\ ] ] where we have used the fact that conditioning reduces entropy for .assume is of topological order , and let be the -order markov chain that achieves the noiseless capacity of the constraint .again , it follows from lemma [ nformula ] that }h(\hat{x}_{0}|\hat{x}_{d})p(e_0=1,e_{d}=1,e_{d^c}=0)\notag\\ & \ge \lim_{n\to \infty}\sum_{d \subseteq [ -n , -1]}h(\hat{x}_{0}|\hat{x}_{-m}^{-1},\hat{x}_{d})p(e_0=1,e_{d}=1,e_{d^c}=0)\notag\\ & \stackrel{(a)}{= } \lim_{n\to \infty}\sum_{d \subseteq [ -n , -1]}h(\hat{x}_{0}|\hat{x}_{-m}^{-1})p(e_0=1,e_{d}=1,e_{d^c}=0)\notag\\ & = p(e_0=1 ) h(\hat{x}_{0}|\hat{x}_{-m}^{-1 } ) \notag\\ & = ( 1-{\varepsilon } ) c(\mathcal{s } , 0 ) , \notag\end{aligned}\ ] ] where we have used the fact that is an -th order markov chain for .the upper bound part of theorem [ wolfsconjecture ] also follows from the well - known fact that ( see theorem [ fbnot ] ) and for any , which is obviously true .let denote the capacity of a bsc( ) with the -rll constraint . 
in posed the following conjecture on : where .a weaker form of this bound has been established in by counting the possible subcodes satisfying the -rll constraint in some linear coding scheme , but the conjecture for the general case still remains open .it is well known that is the capacity of a bsc( ) without any input constraint , and is the capacity of a bec( ) without any input constraint .so , for an input - constrained bec( ) , the lower bound part of theorem [ wolfsconjecture ] gives a counterpart result of wolf s conjecture . when applied to the channel with a markovian input , lemma [ nformula ]gives a relatively explicit series expansion type formula for the mutual information rate of ( [ mec ] ) .[ entropyformula ] assume is an -th order input markov chain .then , where and : \mbox{for all } , \{i_{j},i_{j}+1,\cdots , i_{j}+m\ } \not\subseteq \ { i_1,\cdots , i_{u}\}\} ] . then \theta^{n}}{(1-g_{2})g_{2}}+(2\theta+2)(1/g_{n}-2)\right\}\\ & \le&\frac{(2\theta-2 - 4\theta^n)b_{n}c_{n}+(1+\theta)(1+\theta^n)\theta^n(4-q(n,\theta)\theta^{n-1})}{b_{n}c_{n}(1+\theta)^{3}(1+\theta^n)}. \vspace{-3mm}\end{aligned}\ ] ] note that the above numerator , as a function of , takes the maximum at . denote this maximum by , where to complete the proof , it suffices to prove .substituting into , we have \\ & \ge&h(n,\theta),\end{aligned}\ ] ] where the following facts can be easily verified : * takes the minimum at some , where satisfies the following equation * for ] .when , that is , when the channel is `` perfect '' with no erasures , both and boil down to the noiseless capacity of the constraint , which can be explicitly computed ; however , little progress has been made for the case when due to the lack of simple and explicit characterization for and . in terms of numerically computing and , relevant work can be found in the subject of fsmcs , as input - constrained memoryless erasure channels can be regarded as special cases of fsmcs .unfortunately , the capacity of an fsmc is still largely unknown and the fact that our channel is only a special fsmc does not seem to make the problem easier .recently , vontobel _ et al . _ proposed a generalized blahut - arimoto algoritm ( gbaa ) to compute the capacity of an fsmc ; and in , han also proposed a randomized algorithm to compute the capacity of an fsmc . for both algorithms ,the concavity of the mutual information rate is a desired property for the convergence ( the convergence of the gbaa requires , in addition , the concavity of certain conditional entropy rate ) . on the other hand , as elaborated in , such a desired property , albeit established for a few special cases , is not true in general . the concavity established in the previous section allows us to numerically compute using the algorithm in .the randomized algorithm proposed in iteratively compute in the following way : ,\\ \theta_{n}+a_{n}g_{n^b}(\theta_{n } ) , & \mbox{otherwise } , \end{cases}\ ] ] where is a simulator for ( for details , see ) .the author shows that converges to the first - order capacity - achieving distribution if is concave with respect to , which has been proven in theorem [ concavitybec ] .therefore , with proven convergence , this algorithm can be used to compute the first - order capacity - achieving distribution and the first - order capacity ( in bits ) , which are shown in fig .[ capacity - achievingdistribution ] and fig .[ capacity1 ] , respectively . 
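the first-order markov capacity computed above admits a direct numerical cross-check. assuming i.i.d. erasures and a binary first-order markov input on the (1,infinity)-rll constraint, conditioning on the nearest non-erased past symbol is exact, and the series expansion behind lemma [ nformula ] reduces to i(x;y) = (1-epsilon)^2 * sum_{j>=1} epsilon^(j-1) h(x_0 | x_{-j}). the sketch below truncates this series and maximises it over the single transition parameter on a grid; it is a rough check under the stated assumptions, not the randomized algorithm used above, and the truncation length and grid are arbitrary.

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mi_rate(theta, eps, j_max=200):
    """I(X;Y) in bits for a BEC(eps) with first-order Markov (1,inf)-RLL input.

    Uses I = (1-eps)^2 * sum_{j>=1} eps^(j-1) * H(X_0 | X_{-j}), truncated at j_max
    (valid for i.i.d. erasures and a first-order Markov input)."""
    P = np.array([[theta, 1 - theta],     # state 0 -> {0, 1}
                  [1.0,   0.0     ]])     # state 1 must return to 0 (no "11")
    pi = np.array([1.0, 1 - theta]) / (2 - theta)
    total, Pj = 0.0, np.eye(2)
    for j in range(1, j_max + 1):
        Pj = Pj @ P                       # j-step transition probabilities
        h_j = pi[0] * binary_entropy(Pj[0, 1]) + pi[1] * binary_entropy(Pj[1, 1])
        total += eps ** (j - 1) * h_j
    return (1 - eps) ** 2 * total

thetas = np.linspace(0.01, 0.99, 197)
for eps in (0.0, 0.2, 0.5):
    vals = [mi_rate(t, eps) for t in thetas]
    i = int(np.argmax(vals))
    print(f"eps={eps:.1f}: first-order Markov capacity ~ {vals[i]:.4f} bits at theta ~ {thetas[i]:.2f}")
# at eps = 0 this reproduces the noiseless capacity log2((1+sqrt(5))/2) ~ 0.6942 bits
```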
as in section [ sub-1 ] , the noiseless capacity of -rll constraint achieved by the first - order markov chain with the transition probability matrix ( [ achievingmatrix ] ) .so , we have where . in this section ,we give a full asymptotic formula for around , which further leads to a full asymptotic formula for around .the following theorem gives the taylor series of in around , which leads to an explicit formula for the -th derivative of at , whose coefficients can be explicitly computed . [ foc ]a ) is analytic in for and where is taken over all nonnegative intergers satisfying the constraint and \b ) is analytic in for with the following taylor series expansion around : where .\a ) for , with theorem [ concavitybec ] establishing the concavity of , should be the unique zero point of the derivative of the mutual information rate .so , and satisfies according to the analytic implicit function theorem , is analytic in for . in the following the -th order derivative of at computed .it follows from the leibniz formula and the faa di bruno formula that which immediately implies a ) .\b ) note that it then follows from the leibniz formula that therefore , which immediately implies b ) . despite their convoluted looks , ( [ theta - max ] ) and ( [ first - order - full ] ) are explicit and computable .below , we list the coefficients of ( in bits ) and up to the third order , which are numerically computed according to ( [ theta - max ] ) and ( [ first - order - full ] ) and rounded off to the ten thousandths decimal digit : [ cols="^,^,^,^,^",options="header " , ]in this section , we consider the input - constrained erasure channel ( [ mec ] ) as in section [ introduction - section ] however with possible feedback , and we are interested in comparing its feedback capacity and its non - feedback capacity .the following theorem states that for the erasure channel without any input - constraint , feedback does not increase the capacity and both of them can be computed explicitly .this result is in fact implied by theorem in , where a random coding argument has been employed in the proof ; we nonetheless give an alternative proof in appendix [ fb - nfb ] for completeness .[ fbnot ] for the erasure channel ( [ mec ] ) without any input constraints , feedback does not increase the capacity , and we have on the other hand , we will show in the following that feedback may increase the capacity when the input constraint in the erasure channel is non - trivial . as elaborated below , thisis achieved by comparing the asymptotics of the feedback capacity and the non - feedback capacity for a special input - constrained erasure channel . 
in ,sabag _ et al ._ computed an explicit formula of feedback capacity for bec with -rll input constraint .[ fbc ] the feedback capacity of the -rll input - constrained erasure channel is where the unique maximizer satisfies clearly , the explicit formula in theorem [ fbc ] readily gives the asymptotics of the feedback capacity .to see this , note that and .straightforward computations yield hence , it then follows from straightforward computations that for the case when is close to , .so , we have proven the following theorem : [ yonglong - feedback ] for a bec( ) with the -rll input constraint , feedback increases the channel capacity when is small enough .an independent work in also found that feedback does increase the capacity of a bec( ) with the same input constraint , by comparing a tighter bound of non - feedback capacity , obtained via a dual capacity approach , with the feedback capacity .recently , sabag _ et al . _ also computed an explicit asymptotic formula for the feedback capacity of a bsc( ) with the input supported on the -rll constraint . by comparing the asymptotics of the feedback capacity with the that of non - feedback capacity , they showed that feedback does increase the channel capacity in the high snr regime .it is well known that for any memoryless channel without any input constraint , feedback does not increase the channel capacity .theorem [ fbnot ] states that when there is no input constraint , the feedback does not increase the capacity of the erasure channel even with the presence of the channel memory .theorem [ yonglong - feedback ] says that feedback may increase the capacity of input - constrained erasure channels even if there is no channel memory .these two theorems , together with the results in , suggest the intricacy of the interplay between feedback , memory and input constraints .we first prove that a similar argument using the independence of and as in the proof of ( [ pformula ] ) yields that it then follows that }\sum_{y_1^n:\mathcal{i}(y_1^n)=d}p(e_{d}=1 , e_{d^c}=0)p(x_{d}=y_{d})\log p(e_{d}=1 , e_{d^c}=0)\nonumber\\ & { } \hspace{4.5mm}-\sum_{d \subseteq [ 1 , n]}\sum_{y_1^n:\mathcal{i}(y_1^n)=d}p(e_{d}=1 , e_{d^c}=0)p(x_{d}=y_{d } ) \log p(x_{d}=y_{d})\nonumber\\ & = -\sum_{d \subseteq [ 1 , n]}p(e_{d}=1 , e_{d^c}=0)\log p(e_{d}=1 , e_{d^c}=0)+\sum_{d\subseteq [ 1 , n]}p(e_{d}=1 , e_{d^c}=0)h(x_d)\nonumber\\ & = \sum_{d \subseteq [ 1 , n]}p(e_{d}=1 , e_{d^c}=0)h(x_d)+h(e_1^n)\nonumber\\ & \le h(e_1^n)+\sum_{d \subseteq [ 1 , n]}p(e_{d}=1 ,e_{d^c}=0)|d| \log k\nonumber\\ & = h(e_1^n)+\ee[e_1+\cdots+e_n ] \log k \label{paomo},\end{aligned}\ ] ] where the only inequality becomes equality if is i.i.d . with the uniform distribution .it then further follows that \log k\\ & \stackrel{(b)}{= } p(e_1=1 ) \log k\\ & = ( 1-{\varepsilon } ) \log k , \end{aligned}\ ] ] where follows from ( [ paomo ] ) and follows from the ergodicity of . the desired ( [ c - nfb ] ) then follows from the fact that the only inequality becomes equality if is i.i.d . with the uniform distribution. let , independent of , be the message to be sent and denote the encoding function .as shown in , using the chain rule for entropy , we have where ( a ) follows from the fact that is a function of and and if and only if , ( b ) follows from the independence of and . note that for , and for , where follows from the independence of and .it then follows that where follows from ( [ not0p ] ) and follows from ( [ 0p ] ) . 
since for any $ ] , which implies that is an -dimensional probability mass functiontherefore , through a similar argument as before , we have }\sum_{y_1^n:\mathcal{i}(y_1^n)=d}p(e_{d}=1 , e_{d^c}=0)q(y_1^n ) \log q(y_1^n)\nonumber\\ & \le h(e_1^n)+\sum_{d \subseteq [ 1 , n]}p(e_{d}=1 , e_{d^c}=0)|d| \log k\nonumber\\ & = h(e_1^n)+\ee[e_1+\cdots+e_n ] \log k , \label{hz}\end{aligned}\ ] ] where the inequality follows from the fact that is an -dimensional probability mass function . * acknowledgement .* we would like to thank navin kashyap , haim permuter , oron sabag and wenyi zhang for insightful discussions and suggestions and for pointing out relevant references that result in great improvements in many aspects of this work .d. m. arnold and h. a. loeliger , `` on the information rate of binary - input channels with memory , '' in _ proceedings of ieee international conference on communications _, vol . 9 , pp . 26922695 , jun .d. m. arnold , h. a. loeliger , p. o. vontobel , a. kavcic and w. zeng , simulation - based computation of information rates for channels with memory , " _ ieee .inf . theory _ , vol.52 , no.8 , pp . 34983508 ,2006 .b. marcus , r. roth , and p. siegel , `` constrained systems and coding for recording channels , '' _ handbook of coding theory _ ,i , ii.1em plus 0.5em minus 0.4emamsterdam : north - holland , pp . 16351764 , 1998 .h. d. pfister , `` the capacity of finite - state channels in the high - noise regime , '' _ entropy of hidden markov processes and connections to dynamical systems _ , london math .lecture note series .cambridge : cambridge univ . press , 2011 , vol .179222 .h. pfister , j. b. soriaga , and p. siegel , `` on the achievable information rates of finite state isi channels , '' in _ proceedings of ieee global telecommunications conference _, vol . 5 , pp . 29922996 , nov .2001 .yang , shaohua and kavcic , a. and tatikonda , s.,``feedback capacity of stationary sources over gaussian intersymbol interference channels , '' _ global telecommunications conference , 2006 .globecom 06 ., no . , pp.16 , nov . 27 2006-dec . 1 2006 . p. o. vontobel , a. kavi , d. m. arnold , and h. a. loeliger , `` a generalization of the blahut - arimoto algorithm to finite - state channels , '' _ ieee trans .inf . theory _54 , no .18871918 , may 2008 .
in this paper , we examine an input - constrained erasure channel and we characterize the asymptotics of its capacity when the erasure rate is low . more specifically , for a general memoryless erasure channel with its input supported on an irreducible finite - type constraint , we derive partial asymptotics of its capacity , using some series expansion type formulas of its mutual information rate ; and for a binary erasure channel with its first - order markovian input supported on the -rll constraint , based on the concavity of its mutual information rate with respect to some parameterization of the input , we numerically evaluate its first - order markov capacity and further derive its full asymptotics . the asymptotics obtained in this paper , when compared with the recently derived feedback capacity for a binary erasure channel with the same input constraint , enable us to draw the conclusion that feedback may increase the capacity of an input - constrained channel , even if the channel is memoryless . _ index terms : _ erasure channel , input constraint , capacity , feedback .
the independence sampler is the incorporation of rejection sampling within an mcmc framework .the rejection sampler obtains samples from a random variable , , with probability density function by first proposing a candidate value from a random variable , , with probability density function , and secondly accepting as a sample from with probability , where .otherwise is rejected , see , page 60 .the success of the rejection sampler depends upon making a good choice of such that is small and that is straightforward to sample from .the mcmc independence sampler is the modification of the above where a markov chain is constructed with at iteration , a candidate proposed from and if accepted is set equal to .otherwise . the rejection sampler , andconsequently , the independence sampler can usually be implemented in a straightforward and efficient manner for low dimensional ( target ) distributions but as the dimension of increases it becomes increasingly more challenging to obtain a good choice of .therefore the independence sampler is rarely used as an mcmc algorithm in its own right but instead independence sampler moves are often incorporated within metropolis - within - gibbs to effectively update low dimensional subsets of , see , page 15 .the main focus for independence samplers has been to choose the proposal density so as to have an acceptance probability as close to 1 as possible . whilst this makes intuitive sense ,the aim of the current paper is to challenge the idea of aiming for an acceptance probability as close to 1 as possible within the context of using independence samplers for updating augmented data in mcmc algorithms .specifically , we are interested in the bayesian statistical problem of obtaining samples from the posterior distribution of the parameters of a model given data , in the case where the likelihood , is intractable .we assume that given augmented data , is tractable and an mcmc algorithm can be constructed to obtain samples from the joint posterior of and , .then it is natural to construct an mcmc algorithm which alternates between updating the parameters and the augmented data as follows : 1 .update given and ._ i.e. _ use .2 . update given and ._ i.e. _ use .our focus is the use of independence samplers to update given and . for updating augmented dataa natural independence sampler often presents itself .for example , in an epidemic modelling context where denotes the removal times of infected individuals , denotes the infection and infectious period parameters and denotes the infection times of individuals , a natural candidate for the infection time of individual who is removed at time is , where denotes the infectious period distribution , see , and section [ ss : homo ] . for non - centered parameterisations , , we can often denote as a deterministic function with easy to compute , where is a vector of independent and identically distributed uniform random variables , see and section [ ss : bdm ] . then to update we can propose a new value from .the dimension of the augmented data , , can be orders of magnitude higher than and , so updating one component of at a time can be prohibitive .therefore we seek generic guidelines for updating multiple components of at a time and optimising the performance of the resulting independent sampler . 
specifically , this work formalises findings in and in using the independence sampler for data augmentation giving simple guidelines for producing close to optimal independence samplers .the guidelines obtained are similar to those given in for the random walk metropolis algorithm and comparisons with the random walk metropolis algorithm are made .the paper is structured as follows . in section [ s : theo ] , we study the properties of the independence sampler for independent and identically distributed product densities .this idealised scenario mimics the set up in where optimal scaling of the random walk metropolis algorithm was first explored and as in allows us to get a handle on understanding the key factors in optimising the independence sampler .in particular , we show that the optimal number of components , , of to update , is the which maximises the mean number of components per move . in the case where this optimal is largethis corresponds to a mean acceptance rate of approximately .thus there is a somewhat surprising link with the optimal scaling of the random walk metropolis algorithm , with which we make comparison and highlight the benefits of the independence sampler . in section [ s : ex ], we explore the optimal performance of the independence sampler for increasingly complex problems . in section [s : ex : intro ] , we study product gaussian target densities with gaussian and -distribution proposals demonstrating the optimal scaling results obtained in section [ s : theo ] . in sections [ ss :homo ] and [ ss : bdm ] we apply the independence sampler to two epidemic models , the classic homogeneously mixing sir epidemic model , and and a birth - death - mutation ( bdm ) model for an emerging , evolving disease , and . in section [ ss :homo ] , we show that for the homogeneously mixing sir epidemic model updating a proportion of the infection times so as to obtain a mean acceptance rate of approximately is optimal .this demonstrates that as observed with the random walk metropolis algorithm the findings of section [ s : theo ] are informative in designing independence samplers beyond the limited confines of product densities . for the bdm model in section [ ss : bdm ] the findings are somewhat different with a lower optimal mean acceptance rate corresponding to large scale data augmentation. finally , in section [ s : conc ] , we make some concluding remarks highlighting the possible benefits of the independence sampler over random walk metropolis for large scale data augmentation and the differences seen between the two epidemic models in sections [ ss : homo ] and [ ss : bdm ] .in this section we consider the theoretical properties of the independence sampler for the special case where , a product of independent and identically distributed univariate densities , .the main focus is on the asymptotic behaviour as the number of components , mirroring analysis performed in for the random walk metropolis algorithm .the aim is to characterise the optimal performance of the independence sampler in terms of the number of components to update and to draw interesting comparisons of similarities and differences with the random walk metropolis algorithm . for the independence sampler we propose to select uniformly at random components from to update . for , drawn from with probability density function , whilst for , .therefore the acceptance probability for the proposed move from to is for and , let denote the position of the markov chain after iterations . 
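before the asymptotic analysis, the trade-off in the block size can be seen empirically. the following simulation is an illustrative sketch only, not the simulation code used later in the paper: the target is a product of standard normals, the proposal refreshes k coordinates, chosen uniformly at random, from an overdispersed normal, and the mean acceptance rate together with the mean number of coordinates moved per iteration is reported for several k; the dimension, proposal variance and run length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_sampler(n=200, k=10, sigma_q=1.5, iters=20_000):
    """independence sampler on a product N(0,1)^n target; each move refreshes k
    randomly chosen coordinates from a N(0, sigma_q^2) proposal."""
    def log_ratio(z):                      # sum over a block of log f(z)/q(z)
        return np.sum(-0.5 * z**2 + 0.5 * (z / sigma_q) ** 2 + np.log(sigma_q))
    x = rng.standard_normal(n)             # start in stationarity
    accepted = 0
    for _ in range(iters):
        idx = rng.choice(n, size=k, replace=False)
        prop = sigma_q * rng.standard_normal(k)
        log_alpha = log_ratio(prop) - log_ratio(x[idx])
        if np.log(rng.uniform()) < log_alpha:
            x[idx] = prop
            accepted += 1
    acc = accepted / iters
    return acc, acc * k                    # acceptance rate, mean coordinates moved per iteration

for k in (1, 5, 10, 20, 40, 80):
    acc, moved = run_sampler(k=k)
    print(f"k={k:3d}: acceptance ~ {acc:.3f}, coordinates moved per iteration ~ {moved:.2f}")
```

with these (assumed) choices the product acc * k is maximised at a moderate block size at which the acceptance rate is well below one, in line with the discussion that follows.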
as in , we assume that the markov chain is initiated with drawn from and thus for all , .the independent and identically distributed nature of the stationary and proposal distributions means that as in it suffices to focus on the behaviour and performance of the independence sampler on the first component only .specifically , for , letting } ] and let we have the following lemma which mirrors , lemma 2.1 , which states that with sufficiently high probability we can focus upon }^n ] denotes the mean acceptance probability , in stationarity , of a proposed move and ] denotes the mean number of jumps , per unit time , of the limiting jump process , and hence , we seek which maximises ] and = d ( f \| q) ] and variance , say .now if is small , which will be the case where the central limit theorem is relevant , then .moreover , if where is small , then it is straightforward to show that and that .thus for large , with , we have by , proposition 2.4 , that & \approx & k { \mathbb{e } } [ 1 \wedge \exp ( v_k^\ast ) ] \nonumber \\ & = & k \times \left\{\phi \left ( - \frac{k i}{\sqrt{k j } } \right ) + \exp \left ( - ki + \frac{kj}{2 } \right ) \phi \left ( - \sqrt{kj } + \frac{ki}{\sqrt{k j } } \right ) \right\ } \nonumber \\ & \approx & k \times 2 \phi \left ( - \sqrt{\frac{k i}{2 } } \right),\end{aligned}\ ] ] where the latter approximation follows from setting .replacing by and by in the right hand side of , we obtain , which is the function maximized in , corollary 1.2 to maximise the optimal scaling of the random walk metropolis algorithm .the only difference is the form of which here depends upon the kullback - leibler divergence between the target and proposal distribution , whereas in ] . therefore given that it follows that )^4 ] \nonumber \\ & = & \binom{n-1}{k-1}^{-4 } \sum_{\mathbf{i}_1 \in \mathcal{i}_n } \sum_{\mathbf{i}_2 \in \mathcal{i}_n } \sum_{\mathbf{i}_3 \in \mathcal{i}_n } \sum_{\mathbf{i}_4 \in \mathcal{i}_n } { \mathbb{e}}\left [ \prod_{j=1}^4 ( \hat{h}_{\mathbf{i}_j } ( y , x_1 , \mathbf{x}_0^{n- } ) - { \mathbb{e}}[\hat{h}_{\mathbf{i}_j } ( y , x_1 , \mathbf{x}_0^{n- } ) ] ) \right ] .\nonumber \\\end{aligned}\ ] ] note that if have no elements in common then and are independent. therefore ) ] ] .the number of combinations of such that and have at least one element in common is which is bounded above by for all sufficiently large .similarly , the number of combinations of such that , and all have at least one element in common with is which is bounded above by for all sufficiently large .now ) ] $ ] is only non - zero if either can be grouped into two pairs such that both pairs have at least one element in common or if three of the components all have at least one element in common with the fourth .( note that there is overlap between these two classifications . ) thus using and , it is straightforward to combine with to show that )^4 ] \nonumber \\ & \leq & \binom{n-1}{k-1}^{-4 } \left\ { 3 \left(\frac{n^{2k-3}}{\{(k-2)!\}^2 } \right)^2 + 4 \frac{(k-1)^2 n^{4k-7}}{\ { ( k-2)!\}^4 } \right\ } \nonumber \\ & \leq & \frac{(k-1)^4}{(n - k)^{4k-4 } } \left\ { 3 n^{4k-6 } + 4 ( k-1)^2 n^{4k-7 } \right\}. \end{aligned}\ ] ] since the bound obtained in , holds for all , it follows from and that the lemma immediately follows by combining and .
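the approximate objective obtained above, k * 2 Phi(-sqrt(k i / 2)) with i = d(f||q), can be maximised numerically for any value of the divergence. the toy sketch below (illustrative only) tabulates the maximising k and the corresponding approximate acceptance probability 2 Phi(-sqrt(k* i / 2)) for a few values of i; the upper bound on the k grid is an arbitrary choice.

```python
from math import erf, sqrt
import numpy as np

def Phi(z):                                   # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def optimal_block_size(I, k_max=100_000):
    ks = np.arange(1, k_max + 1)
    # approximate mean number of coordinates moved per iteration: k * 2 Phi(-sqrt(k I / 2))
    obj = np.array([k * 2.0 * Phi(-sqrt(k * I / 2.0)) for k in ks])
    k_star = int(ks[np.argmax(obj)])
    return k_star, 2.0 * Phi(-sqrt(k_star * I / 2.0))

for I in (1e-4, 1e-3, 1e-2, 1e-1):
    k_star, acc_star = optimal_block_size(I)
    print(f"D(f||q) = {I:g}: optimal k ~ {k_star}, acceptance at the optimum ~ {acc_star:.3f}")
```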
the independence sampler is one of the most commonly used mcmc algorithms usually as a component of a metropolis - within - gibbs algorithm . the common focus for the independence sampler is on the choice of proposal distribution to obtain an as high as possible acceptance rate . in this paper we have a somewhat different focus concentrating on the use of the independence sampler for updating augmented data in a bayesian framework where a natural proposal distribution for the independence sampler exists . thus we concentrate on the proportion of the augmented data to update to optimise the independence sampler . generic guidelines for optimising the independence sampler are obtained for independent and identically distributed product densities mirroring findings for the random walk metropolis algorithm . the generic guidelines are shown to be informative beyond the narrow confines of idealised product densities in two epidemic examples . _ correspondence address : _ department of mathematics and statistics , fylde college , lancaster university , lancaster , la1 4yf , uk _ running title : _ optimal scaling of the independence sampler * keywords : * augmented data ; birth - death - mutation model ; markov jump process ; mcmc ; sir epidemic model .
we consider the problem of goodness of fit test for the model of ergodic diffusion process when this process under the null hypothesis belongs to a given parametric family .we study the cramer - von mises type statistics in two different cases .the first one is based on local time estimator and the second one is based on empirical distribution function estimator .we show that the cramer - von mises type statistics converge in both cases to some limits which do not depend on the unknown parameter , so the test is asymptotically parameter free ( apf ) .let us remind the similar statement of the problem in the well known case of the observations of independent identically distributed random variables .suppose that the distribution of under hypothesis is , where is some unknown parameter . then the cramer - von mises type test is ^ 2{\rm d}f\left(x- \hat \vartheta _ n\right)\ ] ] where the statistic under hypothesis converges in distribution to a random variable which does not depend on .therefore the threshold can calculated as solution of the equation the details concerning this result can be found in darling . for more general problemssee the works of kac , kiefer & wolfowitz , durbin or martynov , . a similar problem exists for the continuous time stochastic processes , which are widely used as mathematic models in many fields .the goodness of fit tests ( gof ) are studied by many authors . for examplekutoyants discusses some possibilities of the construction of such tests . in particular , he considers the kolmogorov - smirnov statistics and the cramer - von mises statistics based on the continuous observation .note that the kolmogorov - smirnov statistics for ergodic diffusion process was studied in fournie and in fournie and kutoyants .however , due to the structure of the covariance of the limit process , the kolmogorov - smirnov statistics is not asymptotically distribution free in diffusion process models .more recently kutoyants has proposed a modification of the kolmogorov - smirnov statistics for diffusion models that became asymptotically distribution free .see also dachian and kutoyants where they propose some gof tests for diffusion and inhomogeneous poisson processes with simple basic hypothesis .it was shown that these tests are asymptotically distribution free . in the case of ornstein - uhlenbeck process kutoyantsshowed that the cramer - von mizes type tests are asymptotically parameter free .another test was studied by negri and nishiyama .suppose that we observe an ergodic diffusion process , solution to the following stochastic differential equation we want to test the following null hypothesis where is some known function and the shift parameter is unknown .we suppose that .let us introduce the family the alternative is defined as where \right\ } ] and , \ii ) , such that from i ) it follows the convergence in every bounded set ] we can write so we have for the second part , we can write suppose that , similar result holds for .then we obtain thus we have now we prove ii ) .as in lemma [ intlim ] , we can deduce that so for , note that and along with we get for any , take , then we have .+ + * proof of theorem [ mainresult1 ] . 
* + we can write see that and that , , the smoothness of gives us the convergence + applying lemma [ intlim ] and lemma [ lemintconv ] we get we see that the limit of the statistic does not depend on , and the test with defined by belongs to .+ the same procedure can be applied with other estimators of the unknown parameter and of the invariant density , provided that they are consistent and asymptotically normal .for example , we can take the minimum distance estimator ( mde ) for : and the kernel estimators as estimator for the invariant density under some regularity conditions , the mde is asymptotically normal ( see or ) : also if we do not present explicitly here , it can be verified that does not depend on .the kernel estimator has the same asymptotic properties of the lte ( see ). then we can construct the statistic which converges to that does not depend on the unknown parameter . so that the test with the solution of the equation belongs to . in this section, we study the gof test defined by the statistic where is the empirical distribution function : denote and in theorem 4.6 , the following equality is presented : then using we have , for , and for we have and we can write these inequalities allow us to deduce the following bounds and moreover hence we get the asymptotic normality of : as in lemma [ lemconv ] and lemma [ lemintconv ] , if conditions and hold , we can show the convergence of the vector : and the convergence of the integral : we obtain finally ^ 2{\textrm{d}}x\\ & = & \int_{-\infty}^\infty\big[\eta_t^f(x)-\hat u_tf(x-\tilde{\vartheta}_t)\big]^2{\textrm{d}}x\\ & = & \int_{-\infty}^\infty\big[\eta_t^f(x ) -\hat u_tf(x-{\vartheta}_0)\big]^2{\textrm{d}}x+o(1)\\ & \longrightarrow&\int_{-\infty}^\infty\big[\eta^f(x-{\vartheta}_0 ) -\hat u f(x-{\vartheta}_0)\big]^2{\textrm{d}}x\\ & = & \int_{-\infty}^\infty\left(\eta^f(y)-\hat u f(y)\right)^2{\textrm{d}}y=\delta.\end{aligned}\ ] ] so that the limit of the statistic does not depend on , and the test with the solution of belongs to . + * remark .* it can be shown that in the case of kolmogorov - smirnov tests where the limit distributions of these statistics ( under hypothesis ) do not depend on .the proofs can be done following the same lines as in kutoyants and negri respectively .in this section we discuss the consistency of the proposed tests .we study the tests statistics under the alternative hypothesis that is defined as where \right\ } $ ] .under this hypothesis we have : let all drift coefficients under alternative satisfy the conditions and , then for any we have and * proof .* remember that under hypothesis , the mle converges to the point which minimize the distance where is the random variable of invariant density ( see , proposition 2.36 ) : in addition , denoted with the norm in , we have we can deduce moreover and finally we have the result for : a similar result can be obtained for .we consider the ornstein - uhlenbeck process .remind that the tests for o - u process were studied in as well .suppose that the observed process under the null hypothesis is we simulate trajectories of ( resp . ) and calculate the empirical quantiles of ( resp . ) .we obtain the simulated density for and that are showed in graphic [ density ] .the values of the thresholds for different are showed in graphic [ d_vep ] .dachian s. and kutoyants y.a . (2007 ) . 
on the goodness - of - fit tests for some continuous time processes . statistical models and methods for biomedical and technical systems , f. vonta et al . ( eds ) , birkhauser , boston , 395 - 413 .
a problem of goodness - of - fit test for ergodic diffusion processes is presented . in the null hypothesis the drift of the diffusion is supposed to be in a parametric form with unknown shift parameter . two cramer - von mises type test statistics are studied . the first one is based on local time estimator of the invariant density , the second one is based on the empirical distribution function . the unknown parameter is estimated via the maximum likelihood estimator . it is shown that both the limit distributions of the two test statistics do not depend on the unknown parameter , so the distributions of the tests are asymptotically parameter free . some considerations on the consistency of the proposed tests and some simulation studies are also given . * keywords : * ergodic diffusion process , goodness - of - fit test , cramer - von mises type test .
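a small simulation illustrates the empirical-distribution-function statistic for the ornstein-uhlenbeck example above. the sketch below is a rough illustration under simplifying assumptions, not the code behind the reported simulations: the process dx_t = -(x_t - theta) dt + dw_t is discretised by euler-maruyama, theta is estimated by the discretised continuous-observation mle, the invariant law under the null is taken to be n(theta, 1/2), and the integral in one common normalisation of the statistic is approximated on a grid; the step size, horizon and grid are arbitrary choices.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def norm_cdf(x, mean=0.0, sd=1.0):
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Euler-Maruyama for dX = -(X - theta) dt + dW
theta_true, T, dt = 1.0, 500.0, 0.01
n = int(T / dt)
x = np.empty(n + 1)
x[0] = theta_true
for i in range(n):
    x[i + 1] = x[i] - (x[i] - theta_true) * dt + sqrt(dt) * rng.standard_normal()

# discretised MLE of the shift: theta_hat = (X_T - X_0 + int_0^T X_t dt) / T
theta_hat = (x[-1] - x[0] + np.sum(x[:-1]) * dt) / T

# Cramer-von Mises type statistic based on the empirical distribution function
grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 2000)
F_emp = np.searchsorted(np.sort(x), grid, side="right") / x.size
F_null = np.array([norm_cdf(g, mean=theta_hat, sd=sqrt(0.5)) for g in grid])  # invariant law N(theta, 1/2)
W_T = T * np.sum((F_emp - F_null) ** 2) * (grid[1] - grid[0])

print(f"theta_hat ~ {theta_hat:.3f} (true {theta_true}),  CvM-type statistic ~ {W_T:.4f}")
```

repeating the loop over many independent trajectories gives an empirical null distribution of the statistic from which thresholds can be read off, which is how the quantiles reported above are obtained.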
since halmos s observation in 1943 that automorphisms of compact groups automatically preserve haar measure , these maps have provided a rich class of examples in dynamics . in casethe group is abelian , its dual group is a module over the laurent polynomial ring ] , i.e. , the integral group ring of .the commutative algebra of such modules provides effective machinery for analyzing such actions .this point of view was initiated in 1989 by kitchens and the second author , and a fairly complete theory of the dynamical properties of such actions is now available .let denote an arbitrary countable group , and let denote an action of by automorphisms of a compact abelian group , or an _algebraic -action_. the initial steps in analyzing such actions were taken in ( * ? ? ?* chap . 1 ) , and give general criteria for some basic dynamical properties such as ergodicity and mixing . in 2001einsiedler and rindler investigated the particular case when , the discrete heisenberg group , as a first step towards algebraic actions of noncommutative groups . herethe concrete nature of suggests that there should be specific answers to the natural dynamical questions , and they give several instances of this together with instructive examples .however , the algebraic complexity of the integral group ring prevents the comprehensive analysis available in the commutative case .a dramatic new development occurred in 2006 with the work of deninger on entropy for principal -actions .let , and let denote the principal left ideal generated by .then acts on the quotient , and there is a dual -action on the compact dual group , called a _principal -action_. deninger showed in that in many cases the entropy of equals the logarithm of the fuglede - kadison determinant the linear operator corresponding to on the group von neumann algebra of . in case , this reduces to the calculation in of entropy in terms of the logarithmic mahler measure of .subsequent work by deninger , li , schmidt , thom , and others shows that this and related results hold in great generality ( see for example , , and ) . in authors proved that three different concepts connected with -actions , namely entropy , fuglede - kadison determinants , and -torsion , coincide , revealing deep connections that are only partly understood .these ideas have some interesting consequences .for example , by computing the entropy of a particular heisenberg action in two different ways , we can show that for almost every pair of real numbers . despite its simplicity, this fact does not appear to follow from known results on random matrix products .our purpose here is to survey what is known for the heisenberg case , and to point out many of the remaining open questions . 
as is the simplest noncommutative example ( other than finite extensions of , which are too close to the abelian case to be interesting ) , any results will indicate limitations of what a general theory can accomplishalso , the special structure of should enable explicit answers to many questions , and yield particular examples of various dynamical phenomena .it is also quite instructive to see how a very general machinery , used for algebraic actions of arbitrary countable groups , can be made quite concrete for the case of .we hope to inspire further work by making this special case both accessible and attractive .let be a countable discrete group .the _ integral group ring _ of consists of all finite sums of the form with , equipped with the obvious ring operations inherited from multiplication in .the _ support _ of is the subset .suppose that acts by automorphisms of a compact abelian group .such actions are called _algebraic -actions_. denote the action of on by .let be the ( discrete ) dual group of , with additive dual pairing denoted by for and .then becomes a module over by defining to be the unique element of so that for all , and extending this to additive integral combinations in . conversely , if is a -module , its compact dual group carries a -action dual to the -action on .thus there is a 1 - 1 correspondence between algebraic -actions and -modules .let be the additive torus .then the dual group of can be identified with via the pairing where and . for , the action of on is defined via duality by for all . by taking , we obtain that .it is sometimes convenient to think of elements in as infinite formal sums , and then .this allows a well - defined multiplication of elements in by elements from , both on the left and on the right .we remark that the shift - action is opposite to the traditional shift direction when is or , but is forced when is noncommutative .this has sometimes caused confusion ; for example the last displayed equation in is not correct .now fix .let be the principal left ideal generated by .the quotient module has dual group .an element is in iff for all .this is equivalent to the condition that for all , where .hence exactly when , using the conventions above for right multiplication of elements in by members of .in other words , if we define to be right convolution by , then is the kernel of . in terms of coordinates, is in precisely when for all .our focus here is on the discrete heisenberg group , generated by , , and subject to the relations , , and .alternatively , is the subgroup of generated by we will sometimes use the notation for the integral group ring of when emphasizing its ring - theoretic properties .the center of is .the center of is then the laurent polynomial ring ] .suppose that .adjusting by units and reordering factors if necessary , we may assume that and have the form and .expanding , we find that hence and .then we would have this proves that has no nontrivial factorizations in .since is nilpotent of rank 2 , it is polycyclic , and so is both left- and right - noetherian , i.e. satisfies the ascending chain condition on both left ideals and on right ideals .suppose now that is a finitely generated left -module , say generated by .the map defined by \mapsto g_1m_1+\dots g_l m_l ] .let \in r^{k\times l} ] , corresponding to the quotient module and the principal -action .let be a compact abelian group and let denote haar measure on , normalized so that . 
if is a continuous automorphism of , then the measure defined by is also a normalized translation - invariant measure .hence , and is -invariant .this shows that if is an algebraic action of a countable group on , then is -measure - preserving .a measurable set is _ -invariant _ if agrees with off a null set for every .the action is _ ergodic _ if the only -invariant sets have measure 0 or 1 .the following , which is a special case of a result due to kaplansky , gives an algebraic characterization of ergodicity .let be a countable discrete group , and be an algebraic -action on a compact abelian group whose dual group is . then is ergodic if and only if the -orbit of every nonzero element of is infinite .roughly speaking , this result follows from the observation that the existence of a bounded measurable -invariant function on is equivalent to the existence of a nonzero finite -orbit in . for actions of the heisenberg group , this raises the question of characterizing those for which is ergodic .the first result in this direction is due to ben hayes .[ thm : hayes ] for every the principal algebraic -action is ergodic . we give a brief sketch of the proof .first recall that is a unique factorization domain .define the _ content _ of to be the greatest common divisor in of the nonzero coefficient polynomials , and put .a simple variant of the proof of gauss s lemma shows that for all .now fix .the case is trivial , so assume that .suppose that has finite -orbit in . then there are such that and for some . then so that divides in . also , so that =c(g_2) ] , showing that in .hayes called a group _ principally ergodic _ if every principal algebraic -action is ergodic .he extended theorem [ thm : hayes ] to show that the following classes of groups are principally ergodic : torsion - free nilpotent groups that are not virtually cyclic ( i.e. , do not contain a cyclic subgroup of finite index ) , free groups on more than one generator , and groups that are not finitely generated . clearly is not principally ergodic , since for example the action of on the module /{\langle}x^k -1{\rangle} ] so that is mixing .let .then the principal -action is mixing if and only if has no roots that are roots of unity .suppose first that has a root that is a root of unity , so that has a factor dividing for some .then , but \in{{\mathbb{z}}{\gamma}}g ] , and is the irreducible factorization of in ] is one of the form for some and choice of integers , not all .there is a well - defined ring homomorphism ] and is not divisible by a generalized cyclotomic polynomial in and .2 . ] is expansive if and only if does not vanish on . the usual proof of wiener s theorem via banach algebras is nonconstructive since it uses zorn s lemma to create maximal ideals .paul cohen has given a constructive treatment of this and similar results .we are grateful to david boyd for showing us a simple algorithm for deciding expansiveness in this case .there is a finite algorithm , using only operations in ] , whether or not is expansive .we may assume that ] .observe that any root of on must also be a root of .let the degree of be .then , so that the coefficients of are symmetric . if is odd , then is a root of since all other possible roots come in distinct pairs . if is even , it is simple to compute ]. 
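for the two-variable case the criterion in part 2 can at least be probed numerically: evaluate the polynomial on a fine grid of the two-torus and inspect the minimum modulus. this is only a heuristic illustration (a finite grid cannot certify non-vanishing, unlike the exact algorithms discussed next), and the two test polynomials are standard examples chosen for the sketch: 3 - x - y is lopsided and hence expansive, while 1 + x + y vanishes at the conjugate pair of primitive cube roots of unity and hence is not.

```python
import numpy as np

def min_modulus_on_torus(coeffs, n_grid=400):
    """min |f(x, y)| over a grid of the 2-torus; f given as {(a, b): c} for c * x^a * y^b."""
    t = np.exp(2j * np.pi * np.arange(n_grid) / n_grid)
    X, Y = np.meshgrid(t, t)
    F = np.zeros_like(X)
    for (a, b), c in coeffs.items():
        F = F + c * X**a * Y**b
    return np.abs(F).min()

f_lopsided = {(0, 0): 3, (1, 0): -1, (0, 1): -1}    # 3 - x - y : expansive
f_cyclo    = {(0, 0): 1, (1, 0): 1, (0, 1): 1}      # 1 + x + y : vanishes at the cube roots of unity

print("min |3 - x - y| on the grid:", min_modulus_on_torus(f_lopsided))   # bounded away from 0
print("min |1 + x + y| on the grid:", min_modulus_on_torus(f_cyclo))      # small (a true zero nearby)
```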
we can then apply sturm s algorithm , which uses a finite sequence of calculations in ] .decidability of expansiveness for other groups , even just , is a fascinating open question .[ prob : expansiveness ] is there a finite algorithm that decides , given , whether or not is expansive ?there is one type of polynomial in that is easily seen to be expansive . call _lopsided _ if there is a such that .this terminology is due to purbhoo .if is lopsided with dominant coefficient , adjust by multiplying by so that . then , where .we can then invert in by geometric series : thus lopsided polynomials are expansive .the product of lopsided polynomials need not be lopsided ( expand ) .surprisingly , if is expansive , then there is always a such that is lopsided .this was first proved by purbhoo for using a rather complicated induction from the case , but his methods also provided quantitative information he needed to approximate algorithmically the complex amoeba of a laurent polynomial in several variables .the following short proof is due to hanfeng li .[ prop : lopsided - is - invertible ] let be a countable discrete group and let be expansive .then there is such that is lopsided . since is expansive , by theorem [ thm : expansive ]there is a such that .there is an obvious extension of the definition of lopsidedness to .note that lopsidedness is an open condition in .first perturb slightly to having finite support , and then again slightly to having finite support and rational coordinates .this results in such that is lopsided .choose an integer so that . then is lopsided .one consequence of the previous result is that if is expansive , then the coefficients of must decay exponentially fast . recall that if is finitely generated , then a choice of finite symmetric generating set induces the _ word norm _ , where is the length of the shortest word in generators in whose product is . clearly . a different choice for symmetric generating set gives an equivalent word norm in the sense that there are two constants such that for all .[ prop : exponential - decay ] let be a finitely generated group , and fix a finite symmetric generating set .suppose that is invertible in . then there are constants and such that for all . by the previous proposition ,there is a such that is lopsided .we may assume that the dominant coefficient of occurs at , so that , where and has .let and put .now and for all .furthermore , if , then . hence if , then .thus whenever .this shows that with and suitable .since , we obtain the result with the same and different .we remark that a different proof of this proposition , using functional analysis and under stricter hypotheses on , was given in ( * ? ? ?if and ] .the _ logarithmic mahler measure _ of is defined as .the _ mahler measure _ of is defined as .suppose that with .factor over as .then jensen s formula shows that has the alternative expression as where for .mahler s motivation was to derive important inequalities in transcendence theory . 
using that , he showed that if n{\leqslant}-1 ] , and suppose there is a for which and such that vanishes for at least one value of .is there a value of for which the partial sums are bounded above for all ?fraczek and lemaczyk showed in that these sums are unbounded for almost every .the argument of besicovitch to prove the existence of bounded sums makes essential use of continuity , and does not apply in this case where there are logarithmic singularities .[ rem : algebraic ] by invoking some deep results in diophantine approximation theory , we can show that the second alternative in the last paragraph of the proof of theorem [ thm : linear - expansive ] never occurs .for we claim that if , then must be an algebraic number in . assuming this for the moment, then a result of gelfond shows that for every there is a such that for every .since is smooth , its fourier coefficients decay rapidly as , and so the formal solution to , with for , also decays rapidly , giving a continuous solution .to justify our claim , write where and ) are laurent polynomials with integer coefficients and the and are algebraic functions .the condition becomes , via jensen s formula , which is an algebraic equation in , so is algebraic . from this , and the preceding proof , under our assumptions that neither nor vanish anywhere on ,we conclude that if and is irrational , then _ every _ choice of will yield a nonzero point with .this idea was observed independently by evgeny verbitskiy ( oral communication ) . to illustrate some of the preceding ideas , we provide an informative example .this was chosen so that the diophantine estimates mentioned in remark [ rem : algebraic ] can be given an elementary and self - contained proof , rather than appealing to difficult general diophantine results .in addition , the constants in our analysis are effective enough to rule out nonexpansive behavior at all rational .one consequence is that for this algebraic -action , nonexpansiveness can not be detected by looking at only finite - dimensional representations of .[ exam : nonexpansive ] let , where and .it is easy to check that neither nor vanishes on , so that does not vanish anywhere on . also , has roots and .consider , so that here . then =\log|a(\xi)|+\log|c(\zeta)| ] for all .the assumption on implies that , so that the series for converges uniformly on .then gives the required coboundary .[ lem : diophantine ] let ] , define the _ complex variety _ of to be where , and the _ unitary variety _ of to be then by theorem [ thm : expansive ] and wiener s theorem , is expansive if and only if . in this case , let . asbefore , let be given by , which clearly commutes with the left -actions .put . since , we see that , and is also homoclinic .furthermore , is _ fundamental _ , in the sense that every homoclinic point is a finite integral combination of translates of ( cf .* lemma 4.5 ) ) . in this caseall homoclinic points decay rapidly enough to have summable coordinates . in order to describe homoclinic points of principal -actions , we first `` linearize '' as follows .put suppose now that is expansive , and define .then is invertible on , and , where for every .[ prop : cover ] let be a countable discrete group , and be expansive , so that is invertible in . put and let be defined as , where is reduction of coordinates .then : 1 . is surjective , and in fact the restriction of to the set of those with is also surjective ; 2 . ; 3 . commutes with the relevant left -actions ; and 4 . 
is continuous in the weak * topology on closed , bounded subsets of .suppose that .there is a unique lift with and for all .then in , hence , and in fact .furthermore this proves ( 1 ) , and the remaining parts are routine verifications .if is expansive , let and call the _ fundamental homoclinic point _ of .this name is justified by the following .[ prop : fundamental - homoclinic - point ] let be a countable discrete group , and be expansive. put and .then every element of is a finite integral combination of left translates of .suppose that , and lift to as in the proof of proposition [ prop : cover ] .then , and since as , the coordinates of must vanish outside of a finite subset of , i.e. , . then has the required form .next we show that expansive principal actions have a very useful orbit tracing property called specification .[ prop : specification ] let be a countable discrete group , and be expansive .then for every there is a finite subset of such that if and are arbitrary subsets of with and if and are arbitrary points in , then we can find such that for every , for .let .the set is chosen so that .lift each to , and then truncate each to a having support in .it is then easy to verify that as the required properties ( see ( * ? ? ?4.4 ) for details ) .a point with is called _summable_. let denote the group of all summable homoclinic points for .summability is crucial in using homoclinic points for dynamical purposes .let ] provided that the dimension of is at most .more precisely , with this condition , there is another polynomial ] has the form with .factor over as .then yuzvinskii showed that an interpretation of from shows that term is due to geometric expansion , while the term is due to -adic expansions for those primes dividing , an adelic viewpoint that has been useful in other contexts as well .mahler measure is defined for polynomials =r_d ] . then if and only if is a product of generalized cyclotomic polynomials times a monomial or its negative .this result was originally proved by boyd using deep results of schinzel , but was later given a simpler and more geometric proof by smyth .turning now to noncommutative , we sketch some background material on related von neumann algebras .the functional analysis used can be found , for example , in ( * ? ? ?* chaps .vii , viii ) .let which is a complex hilbert space with the standard inner product =\sum_{{\delta}}w_{\delta}\overline{v_{\delta}} ] . using the functional calculus for , we can form the operator , where the lower limit indicates that we ignore any point mass at that may have .we then define .\ ] ] this fuglede - kadison determinant has the following very useful properties ( see for details ) : we remark that multiplicativity of is not obvious , essentially being a consequence of the campbell - baker - hausdorff formula and vanishing of trace on commutators , although for technical reasons a complex variables approach is more efficient .let , and write elements of as . the fourier transform identifies with , with being identified with the function , where ] given by right multiplication by , where is an extension of to such ( see ) for details .[ thm : li - thom ] let be a countable discrete amenable group , and let .suppose that is injective. then . more generally , if and if is injective , then . if , then particular , .this is a highly nontrivial fact , since there is no direct connection between and .the computation , or even estimation , of the values of fuglede - kadison determinants is not easy . 
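for the commutative case the fuglede-kadison determinant reduces to the mahler measure, so at least there a crude numerical estimate is available as a riemann sum of log|f| over the torus. the sketch below (numpy assumed; the lopsided polynomial 3 + x + y on z^2 is our own choice) illustrates this; by the theorem just quoted the value is also the entropy of the corresponding principal action. it is only an illustration for the abelian case, not a method for the noncommutative determinants considered next.

```python
import numpy as np

# f(x, y) = 3 + x + y on Z^2: lopsided (|3| > |1| + |1|), hence expansive.
n = 1200                                    # grid points per circle
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x = np.exp(1j * theta)[:, None]             # first circle
y = np.exp(1j * theta)[None, :]             # second circle

# for the commutative group Z^2 the Fuglede-Kadison determinant of f is the
# exponential of the Mahler measure, i.e. the mean of log|f| over the
# two-torus; by the entropy theorem quoted above this also gives the entropy.
log_det = np.mean(np.log(np.abs(3.0 + x + y)))
print(log_det)                              # numerically close to log 3 ~ 1.0986
```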
in the next two sections we will explicitly calculate entropy for certain principal actions of the heisenberg group . as pointed out by deninger , there are examples of lopsided polynomials for which can be computed by a rapidly converging series .[ exam : traces ] let .write , where .it is easy to see ( * ? ? ?* lemma 2.7 ) that the spectrum of , considered as an element in , is contained in ] , where is the riemann zeta - function .next we extend the face entropy inequality described for principal -actions in ( * ? ? ?* remark 5.5 ) to principal -actions .this inequality proves that many such actions have strictly positive entropy .we start with the basic case .[ prop : basic - inequality ] let with . then . by definitionthe map has kernel , and is surjective since multiplication by is injective on ] , the index of in , and that is an injective endomorphism on this group by expansiveness .[ cor : fixed - point - count ] under the hypotheses in proposition [ prop : fix ] , observe that is an injective linear map on the real vector space of dimension ] . then for every choice of for , the points are -separated .hence if is any flner sequence , then for every we have that }>0 . \qedhere\ ] ] for , it turns out that the periodic points for are always dense in for every finitely generated -module ( * ? ? ?the simple example of multiplication by on dualizes to an automorphism of a compact group with no nonzero periodic points , since is invertible in ( see ( * ? ? ?* example 5.6(1 ) ) ) .we do not know the answer to the following .let be a countable discrete residually finite group , and be a finitely generated -module .must the -periodic points always be dense in ?we now focus on using periodic points to calculate entropy .it is instructive to see how these calculations work in a simple example .[ exam : periodic - golden - mean ] let , and let ] had a root , then the factor in the calculation of determinant would occasionally be very small , which could cause the limit not to exist .this is one manifestation of the difficulties with nonexpansive actions .we turn to the heisenberg case . for put , which is a normal subgroup of of index .let be expansive . recall that is an expansive constant for .in particular , if is a finite - index subgroup of , and if , then for any fundamental domain of there is a such that .a bit of notation about the limits we will be taking is convenient .if is a quantity that depends on , we write to mean that for every there is a such that for every and all sufficiently large and . [ thm : periodic - point - limit ] let be expansive , and define as above .then } \log|{\operatorname{\mathsf{fix}}}_{{{\lambda}_{rq , sq , q}}}({\alpha_f})|= { \mathsf{h}}({\alpha_f}).\ ] ] in light of example [ exam : periodic - golden - mean ] , it is tempting to use a fundamental domain for of the form , but such a is far from being right - flner , since right multiplication by will drastically shear in the -direction . the method used in (5.7 ) is to decompose into pieces , each of which is thin in the -direction , and translate these pieces to different locations in .the union of these translates will still be a fundamental domain , but now it will also be flner , and so can be used for entropy calculations .this method depends in general on a result of weiss , and ultimately goes back to the -quasi - tiling machinery of ornstein and weiss . 
in our case, we can give a simple description of this decomposition .choose integers such that but as .consider the set it is easy to verify that since is small compared with , right multiplication by a fixed creates only small distortions , and so for every we have that define - 1}x^{rqj}f_{rq , a(q)j , q}\,\,,\ ] ] where if does not evenly divide , make the obvious modification in the last set .then is also a fundamental domain for , but now it is also a flner sequence as by . by the separation property of periodic points , for all we have that . since is flner , for the reverse inequality , let and let .choose a finite set such that .the sets also form a flner sequence , and as .fix , , and for the moment , and choose a -separated set of maximal cardinality .for every , let be its lift , with and .write for the unique point with for all .our choice of implies that the points in are -separated , hence distinct .hence .since is flner and as , we see that completing the proof .we apply this result , combined with corollary [ cor : fixed - point - count ] , to compute entropy for expansive principal -actions . using the above notations ,let , so that we will compute this determinant by decomposing into -invariant subspaces , each having dimension 1 or . to do this , for each let observe that , and similarly , so that is a common eigenvector for and , let , and . for arbitrary by the above , for every and , the subspace is invariant under the right action of . now let , and assume that is expansive .adjusting by a power of if necessary , we may assume that has the form , where each ] .if , then the matrix of on takes the following circulant - like form , where for notational convenience we assume that : by our expansiveness assumption is injective , and each is -invariant , hence for all and all .now suppose that .for convenience we will assume that both and are primes distinct from .then is a basis for .note that if , then for . since by relative primeness of and , we can parameterize the spaces by .this gives the -invariant decomposition we now evaluate the limit of as using the decomposition . on each of the -dimensional spaces in acts as multiplication by .hence the contribution to of the first large summand in is by expansiveness , never vanishes for , so that is continuous on . by convergence of the riemann sums to the integral , as we have that the additional factor of in the denominator of shows that it converges to 0 as . for the spaces with , the expansiveness assumption shows that never vanishes for , so again is continuous on .hence for , as we have that adding these up over , and observing that , we have shown the following .[ thm : expansive - entropy ] let be expansive .then where the matrices are as given above . at first glancethe denominator in appears puzzling .the explanation is that one comes from averaging over the -th roots of unity , while the other comes from the size of the matrices . from the point of view of von neumannalgebras , we should really be using the `` normalized determinant '' corresponding to the normalized trace on , and then the second would not appear . for expansive polynomials in that are linear in the entropy formula in the preceding theorem can be simplified considerably .[ thm : linear - entropy ] let , where and are laurent polynomials in ] , then shows that .we will use the convention . 
with the usual conventions about arithmetic and inequalities involving , the results that follow will make sense and are true even if some of the polynomials are 0 .suppose that ] each have degree .then for every , if , then the previous lemma implies that for every we have that and .hence and the second inequality follows by integrating over .for the first inequality , observe that and similarly for .we need one more property of mahler measure , proved by david boyd .recall that , and by convention .[ thm : boyd - continuity ] the map given by is continuous .continuity is clear when the coefficients remain bounded away from 0 since the roots are continuous functions of the coefficients , but for example if then continuity is more subtle .boyd s proof , which also applies the polynomials in several variables with bounded degree , uses graeffe s root - squaring method , managing to sidestep various delicate issues , leading to a remarkably simple proof .if , then we have seen in that by , for any complex numbers and , hence it follows that let . writing and ,note that these are laurent polynomials of uniformly bounded degree in ] , and claim this never happens provided that is expansive .write and , where ] ?it will follow from results in the next section that is valid if either or .let us turn to the quadratic case .we start with a simple result about determinants .let , , and be arbitrary complex numbers .then if for every , where subscripts are taken mod , then the value of this determinant simplifies to where and . taking subscripts mod , a permutation of contributes a nonzero summand in the expansion of the determinant if and only if it has the form , where , 1 , or 2 .the sequences corresponding to permutations are precisely the closed paths of length in the labeled shift of finite type depicted below . the paths and give the terms and , respectively , while it is easy to check that the golden mean shift of finite type produces middle term of the result .if for all , then each occurrence of a block 20 in a closed path of length can be replaced by the block 11 , changing the factor to together with an appropriate sign change .the result of these substitutions is that every closed path of length in the golden mean shift gives the same contribution to the expansion , and there are ^q=\tau^q+\sigma^q ] . however , since this will be subsumed under deninger s results , there is no need to provide an independent proof here . to state the main result in , we need to give a little background . for each irrational is the rotation algebra , which is the von neumann algebra version of the twisted algebras used in allan s local principle ( see for details ) .there are also natural maps .as explained in , there is a faithful normalized trace function on each such that for every .this implies that for determinants we have hence we need a way of evaluating the integrands for .suppose that is monic in and of degree , and so has the form where the ] and . for every irrational denote the lyapunov exponents for and as above by with multiplicities .then and hence by theorem [ thm : li - thom ] and , let . for each is a nonzero vector and a multiplier with such that since the determinants of these matrices all have absolute value 1 , there is exactly one nonnegative lyapunov exponent , of multiplicity one . 
for each irrational its valueis given by , and hence a numerical calculation of the graph of is shown in figure [ fig : surface ] , and indicates the complexity of these phenomena even in the quadratic case .although lyapunov exponents are generally difficult to compute , there is a method to obtain rigorous lower bounds on the largest lyapunov exponent known as `` herman s subharmonic trick . ''its use in our context was suggested to us by michael bjorklund .let , where the ] define .theorem [ thm : met ] shows that for almost every we have that expand ,\ ] ] where the are polynomials in with complex coefficients .now each is subharmonic for , and hence is also subharmonic for .thus furthermore , as . the entries in are uniformly bounded above , and hence is uniformly bounded for all and .thus observe that since only nonnegative powers of are allowed , this result is reminiscent of the face entropy inequality in corollary [ cor : face - entropy ] . it is stronger since it gives a lower bound for every irrational , but the integrated form is weaker since it uses only the top lyapunov exponent to give a lower bound for the entropy of the face corresponding to . [exam : big - example ] we finish by returning to example [ exam : y^2-xy-1 ] .let . using the change of variables , and , becomes monic and linear in , hence by the previous theorem we can compute that . on the other hand , treating as monic and quadratic in , we see that the lyapunov exponents must all be . but the determinant has absolute value 1 , and so in fact the lyapunov exponents must vanish almost everywhere .in other words , by taking transposes to reverse the order of the product , we obtain from the introduction .although this appears to be a simple result , we have not been able to obtain this as a consequence of any known results in random matrix theory . manfred einsiedler and klaus schmidt , _ markov partitions and homoclinic points of algebraic -actions _ , in : dynamical systems and related topics , proc .* 216 * ( 1997 ) , interperiodic publishing , moscow , 259279 .douglas lind , klaus schmidt , and evgeny verbitskiy , _ entropy and growth rate of periodic points of algebraic -actions _ , in : dynamical numbers : interplay between dynamical systems and number theory , ed .s. kolyada , yu .manin , m. mller , p. moree and t. ward , contemp .532 , american mathematical society , providence , r.i . , 2010 .s. a. yuzvinskii , _ metric properties of the endomorphisms of compact groups _ , izv .nauk sssr ser. mat . * 20 * ( 1965 ) , 12951328 ( russian ) ; translated in amer . math .* 66 * ( 1968 ) , 6398 .
the study of actions of countable groups by automorphisms of compact abelian groups has recently undergone intensive development , revealing deep connections with operator algebras and other areas . the discrete heisenberg group is the simplest noncommutative example , where dynamical phenomena related to its noncommutativity already illustrate many of these connections . the explicit structure of this group means that these phenomena have concrete descriptions , which are not only instances of the general theory but are also testing grounds for further work . we survey here what is known about such actions of the discrete heisenberg group , providing numerous examples and emphasizing many of the open problems that remain .
when modeling nonlinear problems , dissipative algorithms have provided researchers with an extremely valuable tool since usually most non dissipative schemes fail to produce a stable evolution .more precisely , when using neutrally stable algorithms , instabilities often arise which spoil the evolution .the addition of artificial dissipation removes these instabilities by `` damping '' the growing modes of the solution , in a somewhat controlled way .therefore , its inclusion in a discretization scheme provides a practical and economic way of achieving longer evolutions .the most widely used algorithms with this property are the family of lax schemes ; whereby adding to the equation a term proportional to one obtains a stable discretization of the system that would otherwise be unstable .however , one might correctly ask whether this is not tantamount to solving a completely different problem .the beauty of these methods is that the proportionality factor depends on the discretization size , and in the continuous limit the approximation to modified pde results in a consistent approximation to the original one .although there is much experience with these kind of schemes , most of the standard dissipative algorithms have been tailored for cauchy initial value problems , where initial data is provided at one instant of time and evolved to the next instant by means of the evolution equation . however , in radiative problems , it is sometimes more convenient to choose a sequence of hypersurfaces adapted to the propagation of the waves , and therefore one adopts a foliation adapted to the characteristics of the pde under study . in the present work , we present a new algorithm adequate for hyperbolic systems .the underlying strategy of the proposed algorithm is quite different from the conventional cauchy - type methods .rather , it is inspired by analytic concepts developed in the 1960s for studies of gravitational radiation in general relativity and in their subsequent numerical integrations .the main features of this approach are the use of characteristic surfaces ( for the foliation of the spacetime ) and compactification methods ( that enable the inclusion of infinity in the numerical grid ) to describe radiation properties .although evolution algorithms ( for systems possessing some kind of symmetry ) developed within this approach proved to be successful in the linear and mildly nonlinear regime , they produce unstable evolutions when applied to the general case ; which shows the need for devising algorithms that could be applied in this situation . in the present work we present a new algorithm having dissipative properties , making it a valuable tool to study systems where strong fields might be present . 
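to make the effect of such a dissipative term concrete, the following toy sketch (our own, written in python with numpy; it is not the algorithm developed in this paper) integrates the one-dimensional advection equation with a forward-time, centred-space update: without dissipation the scheme is unstable, while adding a second-difference term whose size vanishes with the grid spacing stabilises the evolution (the choice eps = 1/2 reproduces the lax-friedrichs scheme).

```python
import numpy as np

# 1d advection u_t + u_x = 0 on a periodic grid; illustrative toy only.
nx, courant, steps = 200, 0.5, 1000
dx = 1.0 / nx
x = np.arange(nx) * dx
u0 = np.exp(-100.0 * (x - 0.5) ** 2)        # initial pulse

def step(u, eps):
    """forward-time update with centred space differences plus an
    artificial-dissipation term eps*(u_{j+1} - 2 u_j + u_{j-1})."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * courant * (up - um) + eps * (up - 2.0 * u + um)

u_plain, u_diss = u0.copy(), u0.copy()
for _ in range(steps):
    u_plain = step(u_plain, 0.0)            # no dissipation: unstable growth
    u_diss = step(u_diss, 0.5)              # eps = 1/2 is the Lax-Friedrichs scheme

print(np.max(np.abs(u_plain)), np.max(np.abs(u_diss)))   # blows up vs. stays bounded
```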
in section the details of the algorithm for the wave equationare presented and its stability properties discussed .section 3 is devoted to treat a model problem which shows how the dissipative algorithm might be a useful tool for numerical investigations in general relativity .finally , in section we describe two particular applications of this algorithm in the numerical implementation of general relativity .waves of amplitude traveling in one spatial direction with unit speed obey the familiar equation which can be solved in the region , assuming and are given .if , instead , one is interested in the solving the problem restricted to the region , boundary data must also be provided corresponding to .the analysis of this problem can be described in a simple way in terms of the characteristics of this equation , which are given by through each spatial point . in particular , when solving eq . in the region .the domain of dependence of a point is given by , with naturally defined by the characteristics passing through as and is the region to the future of * the line , + * the region defined by or } \\ & \\ 0 & \mbox{otherwise , } \end{array } \right.\ ] ] where is the spin - two spherical harmonic .the code was run for different values of under different choices of the dissipation parameter . in all cases, unstable evolutions resulted from the choice , however for nonzero values of the code ran without any stability problem as illustrated in figure [ fig : evol1 ] ( for a run where ) . yet , as expected of any dissipative algorithm , the solution decreases in amplitude with time .this highlights the need to carefully tune the value of .notwithstanding this fact , it is important to stress once again that this set of runs would not have been possible without dissipation .this problem was originally studied in the perturbative regime by price .there is no known analytic solution to the problem in the nonlinear regime and applying numerical methods is the only way to study it .the accuracy of the dissipative scheme can be assessed indirectly by inspection of the gravitational waves produced by the system .gravitational waves can be described in terms of two _ polarization modes _ ( refered to as _ plus _ and _ cross _ modes ) . however , when considering spacetimes with axisymmetry , the cross mode must vanish and this fact can be used to test the algorithm . calculatingthe gravitational radiation is a rather involved problem that exceeds the scope of this work .a set of algorithms to numerically calculate the gravitational wave forms was constructed in the characteristic formulation in and tested under different situations .we used these algorithms in the present work to calculate the polarization modes for the choice and with an axisymmetric pulse with as the initial data .the cross polarization mode actually converges to zero in second order indicating an accurate discretization of einstein equations , as can be seen in figure [ fig : conver ] .the algorithm described in this work represents a valuable tool for the study of nonlinear problems in the characteristic formulation .its use enables long term evolution that would otherwise be impossible . 
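for orientation, the sketch below shows the standard non-dissipative null-parallelogram marching scheme for a 1+1 wave equation with a potential, of the kind used in perturbative studies of wave scattering; it only illustrates how data on an initial null cone is marched to successive cones and is not the dissipative algorithm of this work. the potential and the initial data are arbitrary choices made for the illustration, and numpy is assumed.

```python
import numpy as np

# Toy characteristic (double-null) integrator for psi_{,uv} = -(V/4) psi,
# a 1+1 wave equation with a potential, marched cell by cell on a null grid.
# Standard non-dissipative parallelogram scheme, for illustration only.
n = 400
h = 0.1                                     # grid spacing in u and v
u = np.arange(n) * h
v = np.arange(n) * h

def potential(u_val, v_val):
    x = 0.5 * (v_val - u_val)               # tortoise-like coordinate
    return 0.3 / np.cosh(x - 10.0) ** 2     # illustrative potential barrier

psi = np.zeros((n, n))
psi[0, :] = np.exp(-0.5 * ((v - 15.0) / 2.0) ** 2)   # data on initial cone u = 0
psi[:, 0] = psi[0, 0]                                # data on the other null axis

for i in range(1, n):                       # march to the next outgoing cone
    for j in range(1, n):                   # fill that cone along v
        vc = potential(u[i] - 0.5 * h, v[j] - 0.5 * h)     # potential at cell centre
        psi[i, j] = (psi[i - 1, j] + psi[i, j - 1] - psi[i - 1, j - 1]
                     - 0.125 * h * h * vc * (psi[i - 1, j] + psi[i, j - 1]))

# psi[i, :] now holds the field on successive outgoing null cones.
```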
yet, there is still much room for improvement as the number of numerical techniques adapted to characteristic type evolutions is scarce ( as opposed to the situtation in the cauchy type evolution where one has at hand a great number of algorithms ) .the variety of physical problems where propagating waves are to be described , highlights the need of further investigations on `` characteristic '' algorithms . of particular interestis the application of these type of algorithms to the characteristic module constructed to model the collision of a binary black hole self - gravitating system . in this problem, it is imperative to have robust enough schemes capable of dealing with highly nonlinear fields .the complexity of the problem inspired the creation of the binary black hole grand challenge alliance , where a group of u.s .universities and outside collaborators are joining efforts to tackle the problem . a strategy to studythis problem is a `` hybrid '' scheme that implements at the same time a cauchy evolution ( for the region near the black holes ) and a characteristic evolution ( for the exterior region ) .this approach is called _ cauchy - characteristic matching ( ccm ) _ , and in principle , its implementation manages to avoid the problems and to exploit the best features of each evolution scheme .ccm has been shown to work ( and outperform traditional outer boundary conditions ) in situations where special symmetries were assumed and its full dimensional application in g.r .is currently under study .the characteristic code is one of the pieces of this bigger algorithm and the need for robust performance prompted this investigation .however , its use is not limited to g.r .any hyperbolic system describing waves will have an evolution equation similar to eq .( [ weqnul ] ) .the algorithm presented in this work should provide a useful tool in the numerical modeling of these systems .this work has been supported in part by the andrew mellon fellowship , by the nsf grants phy 9510895 and nsf int 9515257 to the university of pittsburgh and by the binary black hole grand challenge alliance , nsf grant phy / asc 9318152 ( arpa supplemented ) . the author would like to thank jeffrey winicour , roberto gmez and nigel bishop for valuable suggestions , also to the universities of south africa and of durban - westville for their hospitality in completing part of this work .computer time for this project has been provided by the pittsburgh supercomputing center under grant phy860023p .
we present a dissipative algorithm for solving nonlinear wave - like equations when the initial data is specified on characteristic surfaces . the dissipative properties built into this algorithm make it particularly useful when studying the highly nonlinear regime where previous methods have failed to give a stable evolution in three dimensions . the algorithm presented in this work is directly applicable to hyperbolic systems proper of electromagnetism , yang - mills and general relativity theories . we carry out an analysis of the stability of the algorithm and test its properties with linear waves propagating on a minkowski background and the scattering off a schwarzschild black hole in general relativity . key words : derivation of finite difference approximations ; stability and convergence of difference methods ; electromagnetism , other . + subject classifications : 65p05 , 65p10 , 77c10 , 77a99 .
matrix variate distributions have proven to be useful for modelling three - way data , such as multivariate longitudinal data . however , in most cases , the underlying distribution has been elliptical such as the matrix variate normal and the matrix variate distributions .however , there has been relatively little work done on matrix variate data that can account for skewness present in the data .the work that has been carried out in the area of matrix variate skew distributions is mostly limited to the matrix variate skew - normal distribution .herein , we derive a matrix variate skew- distribution .the remainder of this paper is laid out as follows . in section 2 ,some background is presented . in section 3 ,the density of the matrix variate skew- distribution is derived and a parameter estimation procedure is given .section 4 looks at some simulations , and we conclude with a summary and some future work ( section 5 ) .one natural method to model three - way data is to use a matrix - variate distribution .there are many examples in the literature of such distributions , the most well - known being the matrix - normal distribution . for notional clarity , we use to denote a realization of a random matrix .an random matrix follows a matrix variate normal distribution with location parameter and scale matrices and of dimensions and , respectively .we write to denote such a random matrix and the density of can be written one well known property of the matrix variate normal distribution is where is the multivariate normal density with dimension , is the vectorization of , and is the kronecker product .although the matrix variate normal is arguably the most mathematically tractable , there are examples of non - normal cases .one famous example is the wishart distribution arising as the distribution of the sample covariance matrix of a multivariate normal sample .more recently , however , there has been some work done in the area of matrix skew distributions such as the matrix - variate skew normal distribution , e.g. , , , and .more information on matrix variate distributions can be found in .very recently , there has also been work done in the area of finite mixtures .specifically , looked at clustering and classification of multivariate longitudinal data using a mixture of matrix variate normal distributions .also , , looked at mixtures of matrix variate distributions .various multivariate distributions such as the multivariate , and skew- , the shifted asymmetric laplace distribution , and the generalized hyperbolic distributions arise as special cases of a normal variance - mean mixture ( cf . * ? ? ?* ch . 6 ) . in this formulation ,the density of a -dimensional random vector takes the form which is equivalent to the representation where and is a latent random variable with density .the multivariate skew- distribution with degrees of freedom arises as a special case with , where denotes the inverse gamma distribution with density function a random variable has a generalized inverse gaussian ( gig ) distribution with parameters and if its density function can be written as where is the modified bessel function of the third kind with index . several functions of gig random variables have tractable expected values , e.g. 
, where , , and is the modified bessel function of the third kind with index .these results will prove to be useful for parameter estimation for the matrix - variate skew- distribution .we will say that an random matrix has a matrix variate skew- distribution , , if can be written where and are matrices , , and .analogous to its multivariate counterpart , is a location matrix , is a skewness matrix , and are scale matrices , and is the degrees of freedom .it then follows that and thus the joint density of and is \right\ } , { \addtocounter{equation}{1}\tag{\theequation}}\label{eqn : joint}\end{aligned}\ ] ] where .we note that the exponential term in can be written as \right\},\ ] ] where therefore , the marginal density of is \right\}dw.\end{aligned}\ ] ] making the change of variables given by we can write \left[\delta({\mathbf{x}};{\mathbf{m}},{\mathbf\sigma},{\mathbf\psi})+\nu\right]}\right).\end{aligned}\ ] ] the density of , as derived here , is considered a matrix variate extension of the multivariate skew- density used by .as discussed by and in the matrix variate normal case , the estimates of and are unique only up to a multiplicative constant .indeed , if we let and , , the likelihood is unchanged .this identifiability issue can be resolved , for example , by setting or .note that , so the estimate of the kronecker product is unique . for the purposes of parameter estimation , note that the conditional density of is ^{\frac{\lambda}{2}}w^{\lambda-1}}{2k_{\lambda}(\sqrt{\rho({\mathbf{a}},{\mathbf\sigma},{\mathbf\psi})[\delta({\mathbf{x}};{\mathbf{m}},{\mathbf\sigma},{\mathbf\psi})+\nu}])}\exp\left\{-\frac{\rho({\mathbf{a}},{\mathbf\sigma},{\mathbf\psi})w+[{\delta({\mathbf{x}};{\mathbf{m}},{\mathbf\sigma},{\mathbf\psi})+\nu}]/{w}}{2}\right\}.\end{aligned}\ ] ] therefore , where .finally , we note that where denotes the multivariate skew- distribution with location parameter , skewness parameter , scale matrix , and degrees of freedom .this can be easily seen from the representation given in and the property of the matrix normal distribution given in .note that the normal variance - mean mixture representation as well as the relationship with the multivariate skew- distribution present two convenient methods to generate random matrices from the matrix variate skew distribution .the former is used in section 4 .suppose we observe a sample of matrices from an matrix variate skew- distribution .as with the multivariate skew- distribution , we proceed as if the observed data is incomplete , and introduce the latent variables .the complete - data log - likelihood is then -\frac{1}{2}\sum_{i=1}^nw_i{\,\mbox{tr}}({\mathbf\sigma}^{-1}{\mathbf{a}}{\mathbf\psi}^{-1}{\mathbf{a}}'),\end{aligned}\ ] ] where is constant with respect to the parameters .we proceed by using an expectation - conditional maximization ( ecm ) algorithm described overleaf . *1 ) initialization * : initialize the parameters .set * 2 ) e step * : update , where where [\delta({\mathbf{x}}_i;\hat{{\mathbf{m}}}^{(t)},\hat{{\mathbf\sigma}}^{(t)},\hat{{\mathbf\psi}}^{(t)})+\hat{\nu}^{(t)}]},\ ] ] and * 3 ) first cm step * : update the parameters . and . the update for the degrees of freedom can not be obtained in closed form .instead we solve for to obtain . where is the digamma function . 
*4 ) second cm step * : update \end{split}\ ] ] * 5 ) third cm step * : update \end{split}\ ] ] * 6 ) check convergence * : if not converged , set and return to step 2 .note that there are several options for determining convergence of this ecm algorithm . in the simulations in section 4, a criterion based on the aitken acceleration is used .the aitken acceleration at iteration is where is the ( observed ) log - likelihood at iteration .the quantity in can be used to derive an asymptotic estimate ( i.e. , an estimate of the value after very many iterations ) of the log - likelihood at iteration , i.e. , ( cf . * ? ? ?* ; * ? ? ?as in , we stop our em algorithms when , provided this difference is positive .as discussed in and for parameter estimation in the matrix variate normal case , the estimates of and are unique only up to a multiplicative constant .indeed , if we let and , , the likelihood is unchanged .however , we notice that , , so the estimate of the kronecker product is unique .we conduct two simulations to illustrate the estimation of the parameters . in both simulations ,we take 50 different datasets of size 100 , from a matrix skew- distribution .also , in both simulations , and . in simulation 1 , we took the location and skewness matrix to be and , respectively , and and in simulation 2 , where in figures [ fig : lineplots1 ] and [ fig : lineplots2 ] , we show line plots of the marginals for each column ( labelled v1 , v2 , v3 , v4 ) of a typical dataset from simulations 1 and 2 , respectively .the dashed red lines denote the means . in figure [fig : lineplots1 ] , the skewness in columns 1 , 2 , and 4 , for simulation 1 , is very prominent when visually compared to column 3 , which has zero skewness .the skewness is also apparent in the lineplots for simulation 2 , however , because the values of the skewness are generally less than those for simulation 1 , it is not as prominent .the component - wise means of the parameters as well as the component wise standard deviations are given in table [ tab : results ] .we see that the estimates of the mean matrix and skewness matrix are very close to the true value for both simulations .moreover , we see that the estimates of and also correspond approximately to the their true values , and thus so would the kronecker product , which is not shown .[ tab : results ]the density of a matrix variate skew- distribution was derived .this distribution can be considered as a three - way extension of the multivariate skew- distribution .parameter estimation was carried out using an ecm algorithm . because the formulation of multivariate skew- distribution this work is based onis a special case of the generalized hyperbolic distribution , it is reasonable to postulate an extension to a broader class of matrix variate distributions .ongoing work considers a finite mixture of matrix variate skew- distributions for clustering and classification of three - way data .domnguez - molina , j. a. , g. gonzlez - faras , r. ramos - quiroga , and a. k. gupta ( 2007 ) .a matrix variate closed skew - normal distribution with applications to stochastic frontier analysis . _36_(9 ) , 16911703 .lindsay , b. g. ( 1995 ) .mixture models : theory , geometry and applications . in _ nsf - cbms regional conference series in probability and statistics _ ,volume 5 .california : institute of mathematical statistics : hayward .
although there is ample work in the literature dealing with skewness in the multivariate setting , there is a relative paucity of work in the matrix variate paradigm . such work is , for example , useful for modelling three - way data . a matrix variate skew- distribution is derived based on a mean - variance matrix normal mixture . an expectation - conditional maximization algorithm is developed for parameter estimation . simulated data are used for illustration . + * keywords * : matrix variate distribution ; skew- distribution
we can learn a lot on the functioning of the human brain in health and disease when we consider it as a large - scale complex network , whose properties can be analyzed using graph theoretical analysis . with the advent of miscellaneous and noninvasive mri techniques , this _connectome _ has been mainly characterized by either structural or functional connectivity .structural connectivity is commonly based on white matter tracts quantified by diffusion tractography ; functional connectivity relies on the other hand on statistical dependencies such as temporal correlation .an important addition to this framework can come from effective connectivity analysis , in which the flow of information between even remote brain regions is inferred by the parameters of a predictive dynamical model .+ for some techniques , such as dynamic causal modelling ( dcm ) and structural equation modelling , these models are built and validated from specific anatomical and physiological hypotheses .other techniques such as granger causality analysis ( gca ) , are on the other hand data - driven and rely purely on statistical prediction and temporal precedence . while powerful and widely applicable , this last approach could suffer from two main limitations when applied to blood - oxygenation level - dependent ( bold)-functional mri ( fmri ) data : confounding effect of hemodynamic response function ( hrf ) and conditioning to a large number of variables in presence of short time series .early interpretation of fmri based directed connectivity by gca always assumed homogeneous hemodynamic processes over the brain ; several studies have pointed out that this is indeed not the case and that we are faced with variable hrf latency across physiological processes and distinct brain regions . recently , a number of studies have addressed this issue proposing to model the hrf according to several recipes . as well , a recent study has proposed that it would still feasible to infer connectivity at bold level , under the assumption that granger causality is theoretically invariant under filtering and that the hrf can be considered as a filter .it is still unclear whether and how specific effects related to hrf disturb the inference of temporal precedence .in addition a simulated or experimental ground truth is difficult to obtain , though some studies on simulated fmri data have tried to reveal the relationship between neural - level and bold - level causal influence .a considerable help to obtain the hrf for deconvolution could come from multimodal imaging where the high temporal resolution of eeg is combined to the high spatial resolution of fmri , but this experimental approach is still far from being optimal and widely applicable .hrf has been studied almost since the early days of fmri . for task - related fmri, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs , i.e. deconvolution according to the explicit task design is possible in this case . for resting - state fmri on the other hand, the absence of explicit inputs makes this task more difficult , unless relying on some specific prior physiological hypothesis . 
in order to overcome these issues and to allow a more general approach , here we present a simple and novel blind - deconvolution technique for bold - fmri signal .+ coming to the second limitation , in order to distinguish among direct and mediated influences in multivariate datasets it is necessary to condition the analysis to other variables .a bivariate analysis would indeed lead to the detection of many false positives . in presence of a large number of variable and short time series, a fully multivariate conditioning could lead to computational problems due to the overfitting . furthermore, conceptual issues would arise in presence of redundant variables . in this paperwe thus apply partial conditioning for granger causality ( pcgc ) to a limited subset of variables , as recently proposed for reconstructing the bold and deconvolved bold level effective connectivity network ( ecn ) and compare them .hemodynamic deconvolution of bold signal is performed as described in . under the assumption that the transformation from neural activation to bold response can be modeled as a linear and time invariant system ,measured fmri data can be seen as the result of the convolution of neural states with a hrf : where is the time and denotes convolution . is the noise in the measurement , which we assume to be white . since the right side of the above equation includes three unobservable quantities , in order to solve the equation for we need to substitute with a hypothetical model of the neural activation for .here we employ a simple on - off model of activation to model : where is the delta function .this allows to fit the hrf according to using a canonical hrf ( two gamma functions ) and two derivatives ( multivariate taylor expansion : temporal derivative and dispersion derivative ) , as is common in most fmri studies .+ once calculated , we can obtain an approximation of the neural signal from the observed data using a wiener filter let , , , and be the fourier transforms of , , , and , respectively .then where denotes complex conjugate .the estimation of the neural states is then given by where is the inverse fourier transform operator .+ for task - related fmri , the stimulus function provides the prior expectations about neural activity and a generative model whose inversion corresponds to deconvolution ; this is in principle not the case for resting - state fmri .nonetheless there is increasing evidence of specific events and neural states that govern the dynamics of the brain at rest .furthermore , tagliazucchi et al . proposed that these events are reflected by relatively large amplitude bold signal peaks and thus that such fluctuations could encode relevant information from resting - state fmri recordings .inspired by their work , we consider resting - state fmri as _ spontaneous event - related _ , and we propose to extract the hrf from those pseudo - events . after doing this, we can employ the deconvolution model in the same way as described above .it is known that the bold response is much slower than the neural activation that is presumed to drive it .consequently , the peak of the bold signal lags behind the peak of neural activation ( i.e. by points ) .so here we assume that these events are generated from .+ glover pointed out that the noise spectrum in task - related fmri can be obtained from time series measurements in nonactivated cortical regions ; here we extend the model to cope with resting - state fmri for which there is no explicit activation . 
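a minimal sketch of the wiener-filter step described above follows (python with numpy and scipy assumed). the hrf used here is a generic double-gamma shape standing in for the one fitted from the pseudo-events, and the noise spectrum is approximated by a constant; both are simplifying assumptions made only for the illustration.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, length=32.0):
    """double-gamma HRF sampled at the repetition time (generic illustrative
    shape with a peak near 6 s and an undershoot near 16 s)."""
    t = np.arange(0.0, length, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def wiener_deconvolve(y, h, noise_power=0.05):
    """estimate the neural signal x from y = h * x + noise with a Wiener filter;
    noise_power is an assumed constant noise spectrum (a simplification of the
    estimate discussed in the text)."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(X))

# toy usage: a few pseudo-events convolved with the HRF plus noise
tr = 0.645
x_true = np.zeros(600)
x_true[[50, 180, 330, 470]] = 1.0
h = canonical_hrf(tr)
y = np.convolve(x_true, h)[:600] + 0.01 * np.random.randn(600)
x_hat = wiener_deconvolve(y, h)
```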
in this study we assumed covariance of noise equal to ] , where is an arbitrary maximum value , choosing the one for which the noise error covariance is smallest as the onset . by this methodwe can perform deconvolution on all bold signals , requiring no information on timing or a priori spatial information of events ; furthermore , the time series could be the average of time series over a region of interest with any scale , or series extracted by independent or principal component analysis . a flow chart for bold signal deconvolutionis shown in fig.[fig1 ] .+ this is the pseudo - code for our procedure . + ] here we employ a methodology proposed in which allows to compute granger causality conditioned to a limited number of variables in the framework of information theory .the idea is that conditioning on a small number of variables , chosen as the most informative for the candidate driver variable , is sufficient to remove indirect interactions for sparse connectivity patterns . + we consider covariance - stationary variables , denoting the state vectors as : being the model order .let be the mean squared error prediction of on the basis of the vectors .the partially conditioned granger causality index is defined as follows : + where is a set of the variables , in , which are most informative for .we adopt the following approximate strategy for : given the previous , the set is obtained adding the variable with greatest information gain .this is repeated until variables are selected .a method for establishing a ground truth for fmri data has not reached a general consensus . recentlya benchmark dataset , netsim has attracted a lot of attention .previous studies have shown that lag - based methods perform very poorly on these datasets ; it is anyway worthy to mention that these data are simulated under the dcm framework , contain no reciprocal connections and only gaussian noise , limiting their universality as ground truth .here we analyzed the largest of these datasets , consisting of 50 nodes .after deconvolution the sensitivity improved significantly , increasing from 20 to 30 .also the specificity improved from 88 to 94 .this does not render gc the method of choice for these data , for which we also have to point out that neural events and noise are not distinguishable , but gives nonetheless an indicative result for the usefulness of deconvolution of the bold signal . in order to investigate the role of repetition time ( tr ) on the deconvolution procedure and on the effective network reconstruction, our analyses were performed on a resting - state fmri dataset which has been publicly released in the 1000 functional connectomes project ] .all participants had no history of neurological and psychiatric disorders and all gave the informed consent approved by local institutional review board . during the scanning participantswere instructed to keep their eyes closed , not to think of anything in particular , and to avoid falling asleep . 
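the partially conditioned granger causality computation described above can be sketched as follows (our own simplified rendering, not a reference implementation). for a given driver, a small set of conditioning series is chosen greedily as the most informative for the driver's past, with residual-variance reduction under a linear model used as a gaussian proxy for the information gain; the causality index is then the usual log-ratio of prediction-error variances with and without the driver.

```python
import numpy as np

def _resid_var(y, X):
    """variance of y after least-squares regression on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.var(y - X @ beta))

def pcgc(data, driver, target, nd=6):
    """partially conditioned Granger causality driver -> target (order-1 sketch).

    data: (T, N) array of N series. nd conditioning series are picked greedily
    as the most informative for the driver's past, with residual-variance
    reduction used as a Gaussian proxy for the information gain."""
    data = data - data.mean(axis=0)
    past, present = data[:-1], data[1:]

    y_drv = past[:, driver]
    candidates = [k for k in range(data.shape[1]) if k not in (driver, target)]
    chosen = []
    for _ in range(min(nd, len(candidates))):
        scores = [_resid_var(y_drv, past[:, chosen + [k]]) for k in candidates]
        best = candidates[int(np.argmin(scores))]   # largest variance reduction
        chosen.append(best)
        candidates.remove(best)

    y = present[:, target]
    v_restricted = _resid_var(y, past[:, [target] + chosen])           # without driver
    v_full = _resid_var(y, past[:, [target] + chosen + [driver]])      # with driver
    return np.log(v_restricted / v_full)
```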
+two data sets with different tr ( tr=1.4s and tr=0.645s ) were acquired on siemens 3 t trio tim scanners using developed multiplexed echo planar imaging .as specified in detail below , two resting - state fmri data are included in the protocol - a tr=0.645s ( 3 mm isotropic voxels , 10 minutes ) to provide optimal temporal resolution and tr=1.4s ( 2 mm isotropic voxels , 10 minutes ) to provide optimal spatial resolution .the third data set , acquired on a 4 t scanner , contains standard resting - state fmri acquisitions with a longer tr ( tr=3 , 4 mm isotropic voxels , 5 minutes ) . for more detail on subject and data information, please see website .preprocessing of resting - state images was performed using the statistical parametric mapping software ( spm8 , http://www.fil.ion.ucl.ac.uk/spm ) .the preprocessing included slice - timing correction relative to middle axial slice for the temporal difference in acquisition among different slices , head motion correction , spatial normalization into the montreal neurological institute stereotaxic space , resampling to 3-mm isotropic voxels .8(9 ) subjects were excluded from the dataset with tr=0.645s ( tr=1.4s ) because either translation or rotation exceeded mm or , resulting in 16(tr=0.645s ) and 15(tr=1.4s ) subjects each one scanned in two sessions which were used in the analysis ) .one subject whose data were too noisy was excluded from the tr=3 dataset , resulting in 10 subjects used in the analysis . in order to avoid introducing artificial local spatial correlations between voxels , no spatial smoothing was applied for further analysis , as previously suggested .the functional images were segmented into 90 regions of interest ( roi ) using automated anatomical labeling ( aal ) template as reported in previous studies . for each subject ,the representative time series of each roi was obtained by averaging the fmri time series across all voxels in the roi .several procedures were used to remove possible spurious variances from the data through linear regression .these were i ) six head motion parameters obtained in the realigning step , ii ) signal from a region in cerebrospinal fluid , iii ) signal from a region centered in the white matter , iv ) global signal averaged over the whole brain .the bold time series were deconvolved into neural state signal using the above mentioned approach .the topological properties of the effective connectivity network were defined on the basis of a binary directed graph , consisting of nodes and directed edges : where refers to the directed edge from roi to roi in the graph . indicates the threshold . in a directed graph not necessarily equal to . considering that the graph we focused on is directed ,all topological properties were calculated on incoming and outgoing matrix , respectively .graph theoretical analyses were carried out on the effective connectivity network using the brain connectivity toolbox . as previous studies suggested that the brain networks of each subject normally differ in both the number and weighting of the edges , we applied a matching strategy to characterize the properties of effective connectivity network .both the global and local network efficiencies have a propensity for being higher with greater numbers of edges in the graph . 
modifying the sparsity values ( number of edges ) of the adjacency matrix also altered the graph s structure .as a consequence it was suggested that the graphs to be compared must have ( a ) the same number of nodes and ( b ) the same number of edges .the cost was defined as the ratio of the number of existing edges divided by the maximum possible number of edges in a network .since there is currently no formal consensus regarding selection of cost thresholds , here we selected a range of 0.05 to 0.14 with step = 0.01 for subsequent network analyses .the lower bound was chosen as the one yielding a sparse graph with mean degree ( total number of edges where ) .the upper threshold corresponded to the smallest significant value of granger causality ( f - test with ) across all subjects ) . for effective connectivity network at each cost threshold, we calculated both overall topological properties and nodal characteristics .the overall topological properties included i ) small - worldness ( ) , related to normalized clustering coefficient ( ) and normalized characteristic path length ( ) ; ii ) network efficiency , divided in local efficiency ( ) and global efficiency ( ) .the nodal characteristics included i ) the nodal degree , that quantifies the extent to which a node is relevant to the graph , and ii ) the nodal efficiency , that quantifies the importance of the nodes for the communication within the network .furthermore we calculated the area under the curve ( auc ) across all cost values for the above mentioned network properties .this quantity represents a summarized scalar for topological characterization of brain networks independent of single cost threshold selection .we tested the proposed deconvolution method on resting - state fmri data ; following the procedure summarized in the box , firstly we set a maximum time lag from a given threshold crossing , and obtain an optimal value for this lag , denoted with . the histograms for , reported in fig.[fig2 ] show a maximum around , which is consistent with a previous study according to which the latency delay is in gray matter .it is worth to mention that the lower tr could allow a more accurate estimation of the lag . + .( :without regression of global signal ) [ fig2 ] ] to assess the effect of deconvolution , we compared the shape of voxel based hrf over the whole brain using different trs .we focused on three parameters : response height , time - to - peak , and full - width at half - max ( fwhm ) as potential measures of response magnitude , latency , and duration . using principal component analysis we determined the average intersubject variability of hrf maps .we found that the first component of hrf accounted for (response height ) , (time to peak ) and (fwhm ) of the variance .furthermore , the spatial distribution is very similar to the mean group map .the mean group results are plotted in fig.[fig3 ] . the response height ,time to peak and fwhm of hrfs differ across brain regions , as a consequence of multiple factors including neural activity differences , global magnetic susceptibilities , vascular differences , baseline cerebral blood flow , slice timing differences etc . . 
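the cost-thresholding and auc summary just described can be sketched as follows (numpy assumed; nodal in-degree is used as the example metric and a random matrix stands in for a subject's granger causality matrix).

```python
import numpy as np

def binarize_at_cost(W, cost):
    """keep the strongest directed edges so that the fraction of possible
    edges (self-loops excluded) equals `cost`, then binarize."""
    N = W.shape[0]
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    off_diag = W[~np.eye(N, dtype=bool)]
    n_edges = max(1, int(round(cost * N * (N - 1))))
    thresh = np.sort(off_diag)[::-1][n_edges - 1]
    A = (W >= thresh).astype(int)
    np.fill_diagonal(A, 0)
    return A

def degree_auc(W, costs):
    """area under the curve of mean nodal in-degree across cost thresholds
    (trapezoidal rule), a threshold-independent summary as in the text."""
    means = np.array([binarize_at_cost(W, c).sum(axis=0).mean() for c in costs])
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(costs)))

costs = np.arange(0.05, 0.145, 0.01)        # cost range used in the text
W = np.random.rand(90, 90)                  # stand-in for a 90-ROI causality matrix
print(degree_auc(W, costs))
```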
these spatial patterns are remarkably similar across subjects and trs , reflecting the robustness of the proposed approach . [ fig3 : mean group maps of the hrf parameters ; ( ) : without regression of the global signal . ] as another indicator of the stability of the proposed joint deconvolution and pcgc approach , we tested the variance of the causality matrix across all subjects . we calculated the variance of the granger causality matrix obtained from both the bold and the deconvolved bold - level signal . firstly , we converted each matrix to z - scores , then we calculated the variance of each matrix element , finally summing all these values into an overall variance index . the variance of the granger causality matrix obtained from the deconvolved signal is much lower than that of the bold - level matrix for all tr values ( fig.[fig6 ] ) . also , the pcgc method kept the variance lower than the fully conditioned gc method . this result was confirmed by testing a network at another scale using 1024 nodes ( fig.[fig6 ] ; the native aal segmentation was parcellated into 1024 micro regions of interest of approximately identical size across both hemispheres ; in this case we could not test fully conditioned gc due to the small number of samples ) . as shown in previous studies , several sources of spurious variance should be removed by regression : motion artifacts , white matter and ventricular time courses . still , the effects of regression against the global signal , calculated by averaging across all voxels within a whole - brain mask , are debated . in order to evaluate this effect on our data , we calculated the spatial correlation between the group mean images of the hrf parameters ( response height , time to peak , fwhm ) with and without regression of the global signal in the preprocessing step , obtaining high pearson correlations between them : r = 0.97 ( response height ) , 0.90 ( time to peak ) , 0.88 ( fwhm ) . we can thus conclude that regression against the global signal preserved the spatial distribution . when trying to reconstruct effective connectivity networks , we are faced with the problem of coping with a large number of variables , for which the application of multivariate granger causality may be questionable or even unfeasible , whilst bivariate granger causality would also detect indirect interactions . conditioning on a large number of variables requires a high number of samples in order to get reliable results . reducing the number of variables that one has to condition over would thus provide better results for small data sets . in the general formulation of granger causality , one has no way to choose this reduced set of variables ; on the other hand , in the framework of information theory , it is possible to identify the most informative variables one by one .+ the optimal model order ( the order of the autoregressive model in granger causality , the embedding dimension in transfer entropy ) for the deconvolved bold and bold signals was determined by leave - one - out cross - validation , and was found to be 3 for tr=0.645s , 2 for tr=1.4s and 1 for tr=3s . under the gaussian assumption , we constructed the effective connectivity network using the pcgc method . we first have to determine the number of variables upon which to condition . to do this we look at how much uncertainty is eliminated by adding an extra variable , letting the number of conditioning variables included vary from 1 to 20 . this uncertainty can be expressed in terms of the information gained by adding an extra variable , as sketched below .
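a hedged sketch of this selection step is given here : under the gaussian assumption , the information shared by a target channel and a set of candidate conditioning variables can be written in terms of covariance log - determinants , and the most informative variables can then be added greedily one at a time . the function and variable names are ours , and the exact criterion used in the pcgc implementation of the study may differ in detail .

```python
import numpy as np

def gaussian_mi(cov, target, chosen):
    """Mutual information I(target; chosen) for jointly Gaussian variables,
    computed from the covariance matrix via log-determinants."""
    it, ic = [target], list(chosen)
    s_t  = cov[np.ix_(it, it)]
    s_c  = cov[np.ix_(ic, ic)]
    s_tc = cov[np.ix_(it + ic, it + ic)]
    return 0.5 * (np.linalg.slogdet(s_t)[1]
                  + np.linalg.slogdet(s_c)[1]
                  - np.linalg.slogdet(s_tc)[1])

def greedy_informative(cov, target, n_max=20):
    """Greedily add the variable giving the largest information gain;
    return the selected variables and the information-gain curve."""
    candidates = [i for i in range(cov.shape[0]) if i != target]
    chosen, gains = [], []
    for _ in range(n_max):
        best = max(candidates,
                   key=lambda c: gaussian_mi(cov, target, chosen + [c]))
        chosen.append(best)
        candidates.remove(best)
        gains.append(gaussian_mi(cov, target, chosen))
    return chosen, np.array(gains)

# example: covariance estimated from 90 ROI time series (rows = time points)
rng = np.random.default_rng(1)
data = rng.standard_normal((300, 90))
cov = np.cov(data, rowvar=False)
sel, gains = greedy_informative(cov, target=0, n_max=20)
# the increments gains[k] - gains[k-1] decrease with k; a knee in this
# curve suggests how many conditioning variables are worth keeping
```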
in fig.[fig4 ] , we plot the information gain as a function of ; as expected , both this quantity and its increment decrease monotonically with . [ fig4 : the information gain , when the -th variable is included , plotted versus ; the information gain is averaged over all the variables . ] we can observe that the knee of the curves occurs when six variables are considered . this happens also when we consider different prior brain templates with 17 or 160 nodes ( results not reported here ) . this could be connected to the fact that the average number of modules which explain the equal - time correlations of the resting brain is close to six ; therefore picking one variable from each module is sufficient to obtain most of the information , about a given channel , that can be obtained from the remaining channels , and this independently of the number of nodes . the effect of deconvolution on the information gain is to qualitatively raise the curves for tr=0.645s and to lower them for tr=1.4s . this trend ( not statistically significant ) might be the result of two competing effects , namely that deconvolution may remove spurious correlations and/or restore genuine correlations obscured by noise .+ synthesizing the knee of the curves , sensitivity and specificity , we consider as the most appropriate number of most informative variables to include in the conditioning procedure .+ the global topological properties of the brain ecn at the deconvolved bold and bold levels rely on the choice of thresholds . we used multiple cost thresholds and the auc to evaluate the stability of the topological organization ( table 1 ) . a higher number of differences between the two networks was found with the ( relatively ) longer tr ( tr=1.4s ) . specifically , the auc of small - worldness ( ) , normalized clustering coefficient ( ) , clustering coefficient ( ) and local efficiency ( ) displayed the most significant differences , similar to what emerged with tr=0.645s . for the data set with the shorter tr we found significant differences in the characteristic path length and global efficiency of the outgoing network , whereas the most relevant differences were found for the incoming network with the longer tr . [ table 1 : comparison of auc between deconvolved bold and bold ; y : significant difference , fdr corrected ; n : otherwise . ]+ comparing the two networks on nodal degree , nodal global efficiency and nodal local efficiency revealed modifications between the deconvolved bold and bold levels ( fig.[fig5 ] ) . the patterns of nodal degree modifications resembled those of nodal global efficiency in the incoming network in both the tr=0.645s and tr=1.4s fmri data sets . in addition , more brain regions showed modified nodal degree and ( global / local ) efficiency in the tr=0.645s than in the tr=1.4s data . [ fig5 : fdr - corrected nodal differences ; blue indicates negative values , red positive values ; the point size is proportional to the absolute z value . ] in this study we proposed a novel methodology to achieve deconvolution of resting - state data using spontaneous pseudo events , and to apply partially conditioned granger causality to the analysis of fmri data .
in our opinion this joint approach is the most convenient way to infer effective connectivity with granger causality from resting - state fmri data . in the absence of a well - defined ground truth , and in light of the still active and unresolved debate on the usefulness of hrf deconvolution for granger - causality - based connectivity , we limit ourselves to validating the stability of the proposed method and indicating a possible path for the continuation of this debate , quantifying and comparing the overall topological properties of large - scale ecns on deconvolved bold - level versus bold - level signals , and investigating also the effect of different time resolutions ( tr=0.645s and tr=1.4s ) .+ previous discussions on evaluating effective connectivity from fmri data reached the conclusion that it is better to use a state - space model for inferring causality on hidden neural states . a pioneering eeg - fmri study provided the first experimental substantiation of the theoretical possibility of improving interregional coupling estimation from the hidden neural states underlying fmri . though promising , these implications are still limited by the fact that multimodal recording is invasive and not applicable to healthy controls . as a consequence , data - driven methods for accounting for the confounding variability of haemodynamics have been developed . two types of state - space models are available for the estimation of the hrf : generic models ( linear canonical / spline hrf ) and biophysically informed models ( dcm nonlinear hrf ) . generic models are widely applicable but lack specific biophysical constraints , while biophysically informed models are constrained by the hypothesis itself . a recently proposed , biophysically informed blind deconvolution approach based on state - of - the - art cubature kalman filtering could be a useful tool for resting - state fmri .
in the present study ,however , we use a simpler approach which employs the generic linear canonical hrf for deconvolution .it is worth to point out that the significant differences between bold- and deconvolved bold - level effective connectivity found in complex network measures can not absolutely exclude the misestimation of hrf .furthermore hrf latency effect does not always critically affect the evaluation of mutual influence , so ecns on bold and deconvolved bold level could have important consistencies .+ findings from brain connectivity studies have now demonstrated that the human brain network exhibits robust small - world topological properties , not only in the anatomical connectivity ( reconstructed by diffusion tractography ) and functional connectivity network , but also in effective connectivity network .the current results also suggested that the ecns obtained from bold and deconvolved data , with shorter and longer tr , have prominent small - world attributes , which would thus be confirmed as a general signature of robust organization of complex brain networks .small - worldness indicates indeed an optimal balance between segregated and integrated organization to process the information .for relatively longer tr we found significant differences between bold and deconvolved ecns .although an explanation based on precise neurobiological mechanisms is still not evident , we can suggest that the bold effect results from a more complex sequence of effects linking neuronal activity , vascular changes and mri signal .hemodynamic delay , and hence the correct onset of the events is indeed hard to capture with a long tr .+ in complex networks organization , the normalized clustering coefficient and the clustering coefficient are two key measures .they quantify the extent of local cliquishness or of local efficiency of information transfer of a network , reflecting the local properties of network topologies . for longer tr , we observed significant differences between the two level ecns .thus the short - scale or local - scale network properties are indeed affected by deconvolution . moreover , the normalized characteristic path length and the characteristic path length quantify global efficiency or the capability for parallel information propagation of a network .these two measurements along with global efficiency are mainly associated with long - range connections ensuring effective interactions or rapid transfers of information .it is widely accepted that long - range axonal connectivity being an important indicator of the functional - anatomical organization of the human cortex .this study reported no differences in long - range network organization .+ it is known that resting - state functional connectivity studies using either seed functional connectivity or independent component analysis benefit from higher sampling rates to adequately sample undesirable respiration and cardiac effects , while for event - related fmri , faster sampling could allow for a better characterization of the hemodynamic response .the same applies to gca .the previous simulations showed that accuracy of granger causality depends on volume tr , faster sampling interval increased the detection capacity of gca of fmri data to neural causality . in this paper, we focus on resting - state fmri data with tr=0.645s and 1.4s to maximally escape information loss due to low sampling . 
considering the limitations of the acquisition sequence , conventional fast - tr data acquisition brings a loss of fine spatial resolution .+ other methodological considerations are worth mentioning . the first one concerns data preprocessing . as a general idea , spatial smoothing can reduce the noise and increase the signal - to - noise ratio , therefore improving the accuracy of detecting neural events . here we do not include this step : as we used the aal template , spatial smoothing would blur the boundaries among these regions , which may affect the gc inference . temporal filtering is frequently a necessary step for functional connectivity analysis of resting - state fmri data . in line with previous studies that considered a low model order in gca , we did not perform low - pass filtering .+ secondly , the graph theoretic approach is one of the most powerful and flexible approaches to investigate the functional and structural brain connectome ; still some controversies remain , concerning the definition of network nodes and edges . different node definitions , based on prior anatomic brain templates or node scales , could produce different results . in future works , more brain templates and more node - scale comparisons for the effective connectivity network should be explored . h. chen was supported by the natural science foundation of china ( no . 61125304 and no . 61035006 ) . wu gratefully acknowledges the financial support from the china scholarship council ( 2011607033 ) .
a great improvement to the insight on brain function that we can get from fmri data can come from effective connectivity analysis , in which the flow of information between even remote brain regions is inferred by the parameters of a predictive dynamical model . as opposed to biologically inspired models , some techniques as granger causality ( gc ) are purely data - driven and rely on statistical prediction and temporal precedence . while powerful and widely applicable , this approach could suffer from two main limitations when applied to bold fmri data : confounding effect of hemodynamic response function ( hrf ) and conditioning to a large number of variables in presence of short time series . for task - related fmri , neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs ; for resting - state fmri on the other hand , the absence of explicit inputs makes this task more difficult , unless relying on some specific prior physiological hypothesis . in order to overcome these issues and to allow a more general approach , here we present a simple and novel blind - deconvolution technique for bold - fmri signal . in a recent study it has been proposed that relevant information in resting - state fmri can be obtained by inspecting the discrete events resulting in relatively large amplitude bold signal peaks . following this idea , we consider resting fmri as spontaneous event - related , we individuate point processes corresponding to signal fluctuations with a given signature , extract a region - specific hrf and use it in deconvolution , after following an alignment procedure . coming to the second limitation , a fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting . furthermore , conceptual issues arise in presence of redundancy . we thus apply partial conditioning to a limited subset of variables in the framework of information theory , as recently proposed . mixing these two improvements we compare the differences between bold and deconvolved bold level effective networks and draw some conclusions . bold signal , deconvolution , effective connectivity , granger causality
the united nations initiative `` atoms for peace '' , presented by us president dwight d. eisenhower in december 1953 , was the first step in the peaceful use of nuclear technology . today , many countries of the world develop or begin to create strong nuclear programs . there are more than 440 nuclear power plants ( npp ) operating in 30 countries , and more than 400 ships with nuclear reactors used as propulsion systems . about 300 research reactors operate in 50 countries . this kind of reactor produces radioisotopes for medical diagnostics and cancer therapy , as well as neutron sources for research and training . approximately 55 nuclear power plants are under construction and 110 are planned . the republic of belarus is a newcomer country in nuclear energy . there is no experience of nuclear power plant construction , but a large scientific potential in the fields of atomic and nuclear physics , radiochemistry and radiation chemistry . hence the development of a nuclear knowledge portal is one of the first steps in the scenario of nuclear knowledge management . the objectives of the portal are the preservation and enhancement of nuclear knowledge , assisting all the participants of the national nuclear energy system development in the accumulation of the international experience and competence needed for the effective and safe use of nuclear energy , as well as the popularization of nuclear knowledge for schoolchildren and the general public . since the beginning of the xxi century the international atomic energy agency ( iaea ) has paid great attention to nuclear knowledge management ( nkm ) . nuclear knowledge ( nk ) is the basis of appropriate research and development as well as industrial applications of nuclear technologies , and includes energy and non - energy applications . knowledge management ( km ) is an integrated , systematic approach to identifying , acquiring , transforming , developing , disseminating , using , sharing and preserving knowledge relevant to achieving specified objectives . the basic km concept by the iaea is depicted as a pyramid . on its foundation there is data . data is presented as unorganized and unprocessed facts , a set of discrete facts about events .
over datathere is the information as aggregation of data that makes decision making easier .knowledge is the highest level of information .there is wisdom and enlightenment on the top of km pyramid .approximate percentage of subject area of nuclear knowledge by the iaea is as following : * nuclear physics 11% , * nuclear materials 9% , * engineering and instrumentation 9% , * elementary particle physics 16% , * atomic , molecular and condensed matter physics 10% , * life sciences 18% , * chemistry 4% , * nuclear power and safety 6% , * nuclear fuel cycle and radioactive waste 3% , * fusion research and technology 7% , * environmental and earth sciences 3% , * isotopes 1% , * non - nuclear energy 1% , * economic , legal and social fields 2% .the strategy of the iaea in nkm is as following : it is extremely important that the educational process involves the enterprises of nuclear industry .great attention is paid to the development of national , regional and international educational networks and portals .the iaea developed detailed recommendations for the creation of portals with the formulation of purposes and principles of their design .taxonomy is the main km concept .taxonomy ( from greek _ taxis _ meaning arrangement or division and _ nomos _ meaning law ) is a hierarchical structure in which a body of information or knowledge is categorized , allowing an understanding of its various parts relations with each other .taxonomies are used to organize information in systems , thereby helping users to find it .the stages of taxonomy developing are : * determination of taxonomy requirements ; * identification of its concepts ( where is the content and what do the users think ) ; * developing draft taxonomy ; * its review by users ; * refining taxonomy ; * adaptation of taxonomy to the content ; * taxonomy management and maintenance .each developed country , forming its own nuclear industry , should independently create , establish and maintain the nuclear knowledge portal , integrated into the global nuclear knowledge management industry .such portals will allow to manage information resources , knowledge and competencies of the nuclear industry of belarus , as well as to preserve , maintain and develop the knowledge at the level that provides a safe , sustainable and efficient development of belarusian nuclear industry . nowadays in belarusthere are several websites of selected organizations and institutions that are not related to the united portal , providing separate information on the subject far from completeness .creating of a full - fledged portal of nuclear knowledge is the multistage process . 
as the first step it is proposed to create educational and research portal of nuclear knowledge .it will not be a portal of npp , which should be developed separately .prospective participants of educational and research portal of nuclear knowledge are : ministry of education of the republic of belarus , belarusian state university ( bsu ) , research institute for nuclear problems of bsu , universities , training specialists for nuclear power plant , department for nuclear and radiation safety of the ministry for emergency situations of the republic of belarus ( gosatomnadzor ) , the joint institute for power and nuclear research sosny .all works should be executed by the monitor of the iaea .the first proposals for the development of educational and research portals of nuclear knowledge , which in the long term could be developed into a full - fledged national portal , are published in .the portal will be developed on the basis of iaea experience and methodological support .the development and support of the portal requires permanent governmental funding . for its developmentit is necessary to build infrastructure and to have the availability of a critical mass of basic science to support practical applications .the portal is a system that integrates all available ( in the country and abroad ) openly accessible information resources ( applications , databases , analytical systems , etc . ) , which allow the developers and users to interact with each other .the portal should provide users with secure access to information and virtual channels of communication , e.g. they can work together on documents from geographically spaced locations ; access to all information resources of the portal through a single web - based mode with a strong collaborative personalization ( right of access to certain resources : data , services , applications , documents ) .the mission of the portal consists in formation of favorable information , socio - cultural , business and educational environment for the sustainable development of nuclear industry in belarus .portal objectives are the following : acceleration of search and access to necessary data and information ; creation of new knowledge ; promotion of participation in research , education and training programs in the nuclear industry .it has to be an integration tool , an access tool for information resources and a communication tool .basic principles of the portal creation are the next : * discussion the requirements of the portal with all stakeholders before development ; * developing a hierarchical taxonomy of the portal ; * constant testing the portal for compliance with technical requirements ; * maintaining transparency of the portal development ; * publishing a description of the portal ; * incorporation of representatives of all interested organizations to the group of developers .interface that provides a mechanism for interaction between applications and the users of the portal is the main element of the portal .interface provides coordination between teams and individuals , with a convenient and quick search and navigation .other elements are the next : electronic document management system to ensure the preparation of the document with the required level of quality , intelligent search , categorization of information ; project management , including project planning , establishing project objectives , project schedule control of resources , planning and allocation of human and financial resources ; e - library ( documents collection , 
knowledge repository ) consisting of various electronic materials , reports , technical documentation , regulatory documents , training materials , etc . ; learning content management system with a system of courses and distance learning and the ability to develop and improve the courses of studies ; forums on the main areas of activity ; news feeds and other applications that are integrated into the portal .thus , portal of nuclear knowledge will be simultaneously 1 ) a vertical portal ( portal - niche ) having a thematic focus and oriented on full coverage of stated themes ; 2 ) public portal open for the general internet public interested in nuclear subjects ; and 3 ) enterprise collaboration portal .its main difference from the usual web site is availability of interactive services ( mail , news , forums , tools for collaborative work and individual users including distance learning tools ) .draft structure of the portal is presented in fig.[fig1 ] .content of portal is divided by subjects and marked by labels .portal content means all information forming portal .main subjects are the next : nuclear physics , nuclear materials , engineering and instrumentation , elementary particle physics , atomic physics , molecular physics , condensed matter physics , life sciences , chemistry , nuclear power and safety , safeguards , isotopes , fusion research and technology , nuclear fuel cycle and radioactive waste , etc .labels can be the following : image , photo , video , audio , archive , news , countries , organizations , etc .it is no need to place by copying all the information .it is enough to make the necessary links to corresponding portals and sites containing this information .distance learning system , which will be available within the on - line mode of portal should contain video lectures and animated lessons ( perhaps , the last ones should be broken into short modules ) , online tutorials , interactive quizzes and other materials developed by the best professors of the country .such systems are actively being developed worldwide last 20 years .distance learning system in the framework of nuclear knowledge portal will enhance the prestige and quality of education in the field of nuclear science and technology .together , the e - library materials , training courses , databases , electronic documents ( photos , videos , etc . 
) and other portal content will be organized in the nk base that contains knowledge in the field of nuclear technology , including nuclear and reactor physics , ionizing radiation , the application of nuclear methods in various fields of science and technology , radiation and radiochemistry , nuclear medicine , etc .it is necessary to establish within the portal open areas , open and restricted areas and restricted areas depending on user access rights .moreover , users with fewer rights should not even see a reference to restricted areas .so , the structure of portal is a 4d matrix with the following layers : content , area , subject , label .the main stages of the work on portal development consist of two parts .the first stage with duration of two years includes the next steps : * identifying the source of the portal funding ; * determining the owner of the portal ; * defining the project team ( responsibility , roles , functions ) ; * determining the structure and the platform of the portal ; * identifying the tools , techniques and sources for data collection / accumulation and storage of information ( life cycle of documents ) ; * determining the necessary hardware and software ; * developing the taxonomy ; * developing a specification for the portal ; * creating a portal prototype based on selected technology ; * starting the collection of information ; * testing the portal in on - line mode .full implementation of the portal on the second stage consists of purchasing computer equipment , installation the software , and experimental implementation of the portal : testing and evaluation of response time and accuracy of data ; checking the portal for the safety and effectiveness ; refining the portal ; developing a guide for users ; supporting the portal and filling it with information .the novelty of the work presented can be formulated as creating the belarusian educational and research portal of nuclear knowledge , taking into account the specific conditions of the republic of belarus on the basis of free software developed by belarusian it specialists : e - lab electronic document management system , .this system will be the basis of the interface , e - library , document management system , project management system , training materials , etc . 
in 2008 ,the computer program `` laboratory information management system e - lab '' received the certificate no .051 of the national intellectual property center of the republic of belarus .it is implemented in the educational process of leading belarusian universities ( bsu , belarusian state technological university , belarusian national technical university ) , introduced in the chemical - toxicological laboratory of the minsk drug treatment clinic .e - lab is on the basis of management of specimens , measurements and passports of fuels and lubricants of belarusian army ( since 2012 ) and belarusian branch of russian company gazpromneft ( since 2013 ) .e - lab is an electronic system of the client - server architecture designed on the basis of free software : debian gnu / linux , web server apache , firebird database server using the application server php .it runs under windows and linux .it gives web based multi - user operation with different rights of access through widely used browsers .e - lab operates reliably without interruption , completely secure from unauthorized access and has a fast response to user requests .the system provides visibility and accessibility of information .archival storage of materials on the site is provided by the close adjustment it to the place of storage with the control of the storage conditions .it is provided a single interface for a wide range of integrated applications .e - lab is the system easily modifiable and adaptable to the conditions of the project .the presence of adaptable to the conditions of the project document management system e - lab based on free software is very important because the lack of the necessary software is a serious problem in the development of web portals .however , it is now apparent that the existing e - lab system must be radically revised and modernized in order to simultaneously ensuring the smooth operation of a large number of users , as well as providing opportunities for e - learning .development of belarusian educational and research web portal of nuclear knowledge will provide quick access to necessary information and create conditions for its exchange , accumulation and integrity of knowledge at the level ensuring a safe , sustainable and efficient development of nuclear energy and industry of the country , as well as the promotion of nuclear knowledge to attract to this area the most able young people and to create a positive image of nuclear science .it is obvious that the portal development has no end especially in the part of its filling by information .it is necessary to accomplish persistent content inventory . in the future , on the base of the proposed portal it may be changing its themes and the development of educational and research portals with distance learning system of the various types .international atomic energy agency gc(47)/res/10 .strengthening of the agency s activities related to nuclear science , technology and applications .part b : nuclear knowledge .vienna : iaea , 2003 . 7p. international atomic energy agency .knowledge management for nuclear industry operating organizations , iaea - tecdoc-1510 .vienna : iaea , 2006 .185 p. knowledge management for nuclear research and development organizations .iaea - tecdoc-1675 .vienna : iaea , 2012 .74 p. status and trends in nuclear education .iaea nuclear energy series , no .ng - t-6.1 .vienna : iaea , 2011 .239 p. 
fast reactor knowledge preservation system : taxonomy and basic requirements .iaea nuclear energy series , no .ng - t-6.3 .vienna : iaea , 2008 .89 p. lobko a.s ., sytova s.n . , charapitsa s.v .educational and scientific nuclear knowledge portal . proc . of the iv congress of physicists belarus .april 24 - 26 , 2013 , minsk .p.419 - 420 .sytova s.n . ,charapitsa s.v ., lobko a.s .ability to use electronic document management system e - lab for the creation of educational and scientific nuclear knowledge portal .congress csist2013 ,november 4 - 7 , 2013 , minsk .p.254 - 259 .dubovskaya i.ya . ,savitskaya t.a ., lobko a.s .et al . creating of creating of educational and research portal of nuclear knowledge .dedicated to the 50th anniversary of mrti - bsuir ( minsk , march 18 - 19 , 2014 ) .p.450 - 451 .basyrov r.i .1c - bitrix corporative portal . improving the efficiency of the company . st .petersburg : piter , 2010 .320 p. charapitsa s.v .et al . electronic management system of accredited testing laboratory e - lab .`` mathematical modelling and analysis '' , june 6 - 9 , 2012 , tallinn .charapitsa s. v. et al. electronic system of quality control and inventory management of fuels and lubricants `` e - lab fuel '' . deposited at the state organization `` belarusian institute of system analysis and information support for scientific and technical sphere '' ( so `` belisa '' ) 26.03.2013 no . d201310 .85 p. klevtsov a.l ., orlov v.y . , trubchaninov s.a . principles of creating a knowledge portal on the safety of nuclear installations .nuclear and radiation safety .p.53 - 57 .
the main objectives and instruments to develop belarusian educational and research web portal of nuclear knowledge are discussed . draft structure of portal is presented . keywords : portal , electronic document management system , nuclear knowledge .
the purpose of this talk is to explain to statisticians how astrophysicists , mostly using fourier transform techniques , go about analyzing x - ray time - series data obtained from accreting compact objects ( neutron stars and black holes ) , and to point out a few problems with the usual approaches .the point will be made , that the conglomerate of statistical methods that is being applied in this branch of high - energy astrophysics , even though most definitely not always rigorous , on the whole serves its purpose well and is providing astronomers with the quantitative answers they require .this talk will aim , however , at a few areas where we run into problems , and where more statistical expertise might help . in thinking about how to solve the problems that i shall outline , it will be important to keep an eye on what the capabilities of the existing methods are , as those capabilities will need to be preserved in whatever new approachone would like to propose .mostly , accreting neutron stars and black holes occur in double star systems known as x - ray binary stars , where a normal star and the compact object are in a close orbit around each other ( fig.[lmxb ] ) .matter flows from the normal star to the compact object by way of a flat , spiraling flow called an accretion disk , and finally accretes onto the compact object .a large amount of energy is released in this process ( typically 10 to 10 / sec ) , and is emitted , mostly in the form of x - rays .the characteristic variability timescale for the x - ray emitting regions is predicted ( and observed ) to be very short ( less than a millisecond ) . by studying the properties of this rapid x - ray variability it is possible to extract a great deal of information about the flow of matter onto the compact object , and , indirectly , about the object itself .see van der klis ( 1995 ) for a recent review of the results of studies of this type . in order to understand the character of the date we are dealing with ,i shall follow the flow of information from star to computer .an x - ray binary star emits x - ray photons at a very large rate ( say , 10 / sec ) . for all practical purposes ,the x - ray photon rate produced by the star can be considered as a continuous function of time .these photons are emitted isotropically , or at least over a solid angle of order 4 . because the x - ray detector onboard the x - ray satellite spans only a very small solid angle as seen from the x - ray star ,only a very small fraction ( say , 10 ) of the photons is detected in the instrument .the time series of photon arrival times ( with the total number of detected photons ) is the information that , ideally , we would like to have available for analysis .however , typically , instrumental limitations ( and the maximum telemetry rate ) prevent the registration and transmission of all these arrival times .therefore , onboard the satellite , the data are _ binned _ into equidistant time bins .the information that is finally telemetered to the ground station consists of a sequence of numbers , where is the total number of time bins , representing the number of photons that was detected during time bin . 
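to fix ideas , here is a small python sketch of the binning step just described , applied to a simulated constant ( purely poissonian ) source ; the count rate , observation length and bin width are illustrative assumptions rather than parameters of any particular instrument .

```python
import numpy as np

# simulate detected photon arrival times from a constant source and bin them
# "on board", as described above; numbers are illustrative only
rng = np.random.default_rng(42)
rate = 1.0e4          # detected photons per second (after geometric dilution)
t_obs = 100.0         # observation length in seconds
n_photons = rng.poisson(rate * t_obs)
arrival_times = np.sort(rng.uniform(0.0, t_obs, n_photons))

dt = 1.0 / 2048.0     # width of one on-board time bin (seconds)
n_bins = int(t_obs / dt)
edges = np.arange(n_bins + 1) * dt
x, _ = np.histogram(arrival_times, bins=edges)   # x_k: photons per time bin

# for a constant source the x_k follow Poisson statistics with mean rate*dt,
# so mean and variance should both be close to rate*dt (about 4.9 here)
print(x.mean(), x.var())
```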
in most cases , and unlike the usual case in astronomy , the time bins are equidistant and contiguous ( no gaps ) .the statistical problem facing us can be summarized as follows : `` _ given , reconstruct as much as possible about _ '' .we shall assume that for the bright x - ray binaries that i am discussing here , the rate of background photons can be considered to be negligible , and that there are no other relevant effects affecting the photon arrival time series than the huge geometrical dilution factor just described .therefore , if the x - ray star does not vary intrinsically , the will to a high degree of precision be uniformly and randomly distributed , so that the follow poisson statistics appropriate to a rate , with a standard deviation equal to .of course these stars _ do _ vary .the variability time scales of interest are , and the detected photon rates 10 / sec , which means that the time bins must be chosen such that typically there are on average only a few ( sometimes ) photons per time bin , and is of order ( sometimes ) .as the intrinsic variability of the star often has a relatively small amplitude , only a few percent of the total flux , it is clear that we are in a low signal - to - noise regime ( with the `` signal '' the intrinsic stellar variability and the `` noise '' the poisson fluctuations ) .fortunately , typical observations span 10 to 10 , so that the number of time bins ( the number of `` measurements '' ) is 10 to 10 , which allows us to recover the signal from the noise .the techniques used for this are described in the next section .the standard approach ( see van der klis 1989 ) is to divide the time series into equal - length segments of time bins each ( so , ideally ) , to calculate the discrete fourier transform of each segment : to convert this into a power spectrum for each segment and then to _ average _ these power spectra ( see below ) .note that in our application , the number of photons detected in the segment . with this power spectral normalization , due to leahy et al .( 1983 ) , it is true that if the are distributed according to the poisson distribution , then the follow the distribution with 2 degrees of freedom , so and .this white noise component with mean level 2 and standard deviation 2 , induced in the power spectrum due to the poisson fluctuations in the time series , is called the `` poisson level '' .it can be considered as `` background '' in the power spectrum , against which we are trying to observe the other power spectral features , which are caused by the intrinsic variability of the x - ray binary .the physical dimension of the thus defined powers is the same as that of the time series : =[a_j]=[x_k]$ ] .often the y - axis of plots of power spectra in this normalization is just labeled `` power '' , which reflects the fact that the physical interpretation of the in terms of properties of the star is inconvenient .for this reason , in recent years another power spectral normalization has become popular where the powers are reported as , with the `` count rate '' , the number of detected photons per second : , where is the duration of a segment .the are dimensionless , and can be interpreted as estimates of the power density near the frequency , where is a function of frequency whose integral gives the fractional root - mean - square amplitude of the variability .this latter quantity is defined as ( this follows directly from parseval s theorem . 
)the fractional rms amplitude due to fluctuations in a given frequency range is given by so , the physical interpretation of is easy : it is the function whose integral gives you the square of the fractional rms amplitude of the variability in the original time series .the physical unit used for is ( rms / mean)/hz , where `` rms '' and `` mean '' both refer to the time series ; `` rms / mean '' is just the dimensionless quantity .the averaging of the power spectra mentioned above is usually performed both by averaging individual power spectra ( from different segments ) _ together _ ( averaging the s with the same from the different segments ) and by averaging powers at adjacent frequencies ( to , say ) .the main purpose of this is of course to decrease the standard deviation of the power estimates , which in the raw spectra is equal to the mean power .the reason to calculate many power spectra of segments of the data rather than one very large power spectrum of the whole data set , apart from computational difficulties with this approach , is that in this way it is possible to study the variations in the power spectrum as a function of time .the final step in the analysis is to fit various functional shapes to the power spectra using the method of minimization that is also used in x - ray spectroscopy ( the levenberg - marquardt method described in press et al .1992 , chapter 15 ) .because many individual power estimates have been averaged in the analysis process , the central limit theorem ensures that the uncertainties of the final power estimates are approximately normally distributed , as required for this method to work well .a problem is what `` uncertainties '' to assign to these power estimates .usually is assumed , where is the number of individual powers averaged to obtain ( stands for the `` width '' , the number of adjacent powers averaged ; for the number of averaged power spectra ) .this is approximately correct , and was experimentally verified ( fig.[cesme ] ) , in the case that the dynamic range of the power spectrum is dominated by the intrinsic differences in the mean amplitude of the star s variability as a function of frequency rather than by the stochastic fluctuations in power .if instead the stochastic variations dominate then this procedure for estimating the uncertainties can lead to severe underestimation of the power , as accidentally low powers will get high weights in the fitting procedure and vice versa . a solution to this problem that is sometimes adoptedis to estimate the uncertainty in as , where is the fit function and the frequency corresponding to .the method described in the previous section works .it allows astronomers to quickly characterize the variability properties of large amounts of data , to study the changes in the properties of the variability as a function of time and other source characteristics , and to measure amplitudes and characteristic time scales of the variability .the method straightforwardly extends to simultaneous multiple time series , for example time series obtained in different photon energy bands . 
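the single - series procedure described above can be condensed into a short numpy sketch : it computes segment - averaged leahy - normalized powers , converts them to the ( rms / mean ) squared per hz normalization by dividing by the mean count rate , and estimates the fractional rms in a band after subtracting the poisson level . rebinning over adjacent frequencies and the error estimates are omitted for brevity , and the variable names are ours .

```python
import numpy as np

def leahy_power_spectrum(x, m, dt):
    """Segment-averaged Leahy-normalized power spectrum of binned counts.

    x  : 1-d array of counts per time bin
    m  : number of bins per segment
    dt : width of one time bin in seconds
    """
    n_seg = len(x) // m
    segs = np.asarray(x[:n_seg * m], dtype=float).reshape(n_seg, m)
    freqs = np.fft.rfftfreq(m, d=dt)[1:]            # drop the zero frequency
    powers = []
    for seg in segs:
        n_ph = seg.sum()                            # photons in this segment
        a = np.fft.rfft(seg)[1:]                    # discrete Fourier transform
        powers.append(2.0 * np.abs(a) ** 2 / n_ph)  # Leahy normalization
    return freqs, np.mean(powers, axis=0)           # average over segments

# toy light curve: constant 10^4 c/s source, 100 s, 2048 bins per second
rng = np.random.default_rng(0)
dt, t_obs, rate = 1.0 / 2048.0, 100.0, 1.0e4
x = rng.poisson(rate * dt, size=int(t_obs / dt))

freqs, p_leahy = leahy_power_spectrum(x, m=4096, dt=dt)
# for pure Poisson noise the average power is close to the Poisson level of 2
mean_rate = x.mean() / dt                           # mean count rate (c/s)
p_rms = p_leahy / mean_rate                         # (rms/mean)^2 per Hz
df = freqs[1] - freqs[0]
band = (freqs >= 1.0) & (freqs <= 100.0)
excess = np.clip(p_rms[band] - 2.0 / mean_rate, 0.0, None)
frac_rms = np.sqrt(excess.sum() * df)               # fractional rms, 1-100 Hz
```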
for simultaneous time series, it is possible to calculate different cross - spectra between them , and to look for systematic time delays in the variability as a function of energy .a comprehensive description of the methods used for this can be found in the paper by vaughan et al .( 1994 ) .a very important aspect is the possibility to identify different `` power spectral components '' in the variability .this is done by studying the changes in the shape of the power spectra as a function of time and other source properties , such as brightness or photon energy spectrum .it usually turns out that the simplest way to describe the changes in the power spectrum as a function of time is in terms of the sum of a number of components whose properties ( strength , characteristic frequency ) depend in a smooth , systematic and repeatible way on , for example , brightness .if this is the case , then it is natural to interpret these power spectral components as due to different physical processes ( or different aspects of the same process ) that are all affecting the count rates at the same time , and that have been disentangled from each other in the analysis just described .examples of power spectral components that are distinguished in practice are `` power law noise '' , `` band - limited noise '' and `` quasi - periodic oscillations ( qpo ) '' .all of these are all presumed to be stochastic processes in the time series that cause , respectively , a component in the power spectra that fits a function , one that fits and one that fits a lorentzian . for a component to be calledqpo it is sufficient that the corresponding power spectral component has a shape approximately described by with the peak full - width - at - half - maximum ( fwhm ) less than half its centroid frequency ; of course this definition is arbitrary .figure[bhcps ] shows a number of actually observed power spectra , plus the functions that were fit to them in order to describe them in terms of the power spectral components just mentioned .qpo peaks , band - limited noise and power law noise can all be seen .sometimes the shape of the observed power spectrum is ambiguous with respect to the decomposition into power spectral components as described here .an advantage of the minimization method is that it allows to quantify the degree of this ambiguity by comparing the values of the different possible combinations of fit parameters .one problem is , that in order to reach the lowest frequencies in the first place , it is necessary to choose the length of the time segments relatively large and therefore relatively small .this means that at the lowest frequencies the statement made in section[ana ] , that a large number of individual powers has been averaged and therefore the average power is approximately normally distributed is no longer true .other ( e.g. 
, maximum - likelihood ) fitting procedures are required to take the true distribution of the average powers into account , but such methods are not usually applied .see papadakis and lawrence ( 1993 ) for a discussion of a method to remedy some of these problems .a more serious problem is , that the true power spectrum at these frequencies often seems to be a quite steep power law .the finite time window in those situations leads to so - called `` low - frequency leakage '' ( see deeter 1984 and references therein ) : power shows up at a higher frequency in the power spectrum than where it belongs .one way to see this is by noting that the lowest frequency accessible in the discrete fourier transform is .if there is a lot of variability at lower frequencies than this , then due to these slow variations the time series of an individual time segment will usually have a large overall trend .the fourier transform of this trend produces a power law with index in the power spectrum .another way to describe the effect of low - frequency leakage is by noting that according to the fourier convolution theorem the fourier transform calculated in the finite time window is related to the true fourier transform by a convolution of the true transform with the transform of the window function .as the window function is a boxcar , its fourier transform is the well - known sinc function , with a big central lobe and upper and lower sidelobes that gradually decrease in amplitude . for true power spectra steeper than a power law of index , the contribution to the convolution of the lower side lobes overwhelms that of the central lobe , and the result is a power spectrum that is a power law with index ( e.g. , bracewell 1965 ) .so , for any true power spectrum steeper than , the actually measured power spectrum will have slope of .there are well - known solutions to this well - known problem .the most famous one is data tapering : instead of a boxcar window a tapered window is used , i.e. , a window that makes the data go to zero near its end points more gradually than by a sudden drop to zero .other methods are polynomial detrending ( fitting a polynomial to the data and subtracting it ) and end - matching ( subtracting a linear function that passes through the first and the last data point ) .deeter et al .( 1982 ) and deeter ( 1984 ) have explored a number of non - fourier methods .all these methods work in the sense that to some extent they suppress the side lobes of the response function and therefore they are able to recover power laws steeper than with index ( the value of the power law index where the method breaks down is different in each case ) .however , typically these methods have only been evaluated for the case where the time series is pure power - law noise , and in many cases even only with respect to their effectiveness in recovering the power law index , not even the noise strength .some methods require that the index of the power law is known in advance ! nearly nothing is known about the way in which these methods affect the results of fits to power spectra with complicated shapes such as those described above .these methods may recover the correct power law index for the low - frequency part of the power spectrum , but what will be the effect on the fractional rms values and time scales of all the other components ? 
for this reason , these methods are not usually applied .the high - frequency end of the power spectra , near the nyquist frequency , is of particular interest to astrophysicists , as it is there that we expect to find the signatures of the fastest accessible physical processes going on in the star .a question that is often asked in this context is : `` what is the shortest variability time scale we can detect in the data ? '' .generally , what is observed at high frequency is that the power spectrum more or less gradually slopes down towards the poisson level .the problem is to decide out to which frequency intrinsic power is observed above the poisson background , and _ what this number means_. an , i think , decidedly misleading way to answer the question about the shortest time scale is by determining the _ shortest time interval within which significant variations can be detected_. it is obvious , that for a slowly varying source , by zooming in on some gradual slope in its light curve , this shortest time interval can be made as short as one wishes , just by improving the quality of the data ( by increasing for example ) . yet, most of the work on `` shortest time scales '' seems to aim for measuring rather than .it seems clear that one must also take into account the amplitude of the variations , not just the time within which they occur to do something that is physically useful .following fabian ( 1987 ) one can for example define the variability time scale as where the dot denotes the time derivative .defined this way , is a measure of how steeply changes with , and depends itself on . using a power spectrum, one would measure some average of this quantity by determining the fractional rms amplitude near some high frequency in the way described in section[ana ] , and then write where is a constant of order unity depending on assumptions on exactly how the variations causing the power in the spectrum took place .however , for precise work one has to worry about low - frequency leakage , too .exactly the same problem as described in section[low_f ] can occur here : power from lower frequencies can leak up to higher frequencies due to the sinc response associated with the power estimators . that this problem is serious is apparent from the fact that the observed power spectrum of for example the famous black - hole candidate cygx-1 has an index of for frequencies above 10hz ( see fig.[bhcps ] ) , just the value at which low - frequency leakage begins to worry us .it would be of great interest to have a foolproof way to subtract power that has leaked up from lower frequencies , or even to have a way to make a conservative estimate of ( obtain a lower limit on ) the true high - frequency power .it is not clear to what extent this can be accomplished . obviously , as in any convolution problem, some information has been lost , but how much can be recovered is a problem x - ray astronomers do not know the answer to .i wonder to what extent standard deconvolution procedures might be useful here .i note parenthically that a method that has been applied for determining the shortest variability in a time series by meekins et al .( 1984 ) seems to suffer from the same problem of low - frequency leakage . in this method ,the time series is divided up in very short segments of , for example , bins each . 
in each segment a quantity called `` chi - squared '' is calculated as follows : here is the number of photons detected in bin and is the total number of photons in the segment .one recognizes an `` observed over expected '' variance ratio for an expected poisson distribution .the distribution of this quantity is then compared to that expected if all variability in the time series would be due to poisson fluctuations and if a significant excess is found , this is interpreted as detection of variability on time scales between the length of a segment and the duration of one bin .one would expect low - frequency leakage to be as much of a problem here as in the power spectral method .it is easy to see that if there are variations in the time series on time scales much longer than the segment length , the data points in the segments will usually follow steep trends , causing large values of that are not related to variability on time scales shorter than .indeed , meekins et al . show that is closely related to the fourier power in the segment ,so from the point of view of low - frequency leakage the meekins et al .method is less effective than the standard power spectral approach , as it requires to be quite short and therefore increases the probability of steep trends in the data segments .bracewell , r. , the fourier transform and its applications , mcgraw - hill , 1965 .deeter , j. , astrophys .j. 281 , 482 , 1984 .deeter , j. , and boynton , p.e ., astrophys .j. 261 , 337 , 1982 .deeter , j. , astrophys .j. fabian , a.c ., in proc .`` the physics of accretion onto compact objects '' , lecture notes in physics 266 , 229 , 1987 .leahy , d.a ., darbro , w. , elsner , r.f . ,weisskopf , m.c . ,sutherland , p.g . , kahn , s. , grindlay , j.e ., astrophys .j. 266 , 160 , 1983 .meekins , j.f . , wood , k.s . , hedler , r.l . ,byram , e.t . ,yentis , d.j . , chubb , t.a . , friedman , h. , astrophys .j. 278 , 288 , 1984 .mitsuda , k. , dotani , t , publ .japan , 41 , 557 , 1989 .papadakis , i.e. and lawrence , a. , mon . not .r. astron .261 , 612 , 1993 .press , w.h ., teukolsky , s.a . , vetterling , w.t . , flannery , b.p ., `` numerical recipes in fortran '' , 2nd edition , cambridge univ . press , p.650 , 1992 . van der klis , m. , in `` x - ray binaries '' , w.h.g . lewin et al .( eds . ) , cambridge univ . press , p.252 , 1985 .van der klis , m. , in `` timing neutron stars '' , gelman and van den heuvel ( eds . ) , nato asi c262 , kluwer , p. 27vaughan , b.a . ,van der klis , m. , wood , k.s . ,norris , j.p . , hertz , p. , michelson , p.f . ,van paradijs , j. , lewin , w.h.g . , mitsuda , k. , penninx , w. , astrophys .j. 435 , 362 , 1994 .vikhlinin , a. , churazov , e. , gilfanov , m. , astron .287 , 73 , 1994 .zhang , w. , jahoda , k. , morgan , e.h . ,giles , a.b ., astrophys .j. 449 , 930 , 1995 .
i discuss some practical aspects of the analysis of millisecond time variability x-ray data obtained from accreting neutron stars and black holes. first i give an account of the statistical methods that are at present commonly applied in this field. these are mostly based on fourier techniques. to a large extent these methods work well: they give astronomers the answers they need. then i discuss a number of statistical questions that astronomers don't really know how to solve properly and that statisticians may have ideas about. these questions have to do with the highest and the lowest frequency ranges accessible in the fourier analysis: how do you determine the shortest time scale present in the variability, and how do you measure steep low-frequency noise? the point is stressed that in order for any method that resolves these issues to become popular, it is necessary to retain the capabilities the current methods already have in quantifying the complex, concurrent variability processes characteristic of accreting neutron stars and black holes.
checkpoint-restart is now a mature technology with a variety of robust packages. this work concentrates on the dmtcp (distributed multithreaded checkpointing) package and its sophisticated plugin model that enables process virtualization. this plugin model has been used recently to demonstrate checkpointing of 32,752 mpi processes on a supercomputer at tacc (texas advanced computing center). dmtcp itself is free and open source. the dmtcp publications page lists approximately 50 refereed publications by external groups that have used dmtcp in their work. this work concentrates on the recent advances in the dmtcp programming model that were motivated by work with intel corporation. while intel works with multiple vendors of hardware emulators, this work reflects the three-way collaboration between the dmtcp team, intel, and mentor graphics, a vendor of hardware emulators for eda (electronic design automation). further information specific to eda is contained in . in particular, the ability to save the state of a simulation _including the state of a back-end hardware emulator_ is a key to using checkpoint-restart in eda. for background on how dmtcp is used generally at intel, see . the focus of the ongoing work at intel is best described by their statement of future work:

"within intel it, we will focus on the development and enhancement of the dmtcp technology for use with graphical eda tools, with strong network dependencies. there is also additional engagement with third-party vendors to include native dmtcp support in their tools, as well as engagement with super-computing development teams on enabling dmtcp for the xeon phi family of products."

a hardware emulator may entail a thousand-fold slowdown, as compared to direct execution in silicon. there are two natural use cases of checkpointing in the context of eda.
in both cases, the natural strategy is to run until reaching the region of logic of interest, and then checkpoint. later, one can repeatedly restart and test the logic, without worrying about the long initialization times under a hardware emulator. restarting under dmtcp is extremely fast, especially when the fast-restart flag is used, which takes advantage of mmap() to load pages into memory on demand at runtime (after the initial restart). the two use cases follow: 1. run until reaching the logic to be tested; then repeatedly restart and follow different logic branches; and 2. run until reaching the logic to be tested; then repeatedly restart, inject faults in the emulated (or simulated) silicon model, and run along a pre-determined logic branch to determine the level of fault tolerance for that silicon design. for this work, the second case is of greater interest. this requires running arbitrary code, either immediately at the point of restart by injecting faults in the logic design, or by interposing on later logic functions of the simulator/emulator so as to inject transient faults. the first use case above has been extensively studied using dmtcp in domains as varied as architecture simulation, formal verification of embedded control systems, network simulation, and software model checking. while the two use cases are closely related, this work highlights the second use case, by including the possibility of interposing at runtime. section [sec:processvirtualization] presents the tools for such interposition, including the creation of global barriers at an arbitrary point in the program. section [sec:casestudies] presents three particular extensions of checkpointing that were added to the dmtcp plugin model, specifically motivated by the concerns observed in our general collaboration on eda. the dmtcp plugin model is critical in this latter application. one must stop a computation at a pre-defined location in the simulation, save additional state information (such as the state of a hardware emulator being used), and then inject additional code (such as fault injection) at restart time. a contribution of the dmtcp plugin model is the ability to virtualize multiple aspects of the computation. these include: pathnames (for example, the subdirectory corresponding to the current "run slot" of the emulator); environment variables (for example, modification of the display environment variable, or other environment variables intrinsic to the running of the simulation); interposition of the simulation by a third-party plugin (for example, for purposes of measuring timings since restart at multiple levels of granularity, or programmatically creating additional checkpoints for analysis of interesting states leading to logic errors); and third-party programmable barriers across all processes (enabling the acceleration of simulations through the use of parallel processes and even distributed processes within a single computation). the dmtcp plugin model is an example of _process virtualization_: virtualization of external abstractions from within a process. it is argued here that this plugin model sets dmtcp apart from other checkpointing approaches. to this end, a brief survey of existing checkpointing approaches and process virtualization is provided at the end.
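to make the second use case concrete, the following sketch shows one way a fault-injection layer could interpose on a simulator function at runtime. it is only an illustration under assumed names: `emu_read_signal()` stands for whatever entry point of the simulator/emulator one wishes to corrupt, and the environment variable `FAULT_MASK` is a hypothetical knob that a restart script could set; neither is part of dmtcp or of any vendor api.

....
/* fault_inject.c -- illustrative sketch only.  emu_read_signal() is a
 * hypothetical simulator entry point and FAULT_MASK a hypothetical
 * environment variable; both are assumptions, not a real vendor api.
 * build as a shared library and preload it (or package it as a plugin)
 * so that it sits above the simulator in the library search order. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdint.h>
#include <stdlib.h>

uint64_t emu_read_signal(int signal_id) {
  static uint64_t (*real_read)(int) = NULL;
  if (real_read == NULL) {
    /* forward to the next definition in the library search order */
    real_read = (uint64_t (*)(int)) dlsym(RTLD_NEXT, "emu_read_signal");
  }
  uint64_t value = real_read(signal_id);

  /* FAULT_MASK is set, for example, by a restart script, so that each
   * restart of the same checkpoint exercises a different transient fault */
  const char *mask_str = getenv("FAULT_MASK");
  if (mask_str != NULL) {
    value ^= strtoull(mask_str, NULL, 0);   /* inject a bit-flip fault */
  }
  return value;
}
....

because environment variables are among the abstractions that dmtcp virtualizes, a different mask can in principle be supplied on every restart of the same checkpoint image without touching the simulator itself.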
in the rest of this paper, section [sec:processvirtualization] motivates the need for a model of process virtualization with a simple example concerning process ids. it also reviews the dmtcp plugin model. section [sec:casestudies] presents a series of micro-case studies in which dmtcp was extended to support the applications at intel, along with third-party dmtcp plugins developed by mentor graphics for use by intel and other customers. section [sec:relatedwork] then provides a survey of dmtcp and some other related approaches to checkpointing and process virtualization. section [sec:conclusion] presents the conclusions.

application-specific checkpointing and system-level transparent checkpointing are two well-known options for checkpointing. unfortunately, neither one fits the requirements for the proposed use case of simulating fault injection in silicon logic. application-specific checkpointing is error-prone and difficult to maintain. system-level transparent checkpointing generally does not provide enough control at runtime to dynamically adjust the type of fault injection. in particular, it is often necessary to capture control of the target application dynamically at runtime in order to inject faults. here we show how that can be incorporated in a modular dmtcp plugin, rather than incorporated directly into the simulator/emulator. for a more thorough introduction to the dmtcp plugin model, see either or the dmtcp documentation. this section highlights those aspects most likely to assist in adding fault injection through a dmtcp plugin. the primary features of the model of interest for fault injection are: 1. interposition on function/library calls, and their use in virtualization; 2. programmatically defined barriers across all processes on a computer; and 3. programmatically defined choices of when to checkpoint and when to avoid checkpointing.

figure [fig:pid-virt] succinctly describes the philosophy of process virtualization. some invariant (in this case the pid (process id) of a process) may have a different name prior to checkpoint and after restart. a virtualized process will interact only with virtual process ids in the base code. a dmtcp plugin retains a translation table between the virtual pid known to the base code and the real pid known to the kernel. since the base code and the kernel interact primarily through system calls, the dmtcp plugin defines a wrapper function around each such system call. the wrapper function translates between virtual and real pids both for arguments to the system call and for the return value. this is illustrated both in figure [fig:pid-virt] and in the example code of listing [lst:pidwrapperexample].

....
/* wrapper around the kill() system call: translate the virtual pid seen
   by the application into the real pid known to the kernel */
int kill(pid_t pid, int sig) {
  disable_ckpt();
  pid_t real_pid = virt_to_real(pid);
  int ret = real_kill(real_pid, sig);
  enable_ckpt();
  return ret;
}
....

additionally, pids may be passed as part of the proc filesystem, and through other limited means. to solve this, dmtcp implements virtualization of filenames as well as pids, and so the "open" system call will also be interposed upon to detect names such as `/proc/<pid>/maps`. in this way, a collection of wrapper functions can be gathered together within a dmtcp plugin library. such a library implements a virtualization layer. the elf library standard implements a library search order such that symbols are searched in order as follows: executable → lib1 → lib2 → ...
→ libc → kernel, where the symbol is finally replaced by a direct kernel call. this sequence can also be viewed as a sequence of layers, consistent with the common operating system implementation through layers. a dmtcp plugin for pids then presents a virtualization layer in which all higher layers see only virtual pids, and all lower layers see only real pids. this is analogous to an operating system design in which a higher layer sees the disk as a filesystem, and a lower layer sees the disk as a collection of disk blocks. in a similar way, dmtcp provides layers to virtualize filenames, environment variables and myriad other names. in this way, an end user can implement a fault-injection plugin layer such that all code below that layer sees injected faults, while higher layers do not see the injected faults. additionally, such a layer can be instrumented to gather information such as the cumulative number of faults. dmtcp also provides an api for the application or a plugin to either request a checkpoint or to avoid a checkpoint. upon checkpoint, each plugin is notified of a checkpoint barrier, and similarly upon restart. thus, it is feasible to create successive checkpoints available for restart, or available as a snapshot for later forensics on the cause of a later error. optimizations such as forked checkpointing (fork a child and continue in the parent) are available in order to take advantage of the kernel's copy-on-write and make checkpointing/snapshotting extremely fast. checkpointing in a distributed application context requires coordination between multiple processes at different virtualization layers. the use of programmable barriers enables this coordination. in addition to the checkpoint and restart events, each plugin (or virtualization layer) can define its own set of barriers and a callback to execute at each barrier. a centralized dmtcp coordinator forces the application processes to execute the barriers in sequence. further, a hardware resource, for example the interface to a hardware emulator, might be shared among multiple processes that share parent-child relationships. to get a semantically equivalent state on restart, the barriers can be used to elect a leader to save and restore the connection to the hardware emulator. this section describes three specific real-world use cases where dmtcp was extended to support hardware emulation and simulation software. the examples are motivated by our work with various hardware and eda tool vendors. gui-based simulation software presents a unique challenge in checkpointing. the front-end software communicates with an x server via a socket. the x server runs in a privileged mode and outside of checkpoint control. while the connection could be blacklisted for the checkpointing, the application's gui context and state are part of the x server and cannot be checkpointed. the context does not exist at restart time and needs to be restored. dmtcp was extended to transparently support checkpointing of vnc and xpra. these two tools allow x command forwarding to a local x server that can be run under checkpoint control. presents an alternate record-prune-replay based approach using dmtcp to checkpoint gui-based applications. authentication and license services are an important issue for protecting the intellectual property of all the parties.
often, the authentication protocols and software are proprietary and specific to a vendor. further, the licensing services are not run under checkpoint control, which makes it difficult to get a "complete" checkpoint of the software. extensions were added to dmtcp to allow a vendor to hook into the checkpoint and restart events and mark certain connections as "external" to the computation. at checkpoint time, the connections marked external are ignored by dmtcp, and instead the responsibility of restoring these connections is delegated to the vendor-specific extension. the vendor-specific plugin also allows the application to check back with the licensing service at restart time, so as not to violate a licensing agreement that restricts the number of simultaneous "seats". the ability to migrate a process among the available resources is critical for efficient utilization of hardware emulator resources. however, the environment variables, the file paths, and the files that are saved as part of a checkpoint image make such migrations challenging. we added dmtcp extensions (plugins) to virtualize the environment and the file paths. this allows a process to be restarted on a different system by changing the values and the paths. another extension that we added to dmtcp allows a user to explicitly control the checkpointing of files used by their application at the granularity of a single file. hardware emulators communicate with the host software via high-speed interfaces. any in-flight transactions at checkpoint time can result in data being lost and an inconsistent state on restart. thus, it is important to bring the system to a quiescent state and drain the in-flight data on the buses before saving the state. further, checkpointing while the software is in a critical state (like holding a lock on a bus) can lead to complications on restart. to help mitigate such issues, dmtcp was extended to allow fine-grained programmatic control over checkpointing. this enables the hardware/eda tool vendor to tailor the checkpointing to their specific requirements. in particular, it allows a user to invoke checkpointing from within their code, disable checkpointing for critical sections, or delay the resuming of user threads until the system reaches a well-behaved state. the software toolchain used for simulation and emulation is often put together by integrating various third-party components. the components may be closed-source and may use proprietary protocols for interfacing with each other and the system. for example, many software toolchains rely on legacy 32-bit code that's difficult to port to 64 bits, and so support for mixed 32-/64-bit processes was an important consideration. checkpointing while holding locks was another interesting issue. while the locks and their states are a part of the user-space memory (and hence a part of the checkpoint image), an application can also choose to use an error-checking lock that disallows unlocking by a different thread than the one that acquired it. on restart, when new thread ids are assigned by the system, the locks would become invalid and the unlock call would fail. we extended dmtcp by adding wrapper functions for the lock acquisition and release functions to keep track of the state of locks. at restart time, the lock state is patched with the newer thread ids.
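a minimal sketch of the kind of wrapper just described is shown below. it is not dmtcp's actual implementation: the side table, the `resolve_real_fns()` helper and the restart-time patch function are illustrative assumptions, and how the owner field inside an error-checking `pthread_mutex_t` is actually updated is libc-specific and omitted. the sketch only records, for every mutex currently held, which thread acquired it, so that the recorded owner can be re-mapped to the new thread id assigned after restart.

....
/* lock_track.c -- illustrative sketch; records the owner of each held
 * mutex so that ownership can be re-mapped to the new thread ids
 * assigned by the kernel after restart. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>

#define MAX_LOCKS 1024
static struct { pthread_mutex_t *mutex; pid_t owner_tid; } table[MAX_LOCKS];
static int num_held = 0;
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

typedef int (*mutex_fn)(pthread_mutex_t *);
static mutex_fn real_lock, real_unlock;

static void resolve_real_fns(void) {
  if (real_lock == NULL) {            /* resolve the underlying libc symbols */
    real_lock   = (mutex_fn) dlsym(RTLD_NEXT, "pthread_mutex_lock");
    real_unlock = (mutex_fn) dlsym(RTLD_NEXT, "pthread_mutex_unlock");
  }
}

int pthread_mutex_lock(pthread_mutex_t *m) {       /* wrapper */
  resolve_real_fns();
  int ret = real_lock(m);
  if (ret == 0) {
    real_lock(&table_lock);                        /* bypass the wrapper */
    if (num_held < MAX_LOCKS) {
      table[num_held].mutex = m;
      table[num_held].owner_tid = (pid_t) syscall(SYS_gettid);
      num_held++;
    }
    real_unlock(&table_lock);
  }
  return ret;
}

int pthread_mutex_unlock(pthread_mutex_t *m) {     /* wrapper */
  resolve_real_fns();
  real_lock(&table_lock);
  for (int i = 0; i < num_held; i++)
    if (table[i].mutex == m) { table[i] = table[--num_held]; break; }
  real_unlock(&table_lock);
  return real_unlock(m);
}

/* called from the restart path: any mutex still held by the thread that
 * formerly had id old_tid is re-assigned to its new id new_tid, so that a
 * later unlock by that thread is not rejected by an error-checking mutex */
void patch_lock_owners_on_restart(pid_t old_tid, pid_t new_tid) {
  for (int i = 0; i < num_held; i++)
    if (table[i].owner_tid == old_tid) table[i].owner_tid = new_tid;
}
....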
more generally, the problem described above is about the state that's preserved when a resource is allocated at checkpoint time and needs to be deallocated at restart time. while the restarted process inherits its state from the checkpoint image, its environment (thread ids, in the above case) might have changed on restart. an application author with domain expertise can extend the dmtcp checkpointing framework to recognize and virtualize these resources. the state could be a part of the locks that are acquired by a custom thread-safe malloc library, or the guard regions created by a library to guard against buffer overflows, or the libraries that are loaded temporarily. high performance computing (hpc) is the traditional domain in which checkpoint-restart is heavily used. it is used for the sake of fault tolerance during a long computation, for example one lasting days. for a survey of checkpoint-restart implementations in the context of high performance computing, see egwutuoha . in the context of hpc, dmtcp and blcr are the most widely used examples of transparent, system-level checkpoint-restart in parallel computing. (a transparent checkpointing package is one that does not modify the target application.) dmtcp (distributed multithreaded checkpointing) is a purely user-space implementation. in addition to being transparent, it also does not require any kernel modules, and its installation and execution do not require root privilege or the use of special linux capabilities. it achieves its robustness by trying to stay as close to the posix standard as possible in its api with the linux kernel. the first version of dmtcp was later described in . that version did not provide the plugin model for process virtualization. for example, virtualization of network addresses did not exist, nor did a series of other constructs, such as timers, session ids, system v shared memory, and other features. these features were added later due to the requirements of high performance computing. eventually, the current procedure for virtualizing process ids (see section [sec:pidvirtexample]) was developed. to the best of our knowledge, dmtcp is unique in its approach toward process id virtualization. later still, the plugin model was developed, initially for transparent support of the infiniband network fabric. the current extension of that plugin model is described in .
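in outline, a plugin of the kind discussed here amounts to a pair of callbacks executed at the checkpoint and restart barriers. the sketch below is purely schematic: `on_checkpoint()`/`on_restart()` and the `emu_*()` helpers are hypothetical names standing in for the registration mechanism of the dmtcp plugin api (which differs between dmtcp versions) and for vendor-specific emulator code, respectively.

....
/* emu_plugin_sketch.c -- schematic only: on_checkpoint()/on_restart() stand
 * for callbacks invoked by the checkpointing framework at its checkpoint
 * and restart barriers; the emu_*() helpers are hypothetical stand-ins for
 * vendor-specific code, stubbed here so the sketch is self-contained. */
#include <stdio.h>
#include <stdlib.h>

static void emu_drain_and_disconnect(void) { /* vendor-specific in reality */ }
static void emu_reconnect(const char *slot) { (void) slot; /* vendor-specific */ }

void on_checkpoint(void) {
  /* bring the emulator interface to a quiescent state and drop the
   * connection, so that no in-flight bus transaction is captured */
  emu_drain_and_disconnect();
}

void on_restart(void) {
  /* re-establish the connection, possibly to a different emulator; the run
   * slot is read from a (virtualized) environment variable */
  const char *slot = getenv("EMU_RUN_SLOT");
  emu_reconnect(slot ? slot : "default");
  fprintf(stderr, "plugin sketch: reconnected emulator, slot=%s\n",
          slot ? slot : "default");
}
....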
still later, the requirements for robust support of eda in collaboration with intel led to the development of: reduction of runtime overhead; graphic support using xpra; path virtualization (for virtualization of the runtime slot and associated directory of a run using a hardware emulator, including different mount points on the restart computer); virtualization of environment variables, including the x-windows display variable (for similar reasons); robustness across a variety of older and newer linux kernels and gnu libc versions; mixed multi-architecture (32- and 64-bit) processes within a single computation; low-overhead support for malloc-intensive programs; re-connection of a socket to a license server on restart; and whitelisting and blacklisting of special temporary files that may or may not be present on the restart computer.

blcr supports only single-node standalone checkpointing. in particular, it does not support checkpointing of tcp sockets, infiniband connections, open files, or sysv shared memory objects. blcr is often used in hpc clusters, where one has full control over the choice of linux kernel and other systems software. typically, a linux kernel is chosen that is compatible with blcr, a blcr kernel module is installed, and when it is time to checkpoint, it is the responsibility of an mpi checkpoint-restart service to temporarily disconnect the mpi network layer, then checkpoint locally on each node, and finally re-connect the mpi network layer. note that blcr is limited in what features it supports, notably including a lack of support for sockets and system v shared memory. quoting from the blcr user's guide:

"however, certain applications are not supported because they use resources not restored by blcr: applications which use sockets (regardless of address family); applications which use character or block devices (e.g. serial ports or raw partitions); applications which use system v ipc mechanisms including shared memory, semaphores and message queues."
the lack of blcr support for shared memory also prevents its use in openshmem. zapc and cruz represent two other checkpointing approaches that are not currently widely used. zapc and cruz were earlier efforts to support distributed checkpointing, by modifying the kernel to insert hooks into the network stack, using netfilter to translate source and destination addresses. zapc and cruz are no longer in active use. they were designed to virtualize primarily two resources: process ids and ip network addresses. they did not support ssh, infiniband, system v ipc, or posix timers, all of which are commonly used in modern software implementations. criu leverages linux namespaces for transparently checkpointing on a single host (often within a linux container), but lacks support for distributed computations. instead of directly virtualizing the process id, criu relies on extending the kernel api through a much larger proc filesystem and a greatly extended "prctl" system call. for example, the "pr_set_mm" option has 13 additional parameters that can be set (e.g., the beginning and end of text, data, and stack). in another example, criu relies on the "config_checkpoint_restore" kernel configuration to allow a process to directly modify the kernel's choice of pid for the next process to be created. in a general context, there is a danger that the desired pid to be restored may already be occupied by another process, but criu is also often used within a container where this restriction can be avoided. finally, criu has a more specialized plugin facility. some examples are: the ability to save and restore the contents of particular files; and the means to save and restore pointers to external sockets, external links, and mount points that are outside the filesystem namespace of an lxc (linux container). recall that criu does not try to support distributed computations. perhaps it is for this reason that criu did not have the same pressure to develop a broader plugin system capable of supporting generic external devices such as hardware emulators. the term _process virtualization_ was used in .
that work discusses kernel-level support for such process virtualization, while the current work emphasizes an entirely user-space approach within unprivileged processes. related to process virtualization is the concept of a library os, exemplified by the drawbridge library os and exokernel. both process-level virtualization and the library os approach employ a user-space approach (ideally with no modification to the application executable, and no additional privileges required). however, a library os is concerned with providing _extended or modified_ system services that are not natively present in the underlying operating system kernel. process virtualization is concerned with providing a semantically equivalent system object using the _same_ system service. this need arises when restarting from a checkpoint image, or when carrying out a live process migration from one computer to another. the target computer host is assumed to provide the same system services as were available on the original host. although process-level virtualization and a library os both operate in user space without special privileges, the goal of a library os is quite different. a library os modifies or extends the system services provided by the operating system kernel. for example, drawbridge presents a windows 7 personality, so as to run windows 7 applications under newer versions of windows. similarly, the original exokernel operating system provided additional operating system services beyond those of a small underlying operating system kernel, and this was argued to often be more efficient than a larger kernel directly providing those services.

in order to develop a successful plugin model for checkpointing in the context of eda, one required modularity that enabled the dmtcp team, intel, and mentor graphics to each write their own modular code. further, the intel and mentor graphics dmtcp-based plugins and other code were of necessity proprietary. this work has shown how the dmtcp plugin model can be used to provide a flexible model enabling full cooperation, while avoiding the more extreme roadmaps of either fully application-specific code or transparent, system-level checkpointing with no knowledge of the proprietary aspects of the mentor graphics hardware emulator.

j. ansel, k. arya, and g. cooperman, "dmtcp: transparent checkpointing for cluster computations and the desktop," in ieee int. symp. on parallel and distributed processing (ipdps), ieee press, 2009, pp. 1–12.
k. arya, r. garg, a. y. polyakov, and g. cooperman, "design and implementation for checkpointing of distributed resources using process-level virtualization," in ieee int. conf. on cluster computing (cluster'16), ieee press, 2016, pp. 402–412.
j. cao, k. arya, r. garg, s. matott, d. k. panda, h. subramoni, j. vienne, and g. cooperman, "system-level scalable checkpoint-restart for petascale computing," in 22nd ieee int. conf. on parallel and distributed systems (icpads'16), ieee press, 2016; also available as technical report: arxiv preprint arxiv:1607.07995.
g. cooperman, j. evans, a. garg, r. garg, n. a. rosenberg, and k.
suresh, "transparently checkpointing software test benches to improve productivity of soc verification in an emulation environment," 2017, (submitted).
i. ljubuncic, r. giri, a. rozenfeld, and a. goldis, "be kind, rewind: checkpoint & restore capability for improving reliability of large-scale semiconductor design," in 2014 ieee high performance extreme computing conference (hpec-2014), 2014. [online]. available: http://www.ieee-hpec.org/2014/cd/index_htm_files/finalpapers/34.pdf
a. shina, k. ootsu, t. ohkawa, t. yokota, and t. baba, "proposal of incremental software simulation for reduction of evaluation time," in 2012 third international conference on networking and computing (icnc), dec 2012, pp. 311–315.
s. resmerita and w. pree, "verification of embedded control systems by simulation and program execution control," in 2012 american control conference (acc), ieee, 2012, pp. 3581–3586.
w. leungwattanakit, c. artho, m. hagiya, y. tanabe, m. yamamoto, and k. takahashi, "modular software model checking for distributed systems," ieee trans. on software engineering, vol. 40, no. 5, pp. 483–501, may 2014.
j. cao, g. kerr, k. arya, and g. cooperman, "transparent checkpoint-restart over infiniband," in proc. of the 23rd int. symp. on high-performance parallel and distributed computing, acm press, 2014, pp. 13–24.
g. janakiraman, j. santos, d. subhraveti, and y. turner, "cruz: application-transparent distributed checkpoint-restart on standard operating systems," in international conference on dependable systems and networks, 2005 (dsn 2005), proceedings, jun. 2005, pp. 260–269.
g. vallee, r. lottiaux, d. margery, and c. morin, "ghost process: a sound basis to implement process duplication, migration and checkpoint/restart in linux clusters," in proc. of the 4th int. symp. on parallel and distributed computing, ser. ispdc '05, 2005, pp. 97–104.
d. e. porter, s. boyd-wickizer, j. howell, r. olinsky, and g. c. hunt, "rethinking the library os from the top down," in proc. of the sixteenth international conference on architectural support for programming languages and operating systems, asplos xvi, new york, ny, usa: acm, 2011, pp. 291–304.
d. r. engler, m. f. kaashoek, and j. o'toole, jr., "exokernel: an operating system architecture for application-level resource management," in proc. of 15th acm symp. on operating systems principles, ser. sosp '95, acm, 1995, pp. 251–266.
checkpoint - restart is now a mature technology . it allows a user to save and later restore the state of a running process . the new plugin model for the upcoming version 3.0 of dmtcp ( distributed multithreaded checkpointing ) is described here . this plugin model allows a target application to disconnect from the hardware emulator at checkpoint time and then re - connect to a possibly different hardware emulator at the time of restart . the dmtcp plugin model is important in allowing three distinct parties to seamlessly inter - operate . the three parties are : the eda designer , who is concerned with formal verification of a circuit design ; the dmtcp developers , who are concerned with providing transparent checkpointing during the circuit emulation ; and the hardware emulator vendor , who provides a plugin library that responds to checkpoint , restart , and other events . the new plugin model is an example of process - level virtualization : virtualization of external abstractions from within a process . this capability is motivated by scenarios for testing circuit models with the help of a hardware emulator . the plugin model enables a three - way collaboration : allowing a circuit designer and emulator vendor to each contribute separate proprietary plugins while sharing an open source software framework from the dmtcp developers . this provides a more flexible platform , where different fault injection models based on plugins can be designed within the dmtcp checkpointing framework . after initialization , one restarts from a checkpointed state under the control of the desired plugin . this restart saves the time spent in simulating the initialization phase , while enabling fault injection exactly at the region of interest . upon restart , one can inject faults or otherwise modify the remainder of the simulation . the work concludes with a brief survey of the existing approaches to checkpointing and to process - level virtualization .
the bak sneppen (bs) model of biological evolution has been introduced in to study self-organised critical behaviour in populations with natural selection and spatial interactions. in the model, a population is spread out over a circle so that each species has exactly two neighbours, and each site is assigned a numerical parameter between zero and one that describes the _fitness_ of the species at that particular site. the system evolves according to the following discrete-time dynamics: at each time step the species with the smallest fitness parameter becomes extinct and is replaced by a new species whose fitness parameter becomes independent uniformly distributed on the interval [0,1], as do the fitness parameters of its two neighbours. there are only a few rigorous results about the original bs model; in it is proved that the expected fitness at a fixed site in the stationary regime is bounded away from unity uniformly in the size of the population; gives a conditional characterization of the limiting distribution in terms of a set of critical thresholds (see also ). in this paper we suggest a novel approach to determining the stationary fitness distribution in the bs model with a finite number of species. the method consists of translating the evolutionary dynamics into a set of linear recurrence equations for the coefficients of the densities of the finite-time distributions and analysing the asymptotic properties of these recursive equations. we apply the method to derive the asymptotic fitness distribution in the bs model with four species, which is given by ([theorem - asympdist4]) {\boldsymbol{1}}_{[0,1]^4}({\boldsymbol{x}}){\mathrm{d}}^4{\boldsymbol{x}}. it has been suggested that a similar, and potentially easier, analysis can be applied to the anisotropic bak sneppen (abs) model; in this variation of the model the evolutionary dynamics are modified so that the fitness parameter of only one neighbour, say the right one, is updated at each time step together with the fitness parameter of the least fit species. while many qualitative properties are believed to be shared by the original and the anisotropic bs, analytical considerations and numerical calculations indicate that they fall in different universality classes, and thus have different critical exponents. our method can be applied to the abs. in the first nontrivial case of three species the asymptotic fitness distribution follows from computations which are very similar to those leading to [theorem - asympdist4], and is given by {\boldsymbol{1}}_{[0,1]^3}({\boldsymbol{x}}){\mathrm{d}}^3{\boldsymbol{x}}. the complexity of the calculations for larger populations seems to increase at a similar rate as in the bs model, and studying the abs therefore does not, for the purpose of understanding the method we propose in the current paper, offer major advantages over the original model. for this reason we concentrate in the following on the isotropic case. outline of the paper. the paper is organised as follows.
in [ section - qualana ]we present one of several possible , precise definitions of the bs model in terms of a markov chain , and use this representation to derive qualitative properties for arbitrary population sizes .we then turn to the special case of the fourspecies model in [ section - n4 ] , where we derive the full asymptotic fitness distribution .finally , [ section - genrec ] is devoted to making further progress towards a similarly explicit solution for larger populations .in particular , we obtain explicit linear recursions for the coefficients that describe the finitetime distributions in the fivespecies model .[ [ notation ] ] notation + + + + + + + + throughout the paper we denote by the set of positive integers and write for the number of species in the bs model .arithmetic operations in indices pertaining to the number of species are always understood to be performed modulo , so as to give a result between and .the symmetric group on letters is denoted by . for a vector ^n ] and \nu[} ] and \nu[} ] and \nu[,i} ] we extend the notation of outer projections by setting \nu[_{{\boldsymbol{\xi}}}}:[0,1]^n\to[0,1]^n,\quad { \boldsymbol{x}}_{]\nu[_{{\boldsymbol{\xi } } } } = ( \xi_1,\xi_2,\xi_3,x_{\nu+2},\ldots , x_n , x_1,\ldots , x_{\nu-2}).\ ] ] we use the multiindex notation , ^n ] , see , e.g. for a general introduction to this topic . to avoid the trivial case in which all fitness parameters are updated at each time step , we assume that . for each time ,the values of denote the fitnesses of the different species in the population . in order to formalise the heuristic description of the evolutionary processes described in the introduction we first observe that the fitness landscape at time can be determined easily if one knows the fitness values at time and , in particular , which species was the least fit at that time .in fact , if at time the species is the least fit , then the fitness values of the species at the , , and site are independent uniformly distributed at time , while all other fitness values remain unchanged . for this argument to be valid ,it is necessary that has a unique minimal value .in fact the bs model , as we have described it , is not welldefined without a rule for breaking ties .one natural possibility is to randomly select one of the sites with minimal fitness value .we will , however , not have to concern ourselves with this complication because we will assume that the initial fitness distribution at time is absolutely continuous , in which case ties occur with probability zero .further formalising the argument , we can partition the event , \in\mathscr{b}([0,1]^n) ] and } ] , and thus the last probability equals }\right) ] , and denotes the dirac measure located at .this quite explicit expression is enough to prove that the markov chains are ergodic in a strong sense , which is crucial for our approach to determining their stationary distributions . for the definition of uniform convergencewe refer the reader to ( * ? ? 
?* definition ( 16.6 ) ) .[ prop - ergodicity ] for every positive integer , the markov chain is uniformly ergodic .it is straightforward , albeit tedious , to show that the multistep transition kernels , which are defined recursively by ^n}{\mathbb{p}}_{n,{\boldsymbol{x}}}^1\left({\mathrm{d}}^n{\boldsymbol{y}}\right){\mathbb{p}}_{n,{\boldsymbol{y}}}^{m-1}\left({\mathrm{d}}^n{\boldsymbol{\xi}}\right),\quad m{\geqslant}2,\ ] ] dominate , for each , a constant times lebesgue measure restricted to ^n ] .we now proceed to give a qualitative description of the finitetime distributions .[ prop - finitetimedistqual ] for every , the finitetime distributions of the markov chain are absolutely continuous with piecewise polynomial lebesgue densities .more precisely , for every and , there exist finite index sets and polynomials ] and , the expression is a shorthand for .since , for each , the integrand is equal to \nu[}\right) ] , the claim follows . for the special cases we will see later how [ eq - recgnk ] can be translated into explicit recursions for the coefficients . before we do that , however , we compute , for general , the density of the fitness distribution after the first evolution step . for each , the density is given by ^n}({\boldsymbol{x}})\sum_{\nu=1}^n{p(\min{\boldsymbol{x}}_{]\nu[})},\quad p(x ) = x\left(\frac13x^2-x+1\right).\ ] ]the claim follows from [ eq - recgnk ] and the observation that an initial distribution of }^{\otimes n} ] , i.e. in the supremum norm , to , which is given by it follows from [ prop - ergodicity ] that the functions converge uniformly to some limit , and because converges , for each , to by [ lemma - beta ] , this limit must be . using [ prop - rec - alpha4 ]one sees that the sequence satisfies the homogeneous four - term recursion by the theory of homogeneous linear recurrence equations , is given by where are the roots of the characteristic polynomial , and are constants determined by the initial conditions . in the present case , as the proof of [ lemma - beta ] shows , these initial values are given by they can also easily be read off [ table - alpha4 ] , where we have recorded the functions for .the constants are thus the solution of the vandermonde system that is it is then easy to compute as the form of the function with all horner coefficients being equal to is very interesting .we do not have an intuitive explanation for this peculiarity ..values of the coefficients determining the finitetime distributions of the fourspecies bak sneppen model .values of , , emphasised .empty cells indicate zeros . [ cols="^,^,^,^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] we have thus proved the following result .[ theorem - asympdist4 ] let be as in [ prop - limitq4 ] .the stationary distribution of the bak sneppen model with four species is given by {\boldsymbol{1}}_{[0,1]^4}({\boldsymbol{x}}){\mathrm{d}}^4{\boldsymbol{x}}.\ ] ] in particular , the onedimensional marginal of , which is the asymptotic fitness distribution at a fixed site , is absolutely continuous with density }(x).\ ] ] in [ fig - margdens ] we have plotted the densities of the onedimensional marginal distributions of for , together with their limit as given by [ eq - asympmargdens ] .the convergence asserted by [ prop - limitq4 ] is clearly visible . 
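a result like [theorem - asympdist4] can also be checked against a direct simulation of the finite-population dynamics. the sketch below is a minimal monte carlo estimate of the stationary one-site marginal fitness density; it assumes the update rule described in the introduction (the least fit species and its two neighbours on the circle receive fresh independent uniform fitnesses at each step), and the crude rand()-based generator and fixed-size histogram are for brevity only.

....
/* bs_mc.c -- monte carlo sketch of the bak sneppen model on a circle of
 * N species; estimates the stationary one-site marginal fitness density. */
#include <stdio.h>
#include <stdlib.h>

#define N       4          /* number of species */
#define BINS    50         /* histogram bins for the marginal density */
#define BURNIN  10000      /* steps discarded before sampling */
#define STEPS   10000000   /* sampling steps */

static double uniform01(void) { return (rand() + 1.0) / ((double) RAND_MAX + 2.0); }

int main(void) {
  double f[N];
  long hist[BINS] = {0};
  long samples = 0;

  for (int i = 0; i < N; i++) f[i] = uniform01();

  for (long t = 0; t < BURNIN + STEPS; t++) {
    int m = 0;                                   /* locate the least fit species */
    for (int i = 1; i < N; i++) if (f[i] < f[m]) m = i;

    /* replace it and its two neighbours on the circle by fresh uniforms */
    f[m] = uniform01();
    f[(m + 1) % N] = uniform01();
    f[(m + N - 1) % N] = uniform01();

    if (t >= BURNIN) {                           /* accumulate the marginal at site 0 */
      hist[(int) (f[0] * BINS)]++;
      samples++;
    }
  }

  for (int b = 0; b < BINS; b++)                 /* print the estimated density */
    printf("%.3f %.5f\n", (b + 0.5) / BINS, hist[b] * (double) BINS / samples);
  return 0;
}
....

for four species the printed histogram can be compared bin by bin with the marginal density of [eq - asympmargdens].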
for ( dashed ) , together with their limit ( solid ) as given by [ eq - asympmargdens ] ]for general values of , a similarly complete analysis of the asymptotic fitness distribution in the bs model as we presented for in the previous section seems difficult to obtain .here we collect some formulae that serve as the starting point if one wants to carry through the same approach as in [ section - n4 ] . similarly to [ eq - comparecoeff4 ] , it holds for every that \nu[}^{{\boldsymbol{i}}}{\boldsymbol{1}}_{\{0{\leqslant}{\boldsymbol{x}}_{]\nu[,\sigma(1)}{\leqslant}\ldots{\leqslant}{\boldsymbol{x}}_{]\nu[,\sigma(n-3)}\}}\notag\\ = & \sum_{\mu,{\boldsymbol{i}},\sigma}\alpha_{n , k,\sigma,{\boldsymbol{i}}}\int_{[0,1]^3 } { \boldsymbol{1}}_{\left\{\xi_2<\min(\xi_1,\xi_3,{\boldsymbol{x}}_{]\nu[})\right\}}\left[\left({\boldsymbol{x}}_{]\nu[_{{\boldsymbol{\xi}}}}\right)_{]\mu[}\right]^{{\boldsymbol{i}}}{\boldsymbol{1}}_{\left\{0{\leqslant}\left({\boldsymbol{x}}_{]\nu[_{{\boldsymbol{\xi}}}}\right)_{]\mu[,\sigma(1)}{\leqslant}\ldots{\leqslant}\left({\boldsymbol{x}}_{]\nu[_{{\boldsymbol{\xi}}}}\right)_{]\mu[,\sigma(n-3)}{\leqslant}1\right\}}{\mathrm{d}}^3{\boldsymbol{\xi}}.\end{aligned}\ ] ] the integrals that appear on the right hand side of this equation fall in four categories , depending on how many of the three s are hit by the outer projection \mu[} ] and [} ] . with the obvious definitions of and ,the integral then becomes ^ 3 } { \boldsymbol{1}}_{\left\{\xi_2<\min(\xi_1,\xi_3,{\boldsymbol{x}}_{]\nu[})\right\}}{\boldsymbol{1}}_{\left\{{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(n-4)-1\right)}{\leqslant}\xi_1{\leqslant}{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(n-4)+1\right)}\right\}}\xi_1^{i_{n-4}}\xi_2^{i_{n-3}}{\mathrm{d}}^3{\boldsymbol{\xi}}\right]{\boldsymbol{1}}^{(*)}_{\left\{0{\leqslant}{\boldsymbol{y}}_{\sigma(2)}{\leqslant}\ldots{\leqslant}{\boldsymbol{y}}_{\sigma(n-3)}{\leqslant}1\right\ } } , \ ] ] the evaluation of which we leave to the reader .[ [ case-4-all - other - mu . 
] ] case 4 : all other .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in this last case , all three s remain unaffected by the and contribute to making the integral a little more complicated than before .similar to the previous cases one obtains \nu[_{{\boldsymbol{\xi}}}}\right)_{]\mu [ } = \left(x_{\mu+2},\ldots , x_n , x_1,\ldots , x_{\nu-2},\xi_1,\xi_2,\xi_3,x_{\nu+2},\ldots , x_{\mu-2}\right),\ ] ] and , again , the integral vanishes if .consequently , the integral in [ eq - comparecoeffn ] becomes ^ 3 } { \boldsymbol{1}}_{\left\{\xi_2<\min(\xi_1,\xi_3,{\boldsymbol{x}}_{]\nu[})\right\}}{\boldsymbol{1}}_{\left\{{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(j_1)-1\right)}{\leqslant}\xi_1{\leqslant}{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(j_1)+1\right)}\right\}}{\boldsymbol{1}}_{\left\{{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(\nu+1)-1\right)}{\leqslant}\xi_3{\leqslant}{\boldsymbol{y}}_{\sigma\left(\sigma^{-1}(\nu+1)+1\right)}\right\}}\right.\notag\\ & \qquad\qquad\times\left.\xi_1^{i_{j_1}}\xi_2^{i_{j_2}}\xi_3^{i_{j_3}}{\mathrm{d}}^3{\boldsymbol{\xi}}\right]{\boldsymbol{1}}^{(*)}_{\left\{0{\leqslant}{\boldsymbol{y}}_{\sigma(2)}{\leqslant}\ldots{\leqslant}{\boldsymbol{y}}_{\sigma(n-3)}{\leqslant}1\right\}},\end{aligned}\ ] ] where , , , and .according to whether or not and are separated by some in , there are a number of subcases to consider , which , however , do not present any difficulty .closedform expressions of the integrals [ eq - case2 ] , [ eq - case3 ] , and [ eq - case4 ] have been obtained , but are omitted for lack of space and because they do not add much insight at the current stage . by equating equal powers of , [ eq - comparecoeffn ] ,can thus , in principle , be used to derive an explicit recursion for the coefficients . here, we only illustrate this potential by giving the recursion for , leaving the problem of working out the combinatorics of the general case for future research .an easy symmetry argument shows that is equal to so that it is enough to consider the identity permutation .[ prop - rec - alpha5 ] for , the coefficients have the following properties : a. for all and . b. for and , they satisfy the recursion \alpha_{k , i-2,j}+\left[\frac{1}{3}+\frac{1}{i(i-2)}\right]\alpha_{k , i-3,j } , & i{\geqslant}3 .\end{cases}\ ] ] c. for and , they satisfy as well as the recursion \alpha_{k , i-2,0}+\left[\frac{1}{3}+\frac{1}{(i-2)i}\right]\alpha_{k , i-3,0}\\ \quad+\left(1+\frac{2}{i}\right)\sum_{p=0}^{3k+1}\frac{\alpha_{k , i-1,p}}{p+1}-\left(\frac{1}{2}+\frac{2}{i}\right)\sum_{p=0}^{3k+1}\frac{\alpha_{k , i-2,p}}{p+1}\\ \quad-\left(1+\frac{2}{i}\right)\sum_{p=0}^{i-2}\frac{\alpha_{k , i-2-p , p}}{p+1}+\left(\frac{1}{2}+\frac{2}{i}\right)\sum_{p=0}^{i-3}\frac{\alpha_{k , i-3-p , p}}{p+1}\\ \quad+\sum_{p=0}^{i-2}\frac{\alpha_{k , i-2-p , p}}{i - p } -\frac{1}{2 } \sum_{p=0}^{i-3}\frac{\alpha_{k , i-3-p , p}}{i - p } , & i{\geqslant}3. \end{cases}\ ] ] unfortunately , it does not seem possible to find an explicit solution to this recursion .in fact , the algorithm hyper for linear recurrence relations with polynomial coefficients indicates that the last line of [ eq - recn5jgeq1 ] does not have a hypergeometric solution .our results can , however , be used to obtain exact expression for the finitetime fitness distributions in the fivespecies bs model without having to evaluate any integrals .calculations based on [ eq - recn5jgeq1 ] and [ eq - recn5j0 ] give rise to the following conjecture .the coefficients have the following properties : a. 
they form a stair case pattern in the sense that , for , in particular , for .b. they become constant at time , that is there exist numbers such that for all .in fact , we conjecture that , for each , there exist and such that for all .i thank two anonymous referees for carefully reading the manuscript and making helpful suggestions to improve its presentation .i also thank them for pointing out the applicability of our method to the anisotropic bak sneppen model .
we suggest a new method to compute the asymptotic fitness distribution in the bak sneppen model of biological evolution. as applications we derive the full asymptotic distribution in the four-species model, and give an explicit linear recurrence relation for a set of coefficients determining the asymptotic distribution in the five-species model.
below we report the results of an experiment designed to measure the value of using direct lagrangian particle tracking in an elastic turbulence flow in a microfluidic tube. however, our measurements of pair dispersion show no evidence for the predicted exponential growth in time of the mean squared separation. to our great surprise, we have discovered that for a significant fraction of the observation time, the mean relative pair dispersion evolves quadratically to leading order in time, where and . furthermore, our data admits a scaling collapse, providing convincing evidence that these observations are well described by the short-time expansion of the relative pair dispersion, the so-called _ballistic_ regime. we have established an experimental database of trajectories derived from tracers in the flow; see materials and methods. one example of a pair realisation can be found in fig.[fig : intro]_a,b_. a sub-sample of pair separation curves is plotted in fig.[fig : intro]_c_; here and . these curves, together with the rest of the pairs which were found in each bin, were collected and the second moments were calculated to construct the datasets of presented in fig.[fig : normed_pair_dispersion] on a semi-logarithmic scale, where an exponential curve would have appeared linear (the un-normalised data can be found in fig.[fig : not_normed_pair_dispersion]; the sample size data is presented in fig.[fig : sample_size]). however, our data shows no supporting evidence for the predicted exponential growth of the second moment of pair separations. the insets present a zoom-in on the temporal sub-intervals where the full-range plot may seem to contain linear segments. nevertheless, no single slope, to be identified with in eq.[eq : exp_prediction], can be found. moreover, an exponential pair dispersion should extrapolate to the origin on this plot, and this is clearly not the case. to our great surprise we have found that, for a significant fraction of the observation time, the mean relative pair dispersion evolves quadratically in time to leading order; this observation is evident in the insets of fig.[fig : short_time_behaviour_forward], and stands in sharp contradiction to the interpretations of recent experimental results. to better understand the source of this scaling, let us write the taylor expansion around in the expression for the relative pair dispersion; considering the ensemble average over pairs of the same initial separation, we find that the leading order term at short times is indeed quadratic, the so-called _ballistic_ regime. this is a universal property which does not require any assumptions on the character of the flow. a sign of this scaling beneath the dissipative scale has been observed in simulations of isotropic turbulence (fig. 5 of ). yet neither a quantitative analysis of the coefficients in eq.[eq : relative_dispersion_taylor] nor any contrast with the exponential separation prediction was made. other experimental and numerical results are limited to the inertial subrange of turbulence. to test this further we rescale the relative pair dispersion by the pre-factor, the mean initial squared relative velocity.
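for reference, the short-time expansion invoked here as eq.[eq : relative_dispersion_taylor] can be written, in a form consistent with the discussion and in assumed notation (with $\mathbf{R}(t)$ the pair separation vector, $\delta\mathbf{u}_0$ and $\delta\mathbf{a}_0$ the initial relative velocity and acceleration of a pair, and the average conditioned on the initial separation $r_0$), as

% notation assumed: R(t) is the pair separation vector, \delta u_0 and
% \delta a_0 the initial relative velocity and acceleration, and the
% average is conditioned on the initial separation r_0
\begin{equation}
  \big\langle\, |\mathbf{R}(t)-\mathbf{R}(0)|^{2} \,\big\rangle_{r_0}
  = \big\langle |\delta\mathbf{u}_0|^{2} \big\rangle_{r_0}\, t^{2}
  + \big\langle \delta\mathbf{u}_0 \cdot \delta\mathbf{a}_0 \big\rangle_{r_0}\, t^{3}
  + \mathcal{O}(t^{4}).
\end{equation}

the quadratic term is the ballistic contribution, and the odd, cubic term is the leading time-asymmetric correction that is isolated below by subtracting the backwards-in-time dispersion from the forward one.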
unlike the case of inertial turbulence, for elastic turbulence there are no exact results nor scaling arguments to derive the coefficients appearing in eq.[eq : relative_dispersion_taylor]. therefore we extract them from the experimental data; the profile as a function of is presented in fig.[fig : uo2_ro_profile]_a_. indeed we find that our data admits a scaling collapse with no fitting parameters, providing convincing experimental evidence that these observations are well described by the short-time expansion of the relative pair dispersion. the data shown in fig.[fig : short_time_behaviour_forward] seem to deviate significantly from the quadratic scaling only after 2–3 seconds. in order to expose the sub-leading contributions to the initial relative pair dispersion, we subtract the backwards-in-time dynamics from the forward one. this way the time-symmetric terms, even powers of , are eliminated. the result, the time-asymmetric contributions presented in fig.[fig : short_time_behaviour_backwards - forward], shows that indeed initially the next-to-leading order correction follows and that the curves do collapse onto one when rescaled by , the appropriate coefficient in eq.[eq : relative_dispersion_taylor]. the values of were, once again, extracted from the experimental data; see fig.[fig : a.u_ro_profile] for the profile. however, the deviations from this scaling are noticeable earlier than half a second, much earlier than the deviations from the ballistic behaviour discussed above. this hints that the deviation observed in fig.[fig : short_time_behaviour_forward] is in fact due to higher order terms, potentially an indication of a transition to another regime. the fact that the transition takes place at an earlier time for the larger initial separations indicates the potential effects of the vessel size and its geometry, as well as the failure of the linear flow approximation; see fig.[fig : uo2_ro_profile]. in the light of eq.[eq : relative_dispersion_taylor], the ballistic evolution at short times should not be too surprising. and yet, to our knowledge it has not been discussed experimentally, nor confronted with the exponential separation prediction. moreover, the 'memory' of the initial conditions in our data is longer than one may have expected. on the one hand, our observations are consistent with the time scale derived from the ratio of the coefficients in eq.[eq : relative_dispersion_taylor] (see fig.[fig : dt_star_ro_profile]), as deviations from the scaling are expected to be noticeable for . this is also manifested in the rescaling of the data by , presented in fig.[fig : normalised_behaviour_forward], and to be compared with numerical simulations of inertial turbulence (fig. 1 of ). on the other hand, during the time the statistics are dominated by the initial conditions, single realisations are neither monotonic nor predictable (see fig.[fig : intro]); that is to say, our observations are not a trivial result of the mean flow alone, which in itself is inhomogeneous.
moreover , during one second tracers typically travel distances larger than the width of the tube .given the stochastic nature of the dynamics , the spatial structure of the flow and the limited range of the linear flow approximation ( see insets in fig.[fig : uo2_ro_profile ] ) , the taylor expansion in time eq.[eq : relative_dispersion_taylor ] was expected to have a rather restrictive radius of convergence .further analysis is required to clarify whether higher order terms are of importance and to estimate the lagrangian correlation time , which accounts for both the temporal and the spatial structure of the flow .hence , in - depth discussion of the transition away from the ballistic regime in our system is left for future work . nonetheless , the time for which the ballistic approximation holds is longer than a second , a memory of the initial relative velocities which allows a wide prediction horizon .we are unaware of any previous discussion of pair dispersion short time statistics neither in the context of elastic turbulence nor for the broader class of wall - bounded viscous mixing flows .the implications are far - reaching knowing the average velocity difference at a given scale , which is an eulerian property , one can determine the degree of dispersion , a lagrangian property , as was recently discussed in .we have demonstrated its predictive power over a significant time interval , which provides a new paradigm for chaotic flows dominated by dissipation .the exponential growth in eq.[eq : exp_prediction ] relies on two underlying assumptions : the velocity field admits a linear approximation in space throughout the observation time ; and the observation time is much longer than the correlation time of the velocity gradients .these requirements are quite stringent and are not fulfilled by our experiment . to the best of our knowledgethese assumptions have not been met experimentally for tracer particles so far , and yet the exponential growth prediction seems to be the leading paradigm in interpreting experimental results .our estimations for the experimental system presented here indicate these may be possibly relevant for and .examining the experimental parameters reported in recent works we find that the trade - off between short correlation times and a large enough dissipation scale renders them difficult to reach in intense inertial turbulence , as batchelor himself anticipated ; also see .the lesson in this case highlights the limitations of asymptotic analyses for the descriptions of real - world flows for practical purposes the asymptotic dynamics are not necessarily the generic description which should be considered but rather the short - time behaviour , here dictated by the eulerian averages at the initial time .according to our findings the lyapunov exponents picture is not the appropriate one to describe the dynamics of wall - bounded elastic turbulence at scales larger than few percent of the vessel size .this raises questions regarding the mechanism for polymer stretching once their end - to - end distance reaches few microns length .following our work one may reach the conclusion that the batchelor prediction is irrelevant . 
By all means, this is not the case! Batchelor considered the evolution of the total length of a material line, while this work, like the above-mentioned reports, focused on the shortest distance between pairs of tracers starting with a finite initial separation. Therefore the question of which natural or experimental systems admit the assumptions underlying the exponential pair dispersion eq.[eq:exp_prediction] for tracer particles persists.

Methods summary
The work presented here relies on constructing a database of trajectories in an elastic turbulence flow. Elastic turbulence is essentially a low Reynolds number and high Weissenberg number phenomenon. The former means that the inertial non-linearity of the flow is over-damped by the viscous dissipation. The latter estimates how dominant the non-linear coupling of the elastic stresses to the spatial gradients of the velocity field is, compared with the dissipation of these stresses via relaxation. This is the leading consideration in the design of the flow cell. The Lagrangian trajectories are inferred from passive tracers seeded in the fluid. In order to study the dynamics of pairs, the 3d positions of the tracers need to be resolved, even when tracers get close to each other. The requirement of large sample statistics dictates the long duration of the experiment, which lasts over days. The fluctuations due to the chaotic nature of the flow set the temporal resolution at milliseconds. This leads to a data generation rate of about . Hence both the acquisition and the analysis processes are required to be steady and fully automated. The 3d positions of the fluorescent particles are determined using 2d single-camera imaging, by measuring the diffraction rings generated by the out-of-focus particles. This way the particle localisation problem turns into a ring detection problem.

Microfluidic apparatus
The experiments were conducted in a microfluidic device, implemented in polydimethylsiloxane elastomer by soft lithography, consisting of a curvilinear tube with a rectangular cross-section. The depth is measured to be , and the width is approximately (see fig.[fig:tube_geometry]). The geometry consists of a concatenation of 33 co-centric pairs of half circles. The working fluid consists of polyacrylamide (MW = , at a mass fraction of 80 parts per million) in aqueous sugar syrup (1:2 sucrose to d-sorbitol ratio; mass fraction of 78%), seeded with fluorescent particles (1 micron Fluoresbrite YG carboxylate particles, Polysciences Inc.) at a number density of about 50 tracers in the observation volume. The viscosity of the Newtonian solvent, without the polymers, is estimated to be 1100 times larger than the water viscosity at . This leads to a polymer longest relaxation time of , which is the longest time scale characterising the relaxation of elastic stresses in the solution. The flow was driven by gravity. The maximal time-averaged velocity was measured to be roughly 250 µm/s. This results in a Reynolds number and a global Weissenberg number . For the estimation of the Lyapunov exponent, see ( 20 ).
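As a quick sanity check on the quoted dimensionless numbers, one can estimate Re and Wi from the reported parameters. Several of the inputs used below (channel width, fluid density, water viscosity, polymer relaxation time) are placeholder values of a plausible order of magnitude, since they are not all stated explicitly in this copy of the text; the snippet illustrates the estimate, not the authors' exact numbers.

```python
# Rough order-of-magnitude estimate of Re and Wi for the flow cell.
U = 250e-6        # m/s, maximal time-averaged velocity (reported)
d = 100e-6        # m, channel width (assumed)
rho = 1.3e3       # kg/m^3, sugar-syrup density (assumed)
mu = 1100 * 1e-3  # Pa*s, solvent viscosity ~1100x that of water (reported ratio)
lam = 1.0         # s, polymer longest relaxation time (assumed)

Re = rho * U * d / mu   # inertia versus viscous dissipation
Wi = lam * U / d        # elastic stress build-up versus relaxation
print(f"Re ~ {Re:.1e}, Wi ~ {Wi:.1f}")   # expect Re << 1 and Wi of order one or larger
```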
Imaging system
The imaging system consists of an inverted fluorescence microscope (IMT-2, Olympus), mounted with a Plan-Apochromat 20x/0.8 NA objective (Carl Zeiss) and a fluorescence filter cube; a royal-blue LED (LuxeonStar) served for the fluorophore excitation. A CCD (GX1920, Allied Vision Technologies) was mounted via zoom and 0.1x C-mount adapters (Vario-Orthomate 543513 and 543431, Leitz), sampling at , , covering laterally and the full depth of the tube. The camera control was based on a modification of the motmot Python camera interface package, expanded with a home-made plug-in to allow real-time image analysis in RAM, recording only the time-lapse positions of the tracers to the hard drive.

Lagrangian particle tracking
To construct trajectories, the particle localisation procedure introduced in has to be complemented by a linking algorithm. Here we implemented a kinematic model, in which future positions are inferred from the already linked past positions. We used the code accompanying as a starting point. The algorithm was rewritten in Python, generalised to n dimensions, the kinematic model was modified to account for accelerations as well, a memory feature was added to account for the occasional loss of tracers, and it was optimised for better performance. The procedure accounts for the physical process of particles advected by a smooth chaotic flow and for the uncertainties. These arise from the chaotic-in-time nature of the flow ("physical noise") as well as from localisation and past linking errors ("experimental noise"). Finally, natural smoothing cubic splines are applied to smooth out the experimental noise and to estimate the velocities and accelerations. The smoothing parameter is chosen automatically, where Vapnik's measure plays the role of the usual generalised cross-validation, adapted from the octave splines package.
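A minimal sketch of the predictive linking step described above is given below. The constant-acceleration prediction, the single search radius and the memory counter are illustrative simplifications; the actual implementation (an n-dimensional Python rewrite of the cited code) handles candidate assignment, noise models and error handling far more carefully.

```python
import numpy as np

def link_step(track, detections, dt, search_radius):
    """Extend one trajectory by one frame using a kinematic prediction.

    track : dict with 'pos' (list of past 3d positions, most recent last)
            and 'missed' (consecutive frames without a match).
    detections : (m, 3) array of candidate particle positions in the new frame.
    Returns the index of the matched detection, or None (the caller decides,
    from track['missed'], whether to keep coasting or to terminate the track).
    """
    p = np.asarray(track['pos'], dtype=float)
    if len(p) >= 3:        # constant-acceleration prediction
        v = (p[-1] - p[-2]) / dt
        a = (p[-1] - 2 * p[-2] + p[-3]) / dt ** 2
        pred = p[-1] + v * dt + 0.5 * a * dt ** 2
    elif len(p) == 2:      # constant-velocity prediction
        pred = 2 * p[-1] - p[-2]
    else:                  # nothing better than the last known position
        pred = p[-1]

    if len(detections):
        dists = np.linalg.norm(np.asarray(detections) - pred, axis=1)
        best = int(np.argmin(dists))
        if dists[best] <= search_radius:
            track['missed'] = 0
            return best

    track['missed'] += 1   # tracer temporarily lost: rely on the memory feature
    return None
```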
Pairs analysis
Within the trajectories database we have identified pairs of tracers which were found at some instant at a separation distance close to a prescribed initial separation , to within . The initial time for a trajectory was set by the instant at which the separation distance was closest to . This way, each pair separation trajectory can contribute to a given pairs ensemble at most once; see fig.[fig:intro]. The number of pairs considered in each ensemble is plotted in fig.[fig:sample_size] as a function of . Examining the ensemble averages of the relative separation velocity at the initial time, presented in fig.[fig:ul_ro_profile], we do not find an indication that our sampling method introduces a bias towards converging or diverging trajectories, at least for . Such a bias could manifest itself in a non-vanishing relative separation velocity at these scales. We have yet to resolve the reason for the negative values observed at the larger scales. Our data supports the linear flow approximation assumption at small enough scales, as indicated by the ensemble averages of the initial relative separation velocity; see the inset of fig.[fig:uo2_ro_profile]b, where levels off at . The same regime is not reached for the relative velocity, yet the data in the inset of fig.[fig:uo2_ro_profile]a does not rule out this possibility for smaller scales. One note of warning should be made as to the potential limitations of inferring the Eulerian mean squared relative velocity from direct Lagrangian particle trajectories, e.g. using in fig.[fig:uo2_ro_profile] as an estimator for the Eulerian second-order structure function. The tracer number density in our experiment sets the mean separation distance to a value which is larger than the initial separations considered in this work. The analysis of the possible consequences is beyond the scope of this report, yet we have no evidence for a discrepancy in the quantities we have considered here.

Acknowledgements
We thank A. Frishman for helpful and extensive discussions of the theory, O. Hirschberg for useful discussions of the mathematical and statistical analysis, M. Feldman for his insights regarding the implications to biology, and Y. Silberberg for his helpful advice. EA had fruitful discussions with J. Bec, S. Musacchio, D. Vincenzi, E. W. Saw and R. Chetrite, kindly organised by the latter; both authors gained from discussions with V. Lebedev and G. Falkovich. This work is supported by the Lower Saxony Ministry of Science and Culture cooperation (Germany).
the leading paradigm for chaotic flows dominated by dissipation predicts an exponential growth of the mean distance between pairs of fluid elements , in the long run . this is reflected in the analysis of experimental results on tracer particles and the discussions which follow , as reported in recent experimental and numerical publications . to quantitatively validate this prediction , we have conducted a microfluidic experiment generating _ elastic turbulence _ , a flow characterised in the literature as smooth in space and random in time . to our great surprise , we discovered that the pair separation follows a much slower power - law also known as _ ballistic _ a notion overlooked so far for flows of this type . we provide conclusive experimental evidence that this scaling is well - desribed by the same coefficients derived from the short - time dynamics . our finding reinforces the role of the ballistic regime over a significant range in time and space , providing a quantitative estimation for the spreading of particles in mixing microfluidic flows based on the initial velocity snapshot . finally we note that the conditions for the asymptotic exponential pair separation are quite stringent when it comes to tracer particles and are unlikely to be realised in wall - bounded flows . therefore this work raises questions regarding the relevance and applicability of the currently leading paradigm . * keywords . * chaotic flows pair dispersion microfluidic fluid dynamics elastic turbulence [ [ section ] ] when we stir sugar in a cup of coffee we typically drive the liquid in circles using the tea - spoon , yet the flow quickly evolves into a three - dimensional chaotic one , tremendously accelerating the homogeneous distribution of the sweetener throughout the beverage . turbulent flows , typical in our everyday life , are renowned for their efficient mixing and intensification of particle dispersion , having impact on natural processes as well as wide applications in the industry . at their smaller scales the so - called dissipative or kolomogorov scales such flows are characterised by a velocity field which is smooth in space yet fluctuates strongly in time , also known as the _ batchelor regime _ . understanding transport phenomena at small scales is of importance for the study of combustion , contamination and the deformation of polymers by the flow , to name but a few physical examples , as well as for the study of biological processes such as cytoplasmic streaming , communication via chemo - attractants and food searching . moreover , it has immediate technological implications , mainly when it comes to miniature devices . microfluidic systems have become a common tool in research and industry . as a prominent part of lab - on - a - chip apparatuses , they are implemented for micro - chemistry , soft condensed matter , bioanalytics such as pcr , biomedicine and other research and engineering applications . nevertheless , there is limited knowledge of transport phenomena in these devices beyond the scenario of laminar steady flows , which is not the only case of interest . the study of particle dispersion in flows is at the basis of the understanding of transport processes . in the early 1950s , g. k. 
batchelor predicted that the mean length of material lines in turbulent flows would grow exponentially in the course of time , in the long time limit ; this stems from the notion that the line elements it consists of can be considered short enough such that the distance between the ends of an element remain within the dissipative scale throughout the motion . in some recent works the terms _ material line _ and _ material line element _ are used interchangeably to indicate the separation vector between two passive particles in the fluid . and indeed batchelor s prediction has been later reformulated for tracer particles in the form of exponential pair separation ; see for example . to illustrate how this comes about , let us consider a pair of passive tracers separated by the vector , and whose relative velocity is . the evolution of the squared separation distance follows where is the separation velocity , defined by the above relation . the analysis then proceeds by assuming a linear flow approximation the pair separation is considered to be small enough such that the velocity of one tracer is linearly related to that of the other . using this approximation eq.[eq : r2_eom ] reduces to where no longer depends on . for the case of chaotic flows , can be modelled as a random variable , and analyses often focus on the expectation value of such equations . additionally assuming the correlation time of to be very short compared to the observation time , a generalisation of the central limit theorem the multiplicative ergodic theorem of oseledec is applied , resulting in the exponential pair dispersion this is a relation for the time evolution of the second moment of pair separation distances . in this form one can identify with the generalised lyapunov exponent of the second order , which is generically not trivially related to the ordinary ( maximal ) lyapunov exponent ; see , , and others . much of the theoretical and numerical literature discussing pair dispersion in the dissipative sub - range is devoted to the evaluation of in terms of the typical time - scale of the flow , that is ; see ( * ? ? ? * eq.2.9 ) . the value of is still under debate as can be learnt from as well as and references therein . at the time of writing , the exponential pair dispersion is regarded as the leading paradigm for chaotic flows which are spatially smooth , as manifested by the analysis of recent experimental results and the discussions which follow . jullien studied an instance of the batchelor regime flow in 2d turbulence , where the velocity field was inferred experimentally followed by numerical integration of tracers simulated on a computer ; the initial pair separation values were set to distances smaller than the measurements grid . an exponential separation , referred to in that context as lin s law , was reported during an intermediate time interval of between one to twice the value of the estimated , after which a power - law scaling has been observed . salazar and collins estimated from measurements of in 3d turbulence reported by guala et al . ; luthi et al . employed 3d particle tracking velocimetry for direct measurements of in 3d turbulence . the authors did not infer a value for , nevertheless , in the lack of other sources we could try to get an estimation . the data in fig.10(a ) of and fig.1 of imply that saturates after about at a value of approximately , where the kolmogorov time - scale is estimated at 0.26 ; following eq . 2.1 and 2.6 in this would correspond to . based on . 
] this estimate should be taken with grain of salt not only because it is unclear whether these measurements are indeed restricted to the dissipative scales but also as , although related , the quantities and are not the same 32 p.582 . even more recently , ni and xia reported measurements in 3d turbulent thermal convection and inferred reported as concluded from exponential fits to the mean squared pair separation distance ; as presented in ( * ? ? ? * fig . 1 ) , the fits are taken at time intervals of up to one kolmogorov time - scale , a time too short with respect to the underlying assumptions , and thereafter the data grows faster than the evaluated exponentials . we therefore find it safe to conclude that the literature on the subject is lacking conclusive experimental evidence . in this paper we report experimental results of particle pair dispersion where , in contrast to the above mentioned publications , the memory of the initial relative velocity prevails the dynamics , showing no signature of the asymptotic exponential growth . studying pair separation in the dissipative sub - range over long times in intense turbulence poses a technological challenge . the high velocities , typical of high reynolds flows , restrict the length of the obtained trajectories as exemplified by the above mentioned reports and other recent works . this is one of the reasons for which the experimental literature on pair dispersion in smooth chaotic flows is lagging behind the theoretical one . and has yet to reach a conclusive evidence . however , chaotic flows do not necessarily require fast velocities or large vessels . one might expect viscous flows to be regular in microfluidic tubes and the question of mixing in these apparatuses may seem irrelevant due to their low reynolds nature . nevertheless , when a minute amount of long flexible polymers , such as dna and protein filaments , are introduced , the flow may develop a series of elastic instabilities which render it irregular and twisted , a phenomenon termed _ elastic turbulence _ . the study of pair separation dynamics in elastic turbulence , taking place inside a tiny tube , presents technical challenges : the positions of tracers are needed to be resolved over long times and distances , in particular when the tracers get nearby to each other , whereas the flow is chaotic and three - dimensional ; the scales at which the dynamics takes place require the use of a microscope , where 3d imaging is non - trivial ; the flow fluctuations in time dictate a high temporal resolution ; the statistical nature of the problem demands a large sample of trajectories , which in turn requires long acquisition times and reliable automation . to satisfy these requirements a novel method was implemented . the 3d positions of the fluorescent particles are determined from a single camera 2d imaging , by measuring the diffraction rings generated by the out - of - focus particle ; this way the particle localisation problem turns into a ring detection problem , which is addressed accurately and efficiently in ref . . below we report the results of an experiment designed to measure the value of using direct lagrangian particle tracking in an elastic turbulence flow in a microfluidic tube .
facility location problems have been widely studied in the operations research and computer science communities ( see , e.g. , and the survey ) , and have a wide range of applications . in its simplest version , _ uncapacitated facility location _ ( ) , we are given a set of facilities or service - providers with opening costs , and a set of clients that require service , and we want to open some facilities and assign clients to open facilities so as to minimize the sum of the facility - opening and client - assignment costs .an oft - cited prototypical example is that of a company wanting to decide where to locate its warehouses / distribution centers so as to serve its customers in a cost - effective manner .we consider facility - location problems that abstract settings where facilities are mobile and may be relocated to destinations near the clients in order to serve them more efficiently by reducing the client - assignment costs .more precisely , we consider the _ mobile facility location _( ) problem introduced by , which generalizes the classical -median problem ( see below ) .we are given a complete graph with costs on the edges , a set of clients with each client having units of demand , and a set of initial facility locations .we use the term facility to denote the facility whose initial location is .a solution to moves each facility to a final location ( which could be the same as ) , incurring a _ movement cost _ , and assigns each client to a final location , incurring _ assignment cost _ .the total cost of is the sum of all the movement costs and assignment costs .more formally , noting that each client will be assigned to the location nearest to it in , we can express the cost of as where ( for any node ) gives the location in nearest to ( breaking ties arbitrarily ) .we assume throughout that the edge costs form a metric .we use the terms nodes and locations interchangeably .mobile facility location falls into the genre of _ movement problems _ introduced by demaine et al . . in these problems , we are given an initial configuration in a weighted graph specified by placing `` pebbles '' on the nodes and/or edges ; the goal is to move the pebbles so as to obtain a desired final configuration while minimizing the maximum , or total , pebble movement .was introduced by demaine et al . as the movement problem where facility- and client- pebbles are placed respectively at the initial locations of the facilities and clients , and in the final configurationevery client - pebble should be co - located [ [ our - results . ] ] our results .+ + + + + + + + + + + + we give the first _ local - search based _ approximation algorithm for this problem and achieve the best - known approximation guarantee .our main result is a -approximation for this problem for any constant using a simple local - search algorithm .this improves upon the previous best 8-approximation guarantee for due to friggstad and salavatipour , which is based on lp - rounding and is not combinatorial .the local - search algorithm we consider is quite natural and simple .observe that given the final locations of the facilities , we can find the minimum - cost way of moving facilities from their initial locations to the final locations by solving a minimum - cost perfect - matching problem ( and the client assignments are determined by the function defined above ) .thus , we concentrate on determining a good set of final locations . 
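To make these two observations concrete (the cost of a candidate set of final locations decomposes into a minimum-cost perfect matching plus nearest-location assignments, and local search therefore only has to explore swaps of final locations, as described in the next paragraphs), here is a small sketch. It assumes scipy is available and uses a dense distance matrix with linear_sum_assignment for the matching; the swap enumeration shown is the single-swap variant and is meant only to convey the structure of the algorithm, not an efficient or faithful implementation of the authors' procedure.

```python
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

def solution_cost(dist, facilities, clients, demand, final_locs):
    """dist: symmetric metric distance matrix over all nodes (numpy array).
    facilities, clients, final_locs: lists of node indices; demand: dict."""
    # movement cost: min-cost perfect matching of initial to final locations
    move = dist[np.ix_(facilities, final_locs)]
    rows, cols = linear_sum_assignment(move)
    movement = move[rows, cols].sum()
    # assignment cost: every client is served by its nearest final location
    assignment = sum(demand[j] * dist[j, final_locs].min() for j in clients)
    return movement + assignment

def local_search(dist, facilities, clients, demand, nodes, start):
    """Single-swap local search over sets of final locations."""
    current = list(start)
    best = solution_cost(dist, facilities, clients, demand, current)
    improved = True
    while improved:
        improved = False
        for out_loc, in_loc in itertools.product(current, nodes):
            if in_loc in current:
                continue
            cand = [in_loc if s == out_loc else s for s in current]
            c = solution_cost(dist, facilities, clients, demand, cand)
            if c < best:          # in the polynomial-time variant: c < (1 - eps) * best
                current, best, improved = cand, c, True
                break
    return current, best
```

A multi-swap version would enumerate subsets of up to p locations to swap out and in; the key point, reflected in solution_cost, is that the facility-to-destination matching is recomputed from scratch after every move, so a single move may relocate every facility.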
in our local - search algorithm , at each step , we are allowed to swap in and swap out a fixed number ( say ) of locations .clearly , for any fixed , we can find the best local move efficiently ( since the cost of a set of final locations can be computed in polytime ) .note that we do not impose any constraints on how the matching between the initial and final locations may change due to a local move , and a local move might entail moving all facilities .it is important to allow this flexibility , as it is known that the local - search procedure that moves , at each step , a constant number of facilities to chosen destinations has an unbounded approximation ratio .our main contribution is a tight analysis of this local - search algorithm ( section [ 3apx ] ) . our guarantee _ matches _ ( up to terms ) the best - known approximation guarantee for the -median problem .since there is an approximation - preserving reduction from the -median problem to choose arbitrary initial facility locations and give each client a huge demand improvement of our result would imply an analogous improvement for the -median problem .( in this respect , our result is a noteworthy exception to the prevalent state of affairs for various other generalizations of and -median e.g ., the data placement problem , \{matroid- , red - blue- } median , -facility - location where the best approximation ratio for the problem is worse by a noticeable factor ( compared to or -median ) ; is another exception . )furthermore , _ our analysis is tight _ ( up to factors ) because by suitably setting in the reduction of , we can ensure that our local - search algorithm for coincides with the local - search algorithm for -median in which has a tight approximation ratio of 3 . we also consider a weighted generalization of the problem ( section [ extn ] ) , wherein each facility has a weight indicating the cost incurred per - unit distance moved and the cost for moving to is .( this can be used to model , for example , the setting where different facilities move at different speeds . )our analysis is versatile and extends to this weighted generalization to yield the same performance guarantee . for the further generalization of the problem , where the facility - movement costs may be arbitrary and unrelated to the client - assignment costs ( for which a 9-approximation can be obtained via lp - rounding ; see `` related work '' ) , we show that local search based on multiple swaps has a bad approximation ratio ( section [ locgap ] ) .the analysis leading to the approximation ratio of 3 ( as also the simpler analysis in section [ 5apx ] yielding a 5-approximation ) crucially exploits the fact that we may swap multiple locations in a local - search move .it is natural to wonder then if one can prove any performance guarantees for the local - search algorithm where we may only swap in and swap out a single location in a local move .( naturally , the single - swap algorithm is easier to implement and thus may be more practical ) . in section [ oneswap ], we analyze this single - swap algorithm and prove that it also has a constant approximation ratio .[ [ our - techniques . ] ] our techniques .+ + + + + + + + + + + + + + + the analysis of our local - search procedure requires various novel ideas . 
as is common in the analysis of local - search algorithms, we identify a set of test swaps and use local optimality to generate suitable inequalities from these test swaps , which when combined yield the stated performance guarantee .one of the difficulties involved in adapting standard local - search ideas to is the following artifact : in , the cost of `` opening '' a set of locations is the cost of the min - cost perfect matching of to , which , unlike other facility - location problems , is a highly non - additive function of ( and as mentioned above , we need to allow for the matching from to to change in non - local ways ) . in most facility - location problems with opening costsfor which local search is known to work , we may always swap in a facility used by the global optimum ( by possibly swapping out another facility ) and easily bound the resulting change in _ facility cost _ , and the main consideration is to decide how to reassign clients following the swap in a cost - effective way ; in we do not have this flexibility and need to carefully choose how to swap facilities so as to ensure that there is a good matching of the facilities to their new destinations after a swap _ and _ there is a frugal reassignment of clients .this leads us to consider long relocation paths to re - match facilities to their new destinations after a swap , which are of the form , where and are the locations that facility is moved to in the local and global optimum , and , respectively , and is the -location closest to . by considering a swap move involving the start and end locations of such a path ,we can obtain a bound on the movement cost of all facilities where is the start of the path or serves a large number of clients . to account for the remaining facilities , we break up into suitable intervals , each containing a constant number of unaccounted locations which then participate in a multi - location swap .this _ interval - swap _ move does not at first appear to be useful since we can only bound the cost - change due to this move in terms of a significant multiple of ( a portion of ) the cost of the local optimum !one of the novelties of our analysis is to show how we can _ amortize _ the cost of such expensive terms and make their contribution negligible by considering multiple different ways of covering with intervals and averaging the inequalities obtained for these interval swaps .these ideas lead to the proof of an approximation ratio of 5 for the local - search algorithm ( section [ 5apx ] ) . the tighter analysis leading to the 3-approximation guarantee ( section [ 3apx ] ) features another noteworthy idea , namely that of using `` recursion '' ( up to bounded depth ) to identify a suitable collection of test swaps .we consider the tree - like structure created by the paths used in the 5-approximation analysis , and ( loosely speaking ) view this as a recursion tree , using which we spawn off interval - swap moves by exploring this tree to a constant depth . to our knowledge, we do not know of any analysis of a local - search algorithm that employs the idea of recursion to generate the set of test local moves ( used to generate the inequalities that yield the desired performance guarantee ) .we believe that this technique is a notable contribution to the analysis of local - search algorithms that is of independent interest and will find further application . [ [ related - work . ] ] related work .+ + + + + + + + + + + + + as mentioned earlier , was introduced by demaine et al . 
in the context of movement problems .friggstad and salavatipour designed the first approximation algorithm for .they gave an 8-approximation algorithm based on lp rounding by building upon the lp - rounding algorithm of charikar et al . for the -median problem ; this algorithm works only however for the unweighted case .they also observed that there is an approximation - preserving reduction from -median to .we recently learned that halper proposed the same local - search algorithm that we analyze .his work focuses on experimental results and leaves open the question of obtaining theoretical guarantees about the performance of local search .chakrabarty and swamy observed that , even with arbitrary movement costs is a special case of the matroid median problem .thus , the approximation algorithms devised for matroid median independently by and yield an 8-approximation algorithm for with arbitrary movement costs .there is a wealth of literature on approximation algorithms for ( metric ) uncapacitated and capacitated facility location ( and ) , the -median problem , and their variants ; see for a survey on . whereas constant - factor approximation algorithms for and -median can be obtained via a variety of techniques such as lp - rounding , primal - dual methods , local search , all known -approximation algorithms for ( in its full generality ) are based on local search .we now briefly survey the work on local - search algorithms for facility - location problems . starting with the work of ,local - search techniques have been utilized to devise -approximation algorithms for various facility - location problems .korupolu , plaxton , and rajaraman devised -approximation for , and with uniform capacities , and -median ( with a blow - up in ) .charikar and guha , and arya et al . both obtained a -approximation for .the first constant - factor approximation for was obtained by pl , tardos , and wexler , and after some improvements , the current - best approximation ratio now stands at . for the special case of uniform capacities ,the analysis in was refined by , and aggarwal et al . obtain the current - best 3-approximation .arya et al . devised a -approximation algorithm for -median , which was also the first constant - factor approximation algorithm for this problem based on local search .gupta and tangwongsan ( among other results ) simplified the analysis in .we build upon some of their ideas in our analysis .local - search algorithms with constant approximation ratios have also been devised for various variants of the above three canonical problems .mahdian and pl , and svitkina and tardos consider settings where the opening cost of a facility is a function of the set of clients served by it . in , this cost is a non - decreasing function of the number of clients , and in this cost arises from a certain tree defined on the client set .devanur et al . and consider -facility location , which is similar to -median except that facilities also have opening costs .hajiaghayi et al . 
consider a special case of the matroid median problem that they call the red - blue median problem .most recently , considered a problem that they call the -median forest problem , which generalizes -median , and obtained a -approximation algorithm .as mentioned earlier , to compute a solution to , we only need to determine the set of final locations of the facilities , since we can then efficiently compute the best movement of facilities from their initial to final locations , and the client assignments .this motivates the following local - search operation .given a current set of locations , we can move to any other set of locations such that , where is some fixed value .we denote this move by .the local - search algorithm starts with an arbitrary set of final locations . at each iteration, we choose the local - search move that yields the largest reduction in total cost and update our final - location set accordingly ; if no cost - improving move exists , then we terminate .( to obtain polynomial running time , as is standard , we modify the above procedure so that we choose a local - search move only if the cost - reduction is at least . )[ sec:5appx ] we now analyze the above local - search algorithm and show that it is a -approximation algorithm . for notational simplicity , we assume that the local - search algorithm terminates at a local optimum ; the modification to ensure polynomial running time degrades the approximation by at most a -factor ( see also remark [ polyremk ] ) .[ 5apxthm ] let and denote respectively the movement and assignment cost of an optimal solution .the total cost of any local optimum using at most swaps is at most .although this is not the tightest guarantee that we obtain , we present this analysis first since it introduces many of the ideas that we build upon in section [ 3apx ] to prove a tight approximation guarantee of for the local - search algorithm . for notational simplicity , we assume that all are 1 .all our analyses carry over trivially to the case of non - unit ( integer ) demands since we can think of a client having demand as co - located unit - demand clients . [ [ notation - and - preliminaries . ] ] notation and preliminaries .+ + + + + + + + + + + + + + + + + + + + + + + + + + + we use to denote the local optimum , where facility is moved to final location .we use to denote the ( globally ) optimal solution , where again facility is moved to . throughout, we use to index locations in , and to index locations in . recall that , for a node , is the location in nearest to .similarly , we define to be the location in nearest to . for notational similarity with facility location problems , we denote by , and by .( thus , and are the movement costs of in and respectively . ) also , we abbreviate to , and to .thus , and are the assignment costs of in the local and global optimum respectively .( so . )let be the set of clients assigned to the location , and .for a set , we define ; we define for similarly .define .we say that _ captures _ all the locations in .the following basic lemma will be used repeatedly .[ reasgn ] for any client , we have .let .the lemma clearly holds if .otherwise , where the second inequality follows since is the closest location to in . 
to prove the approximation ratio, we will specify a set of local - search moves for the local optimum , and use the fact that none of these moves improve the cost to obtain some inequalities , which will together yield a bound on the cost of the local optimum .we describe these moves by using the following digraph .consider the digraph .we decompose into a collection of node - disjoint ( simple ) paths and cycles as follows .repeatedly , while there is a cycle in our current digraph , we add to , remove all the nodes of and recurse on the remaining digraph . after this step , a node in the remaining digraph , which is acyclic , has : exactly one outgoing arc if ; exactly one incoming and one outgoing arc if ; and exactly one incoming , and at most one outgoing arc if .now we repeatedly choose a node with no incoming arcs , include the maximal path starting at in , remove all nodes of and recurse on the remaining digraph .thus , each triple is on a unique path or cycle in .define to be such that is an arc in ; if has no incoming arc in , then let . we will use and to define our swaps . for a path ,define to be and to be .notice that . foreach , let , , and .note that latexmath:[ z\in{\ensuremath{\mathcal p}} s'_0={\ensuremath{\mathsf{start}}}(z) o'_r={\ensuremath{\mathsf{end}}}(z) z\in{\ensuremath{\mathcal{c}}} ] if .consider each edge . if and , we simply bound by . otherwise ,if , we use the bound which is valid since .if and , then and we bound by .incorporating these bounds in , we obtain the following inequality . for every such that ,we consider the move .we move each facility to if and to otherwise .note that for every facility such that , we have , and so . we reassign all clients in to , and reassign each client in to .the resulting change in assignment cost is at most . if , we reassign all clients in to and bound the resulting change in assignment cost by . therefore , if , we obtain the inequality for every such that , we again consider .we move facilities on and reassign clients in as in case 2 ) .we reassign clients in to if , and to otherwise .the assignment - cost change due to this latter reassignment is at most if , and otherwise , where to obtain the latter inequality we use the fact that since and for all .we obtain the inequality for every such that , we pick some arbitrary and consider . we analyze this by viewing this as a combination of : ( i ) a shift along , ( ii ) moving from to , where , and ( iii ) a shift along the appropriate subpath of . the resulting inequality we obtainis therefore closely related to , .we incur an additional term for the change in movement cost due to ( i ) and( ii ) , and for reassigning clients in .also , since is no longer swapped out , we do not incur any terms that correspond to reassigning clients in .thus , we obtain the following inequality : we are finally ready to derive . an inequality subscripted with an index , like , denotes that inequality for that particular index. it will be useful to define [ segineq ] we take the following linear combination . 
the lhs of is 0 .we analyze the contribution from the terms to the rhs .facilities contribute at most .consider .if , we pick up from parts 1 and 2 , and we may pick up an additional from part 2 if .so overall , we obtain a contribution of at most .next suppose .if and , we gather from part 1 and from part 2 , so the total contribution is .suppose .notice then that , otherwise if , then , which contradicts our assumptions .recall that .we gather from part 1 , and \leq 6.25f^*_i-2.5f_i\bigl(1-\frac{n^*_{o_i}}{w_i}\bigr) ] from part 2 , so the total contribution is at most .we now bound the -contribution .part of this is .we proceed to analyze the remaining contribution .the clients whose remaining contribution is non - zero are of two types : ( i ) clients in , which are reassigned when a location in is swapped in or a location in is swapped out ; and ( ii ) clients in , where , which are charged when we bound the cost or when is swapped out .consider a client .let .its ( remaining ) part-1 contribution is if or , and 0 otherwise .the part-2 contribution is at most ( this happens when ) ; so the total ( remaining ) contribution is at most . for , where , we know that , and so we have already accounted for its contribution of at most above . finally , consider , where .note that .we gather from part 1 if and 0 otherwise , and from part 2 if and otherwise ; so in total we gather at most . putting everything together ,leads to inequality .[ pathlem2 ] let , and .we have we focus on the case where there exists an index such that , , and as otherwise , immediately implies .this case is significantly more involved , in part because when and , we accrue both the term in when is swapped out , and the term in when is swapped out .hence , there is no way of combining to get a compound inequality having both and on the rhs . in order to achieve this, we define a structure called a _ block _ , comprising multiple segments , using which we define additional moves that swap in but swap out neither nor , so that the extent to which is swapped in exceeds the extent to which or are swapped out .we call a set of consecutive -locations a _ block _ , denoted by , if ( i ) and for all , ( ii ) ( recall that if is non - existent , then this condition is not satisfied ) , and ( iii ) or .note that by definition , any two blocks correspond to disjoint subpaths of .we say that are _ siblings _ , denoted by , if and they belong to a common block ; note that this means that neither nor is at the start of a block .we use to denote that and are not siblings .let . before defining the additional swap moves for each block ,we first reconsider and account for the change in cost due to this move differently to come up with a slightly different inequality than .we again start with , and bound in different ways for an edge of .we use incorporating this in yields the following inequality . for every block , we consider the swap moves for all .for each such , we obtain the inequality where we bound by . also consider if or if , where is some arbitrary path in .note that if , then .this yields the inequality we now derive by taking the following linear combination of : as before , the lhs of is 0 , and we analyze the contribution from the terms to the rhs .many of the terms are similar to those that appear in , so we state these without much elaboration .facilities contribute at most .consider .if , then note that does not belong to a block , and we gather at most from parts 1 and 2 .suppose .if , we gather from part 1 . 
let .if does not belong to any block then we pick up from part 2 .otherwise , note that must be the start of a block , therefore , we pick up from parts 3 and 4 .so the overall contribution is at most .next , suppose .let .we pick up from part 1 , ] from part 4 .this amounts to at most total contribution .suppose .we gather from part 1 . if , we gather ] from parts 2 , 3 , and 4 .so the total contribution in both cases is at most . if and , then is the end of some block .we gather from part 1 , and + 9(f^*_i - f_i) ] from part 2 . if belongs to a block , then notice that it can only be the start of the block .so and we gather from parts 3 and 4 . accounting for both cases , we gather at most .now consider the -contribution .this includes the terms and .we bound the remaining contribution . consider a client .let .its ( remaining ) part-1 contribution is if or , and 0 otherwise .if is not in any block , we pick up at most from part 2 .if is the start of a block , we pick up from part 3 , and note that . if is an intermediate -location of a block , then and .we pick up from part 3 , and from part 4 .so the total ( remaining ) contribution is at most in all cases . now consider , where .we have .let .we accrue from part 1 if and 0 otherwise .if does not belong to a block , we gather at most from part 2 ; if belongs to a block , it must be the start of the block , and we gather at most from part 3 .so in total , we accrue at most .thus , leads to inequality .[ cyclem ] let and .define for all .then , the arguments are almost identical to those in the proofs of lemmas [ pathlem1 ] and [ pathlem2 ] .the only change is that we no longer have inequality or .instead , we start with the inequality , and we bound suitably , as in lemma [ pathlem1 ] or lemma [ pathlem2 ] , to obtain and that are analogous to and respectively . & \begin{split } 0\ \leq\ & \sum_{\substack{i\in z : s_i{\ensuremath{\approx}}{\ensuremath{\sigma}}(o_i ) \text { or } \\ { \ensuremath{\sigma}}(o_i)\in z{\ensuremath{\setminus}}a , s_i\notin b } } \negthickspace\negthickspace 2f^*_i + \sum_{\substack{i\in z : { \ensuremath{\sigma}}(o_i)\in a\text { or } \\ { \ensuremath{\sigma}}(o_i)\in c{\ensuremath{\setminus}}a , s_i\in b } } \negthickspace\negthickspace\bigl(2f^*_i-(1-{\ensuremath{\alpha}}_i)f_i\bigr ) + \sum_{\substack{i\in z : { \ensuremath{\sigma}}(o_i)\in z{\ensuremath{\setminus}}c \\ s_i\in b , s_i{\ensuremath{\not\approx}}{\ensuremath{\sigma}}(o_i ) } } \negthickspace\negthickspace(f^*_i - f_i ) \\ & + \sum_{\substack{i\in z : { \ensuremath{\sigma}}(o_i)\in a\text { or } \\s_i\in b , s_i{\ensuremath{\not\approx}}{\ensuremath{\sigma}}(o_i ) } } \sum_{j\in d^*(o_i)}\tfrac{2}{3}(c_j+c^*_j ) .\end{split } \label{cineq2}\end{aligned}\ ] ] the rest of the proof proceeds as in lemmas [ pathlem1 ] and [ pathlem2 ] : we substitute for in the proof of lemma [ pathlem1 ] , and substitute for in the proof of lemma [ pathlem2 ] .it is not hard to see then that we obtain inequality .[ 1swthm ] the cost of a local optimum using 1-swaps is at most times the optimum solution cost .let and denote respectively the total movement- and assignment- cost of the local optimum .for a set , let denote .summing for all and simplifying , we obtain that summing for every path , and or for every cycle , yields the following . 
adding to , we get that this section , we present an example that shows that if the facility - movement costs and the client - assignment costs come from different ( unrelated ) metrics then the -swap local - search algorithm has an unbounded locality gap ; that is , the cost of a local optimum may be arbitrarily large compared to optimal cost .we first show a simple example for a single swap case , which we will later generalize for swaps .suppose we have two clients and two facilities .some distances between these clients and facilities are shown in the fig .[ fig : locgap](a ) ; all other distances are obtained by taking the metric completion . note that in this example , in order to have a bounded movement cost for facilities , the only option is to have one of as a final location of facility and one of as a final location of facility . as can be seen from the figure ,the solution has total cost ( the movement cost is and the client - assignment cost is ) . now consider the solution which has a total cost of ( the movement cost is and the client - assignment cost is ) .this is a local optimum since if we swap out , then we have to swap in to have a bounded movement cost , which leads having assignment cost of . by symmetry , there is no improving move for solution , and the locality gap is .now consider the example shown in fig .[ fig : locgap](b ) for local - search with simultaneous swaps .suppose we have facility set and client set .the global optimum has total cost ( facility movement cost is and client - assignment cost is ) while is a local optimum whose total cost is ( facility movement cost is and client - assignment cost is ) .consider any move .note that iff ( where indices are ) to ensure bounded movement cost .let be such that and .then , has an assignment cost of in the solution .hence , is a local optimum .
we consider the _ mobile facility location _ ( ) problem . we are given a set of facilities and clients located in a common metric space . the goal is to move each facility from its initial location to a destination ( in ) and assign each client to the destination of some facility so as to minimize the sum of the movement - costs of the facilities and the client - assignment costs . this abstracts facility - location settings where one has the flexibility of moving facilities from their current locations to other destinations so as to serve clients more efficiently by reducing their assignment costs . we give the first _ local - search based _ approximation algorithm for this problem and achieve the best - known approximation guarantee . our main result is -approximation for this problem for any constant using local search . the previous best guarantee for was an 8-approximation algorithm due to based on lp - rounding . our guarantee _ matches _ the best - known approximation guarantee for the -median problem . since there is an approximation - preserving reduction from the -median problem to , any improvement of our result would imply an analogous improvement for the -median problem . furthermore , _ our analysis is tight _ ( up to factors ) since the tight example for the local - search based 3-approximation algorithm for -median can be easily adapted to show that our local - search algorithm has a tight approximation ratio of 3 . one of the chief novelties of the analysis is that in order to generate a suitable collection of local - search moves whose resulting inequalities yield the desired bound on the cost of a local - optimum , we define a tree - like structure that ( loosely speaking ) functions as a `` recursion tree '' , using which we spawn off local - search moves by exploring this tree to a constant depth . our results extend to the weighted generalization wherein each facility has a non - negative weight and the movement cost for is times the distance traveled by .
sequence comparison is an important step in many basic tasks in bioinformatics , from phylogenies reconstruction to genomes assembly .it is often realized by sequence alignment techniques , which are computationally expensive , requiring quadratic time in the length of the sequences .this has led to increased research into _ alignment - free _ techniques .hence standard notions for sequence comparison are gradually being complemented and in some cases replaced by alternative ones .one such notion is based on comparing the words that are absent in each sequence .a word is an _ absent word _ ( or a forbidden word ) of some sequence if it does not occur in the sequence .absent words represent a type of _ negative information _ : information about what does not occur in the sequence . given a sequence of length , the number of absent words of length at most is exponential in .however , the number of certain classes of absent words is only linear in .this is the case for _ minimal absent words _ , that is , absent words in the sequence whose all proper factors occur in the sequence .an upper bound on the number of minimal absent words is known to be , where is the size of the alphabet .hence it may be possible to compare sequences in time proportional to their lengths , for a fixed - sized alphabet , instead of proportional to the product of their lengths . in what follows ,we consider sequences on a _ fixed - sized alphabet _ since the most commonly studied alphabet is .an -time and -space algorithm for computing all minimal absent words on a fixed - sized alphabet based on the construction of suffix automata was presented in .the computation of minimal absent words based on the construction of suffix arrays was considered in ; although this algorithm has a linear - time performance in practice , the worst - case time complexity is .new -time and -space suffix - array - based algorithms were presented in to bridge this unpleasant gap .an implementation of the algorithm presented in is currently , and to the best of our knowledge , the fastest available for the computation of minimal absent words .a more space - efficient solution to compute all minimal absent words in time was also presented in . in this article , we consider the problem of comparing two sequences and of respective lengths and , using their sets of minimal absent words . in ,chairungsee and crochemore introduced a measure of similarity between two sequences based on the notion of minimal absent words .they made use of a length - weighted index to provide a measure of similarity between two sequences , using sample sets of their minimal absent words , by considering the length of each member in the symmetric difference of these sample sets .this measure can be trivially computed in time and space provided that these sample sets contain minimal absent words of some bounded length . for unbounded length ,the same measure can be trivially computed in time : for a given sequence , the cumulative length of all its minimal absent words can grow _ quadratically _ with respect to the length of the sequence . the same problem can be considered for two _ circular _ sequences .the measure of similarity of chairungsee and crochemore can be used in this setting provided that one extends the definition of minimal absent words to circular sequences . 
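To make the notion concrete, here is a deliberately naive reference implementation of minimal absent words. It uses the standard characterisation that a word aub (with a, b letters and u possibly empty) is a minimal absent word of y exactly when au and ub occur in y but aub does not; it enumerates all factors explicitly and therefore needs quadratic space and far more time, in contrast to the linear-time suffix-automaton and suffix-array algorithms cited above. It is intended only for checking small examples.

```python
def minimal_absent_words(y, alphabet):
    """Return the set of minimal absent words of y over the given alphabet."""
    factors = {""}
    for i in range(len(y)):
        for j in range(i, len(y)):
            factors.add(y[i:j + 1])
    maws = {a for a in alphabet if a not in factors}      # absent single letters
    for u in factors:
        for a in alphabet:
            if a + u not in factors:
                continue
            for b in alphabet:
                if u + b in factors and a + u + b not in factors:
                    maws.add(a + u + b)
    return maws

# example: minimal_absent_words("abaab", "ab") == {"aaa", "aaba", "bab", "bb"}
```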
in section [ sec : circ_seq_comp ] , we give a definition of minimal absent words for a circular sequence from the formal language theory point of view .we believe that this definition may also be of interest from the point of view of symbolic dynamics , which is the original context in which minimal absent words have been introduced . * our contribution . * here we make the following threefold contribution : a ) : : we present an -time and -space algorithm to compute the similarity measure introduced by chairungsee and crochemore by considering _ all _ minimal absent words of two sequences and of lengths and , respectively ; thereby showing that it is indeed possible to compare two sequences in time proportional to their lengths ( section [ sec : seq_comp ] ) .b ) : : we show how this algorithm can be applied to compute this similarity measure for two circular sequences and of lengths and , respectively , in the same time and space complexity as a result of the extension of the definition of minimal absent words to circular sequences ( section [ sec : circ_seq_comp ] ) .c ) : : we provide an open - source code implementation of our algorithms and investigate potential applications of our theoretical findings ( section [ sec : imp_app ] ) .we begin with basic definitions and notation .let [1]\dd y[n-1] ] the _ factor _ ( sometimes called _ substring _ ) of that starts at position and ends at position ( it is empty if ) , and by the _ empty word _ , word of length 0 .we recall that a prefix of is a factor that starts at position 0 ( ] ) , and that a factor of is a _ proper _ factor if it is not itself .the set of all the factors of the word is denoted by .let be a word of length .we say that there exists an _ occurrence _ of in , or , more simply , that _ occurs in _ , when is a factor of .every occurrence of can be characterised by a starting position in .thus we say that occurs at the _ starting position _ in when ] .let denote the length of the longest common prefix between \dd n - 1] ] for all positions , on , and otherwise .we denote by the _ longest common prefix _ array of defined by ={\textsf{lcp}}{}(r-1 , r) ] .the inverse of the array is defined by = r ] , , and = y[i \dd j] ] , , the -th _ rotation _ of , where . given two words and , we define if and only if there exist , , such that .a _ circular word _ is a conjugacy class of the equivalence relation .given a circular word , any ( linear ) word in the equivalence class is called a _linearization _ of the circular word .conversely , given a linear word , we say that is a _ circularization _ of if and only if is a linearization of .the set of factors of the circular word is equal to the set of factors of whose length is at most , where is any linearization of .note that if and are two rotations of the same word , then the factorial languages and coincide , so one can unambiguously define the ( infinite ) language as the language , where is any linearization of . in section [ sec : circ_seq_comp ] , we give the definition of the set of minimal absent words for a circular word .we will prove that the following problem can be solved with the same time and space complexity as its counterpart in the linear case .the goal of this section is to provide the first linear - time and linear - space algorithm for computing the similarity measure ( see section [ sec : prem ] ) between two words defined over a fixed - sized alphabet . 
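Once the two sets of minimal absent words are available, the length-weighted index itself is immediate to evaluate from their symmetric difference. The per-word weight 1/|w|^2 used below is the weighting proposed by Chairungsee and Crochemore, stated here as an assumption since the exact formula does not survive in this copy; the linear-time merge over suffix-array-sorted tuples described next replaces the naive set operations of this sketch.

```python
def lw_index(maw_x, maw_y):
    """Length-weighted index over the symmetric difference of two MAW sets."""
    return sum(1.0 / len(w) ** 2 for w in maw_x ^ maw_y)

# usage with the naive routine above (hypothetical DNA inputs x and y):
# sim = lw_index(minimal_absent_words(x, "acgt"), minimal_absent_words(y, "acgt"))
```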
to this end , we consider two words and of lengths and , respectively , and their associated sets of minimal absent words , and , respectively .next , we give a linear - time and linear - space solution for the maw - sequencecomparison problem .it is known from and that we can compute the sets and in linear time and space with respect to the two lengths and , respectively .the idea of our strategy consists of a merge sort on the sets and , after they have been ordered with the help of suffix arrays . to this end, we construct the suffix array associated to the word , together with the implicit array corresponding to it .all of these structures can be constructed in time and space , as mentioned earlier .furthermore , we can preprocess the array for range minimum queries , which we denote by . with the preprocessing complete ,the longest common prefix of two suffixes of starting at positions and can be computed in constant time , using the formula + 1,\textsf{isa}[q])]. ] , can be done in time ( using bucket sort , for example ) , we can sort now each of the sets of minimal absent words , taking into consideration the letter on the first position and these ranks .thus , from now on , we assume that where is lexicographically smaller than , for , and , where is lexicographically smaller than , for .provided these tools , we now proceed to do the merge .thus , considering that we are analysing the tuple in and the tuple in , we note that the two are equal if and only if =y_j[0] ] be a linearization of .the word obtained by appending to its first letter , [1 ] \dd x[m-1]x[0] ] belong to .the same argument shows that for any rotation [i+1 ] \dd x[m-1]x[0]\dd x[i-1] ] , obtained by appending to its first letter , belongs to .conversely , if a word of maximal length is in , then its maximal proper prefix and its maximal proper suffix are words of length in , so they must be consecutive rotations of . therefore , the number of words of maximal length in equals the number of distinct rotations of , hence the statement follows .this is in sharp contrast with the situation for linear words , where the set of minimal absent words can be represented on a trie having size linear in the length of the word .indeed , the algorithm mf - trie , introduced in , builds the tree - like deterministic automaton accepting the set of minimal absent words for a word taking as input the factor automaton of , that is the minimal deterministic automaton recognizing the set of factors of .the leaves of the trie correspond to the minimal absent words for , while the internal states are those of the factor automaton .since the factor automaton of a word has less than states ( for details , see ) , this provides a representation of the minimal absent words of a word of length in space .this algorithmic drawback leads us to the second definition .this second definition of minimal absent words for circular strings has been already introduced in .first , we give a combinatorial result which shows that when considering circular words it does not make sense to look at absent words obtained from more than two rotations . 
[ lem : general ] for any positive integer and any word , the set is empty .this obviously holds for all words of length 1 .assume towards a contradiction that this is not the case in general .hence , there must exist a word of length that fulfills the conditions in the lemma , thus and .furthermore , since the length prefix and the length suffix of every minimal absent word occur in the main word at non - consecutive positions , there must exist positions such that =u^{k+1}[i+1\dd i+m-2]=u^{k+1}[j+1\dd j+m-2].\end{aligned}\ ] ] obviously , following equation ( [ eq:1 ] ) , since , we have that ] is also -periodic . thus , following a direct application of the periodicity lemma we have that ] , which leads to a contradiction with the fact that is a minimal absent word , whenever is defined .thus , it must be the case that . using the same strategy and looking at positions ] , we conclude that . therefore , in this case , we have that , which is a contradiction with the fact that the word fulfills the conditions of the lemma .this concludes the proof .observe now that the set consists in fact of all extra minimal absent words generated whenever we look at more than one rotation , that do not include the length arguments .that is , does not include the words bounding the maximum length that a word is allowed , nor the words created , or lost , during a further concatenation of an image of .however , when considering an iterative concatenation of the word , these extra elements determined by the length constrain cancel each other . as observed in section[ sec : prem ] , two rotations of the same word generate two languages that have the same set of factors .so , we can unambiguously associate to a circular word the ( infinite ) factorial language .it is therefore natural to define the set of minimal absent words for the circular word as the set .for instance , if , then we have this second definition is much more efficient in terms of space , as we show below . in particular, the length of the words in is bounded from above by , hence is a finite set .recall that a word is _ a power _ of a word if there exists a positive integer such that is expressed as consecutive concatenations of , denoted by .conversely , a word is _ primitive _ if implies .notice that a word is primitive if and only if any of its rotation is .we can therefore extend the definition of primitivity to circular words .the definition of does not allow one to uniquely reconstruct from , unless is known to be primitive , since it is readily verified that and therefore also the minimal absent words of these two languages coincide .however , from the algorithmic point of view , this issue can be easily managed by storing the length of a linearization of together with the set . moreover , in most practical cases , for example when dealing with biological sequences , it is highly unlikely that the circular word considered is not primitive .the difference between the two definitions above is presented in the next lemma .[ lem : twodef ] clearly , .the statement then follows from the definition of minimal absent words .based on the previous discussion , we set , while the following corollary comes straightforwardly as a consequence of lemma [ lem : general ] .[ lem : circ ] let be a circular word. then .corollary [ lem : circ ] was first introduced as a definition for the set of minimal absent words of a circular word in . 
using the result of corollary [ lem : circ ], we can easily extend the algorithm described in the previous section to the case of circular words .that is , given two circular words of length and of length , we can compute in time and space the quantity .we obtain the following result .[ the : cmaw ] problem maw - circularsequencecomparison can be solved in time and space .we implemented the presented algorithms as programme to perform pairwise sequence comparison for a set of sequences using minimal absent words .uses programme for linear - time and linear - space computation of minimal absent words using suffix array .was implemented in the programming language and developed under gnu / linux operating system .it takes , as input argument , a file in multifasta format with the input sequences , and then any of the two methods , for _ linear _ or _ circular _ sequence comparison , can be applied .it then produces a file in phylip format with the distance matrix as output .cell $ ] of the matrix stores ( or for the circular case ) .the implementation is distributed under the gnu general public license ( gpl ) , and it is available at http://github.com/solonas13/maw , which is set up for maintaining the source code and the man - page documentation . notice that _ all _ input datasets and the produced outputs referred to in this section are publicly maintained at the same web - site .an important feature of the proposed algorithms is that they require space linear in the length of the sequences ( see theorem [ the : maw ] and theorem [ the : cmaw ] ) .hence , we were also able to implement using the open multi - processing ( openmp ) pi for shared memory multiprocessing programming to distribute the workload across the available processing threads without a large memory footprint . * application . *recently , there has been a number of studies on the biological significance of absent words in various species . in ,the authors presented dendrograms from dinucleotide relative abundances in sets of minimal absent words for prokaryotes and eukaryotic genomes .the analyses support the hypothesis that minimal absent words are inherited through a common ancestor , in addition to lineage - specific inheritance , only in vertebrates . very recently , in , it was shown that there exist three minimal words in the ebola virus genomes which are absent from human genome .the authors suggest that the identification of such species - specific sequences may prove to be useful for the development of both diagnosis and therapeutics . in this section ,we show a potential application of our results for the construction of dendrograms for dna sequences with circular structure .circular dna sequences can be found in viruses , as plasmids in archaea and bacteria , and in the mitochondria and plastids of eukaryotic cells .circular sequence comparison thus finds applications in several contexts such as reconstructing phylogenies using viroids rna or mitochondrial dna ( mtdna ) .conventional tools to align circular sequences could yield an incorrectly high genetic distance between closely - related species . indeed ,when sequencing molecules , the position where a circular sequence starts can be totally arbitrary . 
due to this _ arbitrariness _ , a suitable rotation of one sequence would give much better results for a pairwise alignment .in what follows , we demonstrate the power of minimal absent words to pave a path to resolve this issue by applying corollary [ lem : circ ] and theorem [ the : cmaw ] .next we do not claim that a solid phylogenetic analysis is presented but rather an investigation for potential applications of our theoretical findings .we performed the following experiment with synthetic data .first , we simulated a basic dataset of dna sequences using indelible .the number of taxa , denoted by , was set to ; the length of the sequence generated at the root of the tree , denoted by , was set to 2500bp ; and the substitution rate , denoted by , was set to .we also used the following parameters : a deletion rate , denoted by , of _ relative _ to substitution rate of ; and an insertion rate , denoted by , of _ relative _ to substitution rate of .the parameters were chosen based on the genetic diversity standard measures observed for sets of mtdna sequences from primates and mammals .we generated another instance of the basic dataset , containing one _ arbitrary _ rotation of each of the sequences from the basic dataset .we then used this randomized dataset as input to by considering as the distance metric .the output of was passed as input to , an efficient implementation of neighbor - joining , a well - established hierarchical clustering algorithm for inferring dendrograms ( trees ) .we thus used to infer the respective tree under the neighbor - joining criterion .we also inferred the tree by following the same pipeline , but by considering as distance metric , as well as the tree by using the _ basic _ dataset as input of this pipeline and as distance metric . hence , notice that represents the original tree .finally , we computed the pairwise robinson - foulds ( rf ) distance between : and ; and and .let us define _ accuracy _ as the difference between 1 and the relative pairwise rf distance .we repeated this experiment by simulating different datasets and measured the corresponding accuracy .the results in table [ tab : accuracy ] ( see vs. ) suggest that by considering we can always re - construct the original tree even if the sequences have first been arbitrarily rotated ( corollary [ lem : circ ] ) .this is not the case ( see vs. ) if we consider .notice that accuracy denotes a ( relative ) pairwise rf distance of 0 .in this article , complementary to measures that refer to the composition of sequences in terms of their constituent patterns , we considered sequence comparison using minimal absent words , information about what does not occur in the sequences .we presented the first linear - time and linear - space algorithm to compare two sequences by considering _all _ their minimal absent words ( theorem [ the : maw ] ) . in the process, we presented some results of combinatorial interest , and also extended the proposed techniques to circular sequences .the power of minimal absent words is highlighted by the fact that they provide a tool for sequence comparison that is as efficient for circular as it is for linear sequences ( corollary [ lem : circ ] and theorem [ the : cmaw ] ) ; whereas , this is not the case , for instance , using the general edit distance model .finally , a preliminary experimental study shows the potential of our theoretical findings . 
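returning to the experiment above, the accuracy measure (one minus the relative pairwise robinson-foulds distance) can be reproduced with a short sketch. the nested-tuple tree encoding and the normalization by the maximum rf value of fully resolved unrooted trees are our own choices, since the actual analysis relies on established phylogenetics software.

```python
def leaf_set(tree):
    """Leaves of a tree encoded as nested tuples with string leaf names."""
    if isinstance(tree, str):
        return frozenset([tree])
    return frozenset().union(*(leaf_set(child) for child in tree))


def _collect_clades(tree, acc):
    if isinstance(tree, str):
        return frozenset([tree])
    here = frozenset().union(*(_collect_clades(child, acc) for child in tree))
    acc.append(here)
    return here


def bipartitions(tree):
    """Non-trivial bipartitions induced by the internal edges of the tree."""
    leaves = leaf_set(tree)
    clades = []
    _collect_clades(tree, clades)
    splits = set()
    for clade in clades:
        rest = leaves - clade
        if len(clade) >= 2 and len(rest) >= 2:
            splits.add(frozenset([clade, rest]))
    return splits


def rf_accuracy(tree_a, tree_b):
    """1 minus the relative Robinson-Foulds distance; normalizing by
    2 * (n - 3), the maximum RF value for fully resolved unrooted trees,
    is an assumption of this sketch."""
    rf = len(bipartitions(tree_a) ^ bipartitions(tree_b))
    n = len(leaf_set(tree_a))
    max_rf = 2 * (n - 3)
    return 1.0 - rf / max_rf if max_rf > 0 else 1.0


t_original = ((("A", "B"), "C"), ("D", "E"))
t_inferred = ((("A", "C"), "B"), ("D", "E"))
print(rf_accuracy(t_original, t_original))   # 1.0
print(rf_accuracy(t_original, t_inferred))   # 0.5
```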
our immediate target is to consider the following _ incremental _ version of the same problem : given an appropriate encoding of a comparison between sequences and ,can one incrementally compute the answer for and , and the answer for and , efficiently , where is an additional letter ?incremental sequence comparison , under the edit distance model , has already been considered in .in , the authors considered a more powerful generalization of the -gram distance ( see for definition ) to compare and .this generalization comprises partitioning and in blocks each , as evenly as possible , computing the -gram distance between the corresponding block pairs , and then summing up the distances computed blockwise to obtain the new measure .we are also planning to apply this generalization to the similarity measure studied here and evaluate it using real and synthetic data .we warmly thank alice heliou for her inestimable code contribution and antonio restivo for useful discussions .gabriele fici s work was supported by the prin 2010/2011 project `` automi e linguaggi formali : aspetti matematici e applicativi '' of the italian ministry of education ( miur ) and by the `` national group for algebraic and geometric structures , and their applications '' ( gnsaga indam ) .robert merca s work was supported by the p.r.i.m.e .programme of daad co - funded by bmbf and eu s 7th framework programme ( grant 605728 ) .solon p. pissis s work was supported by a research grant ( # rg130720 ) awarded by the royal society .barton , c. , iliopoulos , c.s ., kundu , r. , pissis , s.p . ,retha , a. , vayani , f. : accurate and efficient methods to improve multiple circular sequence alignment . in : sea , lncs ,vol . 9125 , pp .247258 ( 2015 )
sequence comparison is a prerequisite to virtually all comparative genomic analyses . it is often realized by sequence alignment techniques , which are computationally expensive . this has led to increased research into alignment - free techniques , which are based on measures referring to the composition of sequences in terms of their constituent patterns . these measures , such as -gram distance , are usually computed in time linear with respect to the length of the sequences . in this article , we focus on the complementary idea : how two sequences can be efficiently compared based on information that does not occur in the sequences . a word is an _ absent word _ of some sequence if it does not occur in the sequence . an absent word is _ minimal _ if all its proper factors occur in the sequence . here we present the first linear - time and linear - space algorithm to compare two sequences by considering _ all _ their minimal absent words . in the process , we present results of combinatorial interest , and also extend the proposed techniques to compare circular sequences .
the aim of this paper is to provide a method to determine the location and scale relationship between two groups of one - dimensional observations , say two samples and , such as the responses to two different products from different subjects , the scores of people on two examinations , and so on . suppose are independent and identically distributed according to and are independent and identically distributed according to , where and are two unknown distribution functions . when the test for normality is not passed , nonparametric analysis methods should be applied . usually , the mean difference or median difference of the two samples is used to determine the location difference , and the mean ratio or median ratio is used to obtain the scale . these summaries are not reliable , since only a small part of the information in the two samples is extracted and the results may not be meaningful . based on the idea of a location - scale transformation , freitag , munk and vogt have developed an approach to assess the structural relationship between distributions , in which the whole information in the samples is used . however , the problem we are concerned with is the location difference and scale between two distributions rather than the model structure . that is to say , we have to determine the values of the location difference , the scale , or both for any two distributions . using the same transformation idea , our approach can be described as follows . let be a linear function , i.e. , , where and . is a given measure of discrepancy between two distributions . denote the distribution function of by . let . we want to find the value which minimizes . that is to say , if we transform to , the two groups of observations are closest , and in this `` closest '' sense we cannot tell whether there is any location difference or scale between them . therefore , we can say is at least larger than times of . there are situations where is not unique . let , and . conservatively , we can first take if , else ; then take to be the value in satisfying . when is a continuous region , it is easy to see that the selected is unique . therefore , we should find certain such that is a continuous region . moreover , if we let in , the location difference between and could be determined . if we let in , the scale between and could be determined . in practice , we can use the empirical distributions of the two groups of data for and , respectively . the discrepancy measures we consider in this paper focus on the mallows distance and extend to the kolmogorov - smirnov distance . the mallows distance was presented within a statistical framework in 1972 ; however , independent work in physics had involved a closely related concept a little earlier , in the 1940s . the rest of the paper is organized as follows . in section 2 , the main results under the mallows distance for the location transformation , the scale transformation , or both are presented , showing that we can uniquely determine the location and scale relationship between two distributions and thus that the mallows distance is a suitable discrepancy measure to use . in section 3 , similar results are obtained under the kolmogorov - smirnov distance , but only for the location transformation . section 4 gives the application of this approach to determine the location and scale relationship on real data . in this subsection , we consider the proposed approach under the mallows distance .
formally , the mallows -distance ( also known as wasserstein -distance ) between distributions and regarding to random variables and , respectively , is defined as where the infimum is taken over the set ( denoted by ) of all joint distributions of and with marginals and . herewe require that and have finite moment , i.e. , and . for , the mallows -distance has the two properties .* mallows distance , i.e. satisfies axioms of a metric on . * the convergence of distributions in mallows distance is equivalent to weak convergence plus moment convergence.(lavina and bickel , 2001 ) let be an uniform random variable , , and is the inverse of a distribution function , .according to johnson and samworth ( 2005 ) we know equation ( [ eq - u ] ) gives an easier computation formula to calculate the distance , that is particularly , when , we have a further relationship for computation which is especially useful when calculating mallows 1-distance using empirical distribution for real data , in order to circumvent the unknown real distribution .let , where and , and be its distribution function , then it is easy to obtain that .the purpose of our approach is to find the optimal shift and scale values to minimizing the mallows -distance between and , that is then the following result can be obtained . [ thr-1 ] for distribution functions and , let with .then the mallows -distance ( ) between and , denoted by , a function of two variables and , is a continuous and convex function on half plane , i.e. , for any , and , , it holds that it can be easily obtained that . from ( [ eq - uu ] ), we know then the continuity of the is trivial . besides , using minkowski unequality we have according to the definition of theorem [ thr-1 ] , scaled parameter should be greater than zero , but we can easily give an apparent analysis of transformed distribution function if . also , the theorem shows that under mallows -distance ( ) is a convex function of , thus ( [ eq-1 ] ) is a continues region. we can select according to the plan in section [ sec : intro ] . if we only consider the shifted case or scaled case , let or in ,then we can obtain the following results .for distribution functions and , let .then the mallows -distance ( ) between and , denoted by , a function of , is a continuous and convex function on , i.e. for any , and , it holds that for distribution functions and , let with .the mallows distance ( ) between the scaled distribution and , denoted as , a function of , is a continuous and convex function on , i.e. for any , and , it holds that in order to illustrate may not be strictly convex , let distributions and to be actually , is the uniform distribution over two half unit intervals ] , and is uniform over ] .the optimal shifted value for is not unique .therefore , is not strictly convex on , nor is .since our approach is successful under mallows distance , there is nothing preventing us from exploring other discrepancy measure . here , we are able to realize our approach for shifted case under kolmogorov - smirnov distance ( k - s distance ) , . for k - s distance and distribution functions and ,our purpose is to find the optimal shift value to minimize .let and for distribution .define ] , then we have .and we denote and . due to these definitions ,the statement is apparently hold and we have the following result .[ thr-2 ] if , then the function decreases on ] . similarly , , if , then we have if , then holds and . by ( [ eq-3 ] ) , we can obtain .thus . therefore , increases on . 
the proof can follow the similar method as trivially .theorem [ thr-2 ] shows that our approach under k - s distance can also provide a reasonable , possibly unique location difference between two distributions .in hair study , we need to assess effets of hair care products in changing hair diameters after a period of use . there are two treatments , say and .the experiments are conducted as follows .there are 30 subjects and each subject use and on the left and right head , respectively .there are two study visit , baseline and 8 weeks later . at each study visit ,hair diameters are measured on several hundred of hairs on left and right head from a subject .comparison between visits is to compare the distributions of hair diameters at two visit point .the diameters from one subject often follows non - traditional distributions . for example , a subject at baseline and 8 weeks later hair diameter frequency plot for treatment are shown in figure [ fig1 ] and the distributions are shown in figure [ fig2 ] .it is of importance to know holistically how much diameters have changed . for each subject ,the optimal shift amount of the distributions of hair diameters at the two visit point for each treatment under mallows distance and k - s distance can be obtained .for instance , the shift plots for and of a subject are displayed in figure [ fig3 ] and shift plots of all subjects for are displayed in figure [ fig4 ] .the shift corresponding to the minimum distance is the difference between two distributions , for comparison analysis .shift plot of all subject for ,width=345 ] use the optimal values for each subject each treatment as responses and perform wilcoxon signed rank test on differences in shifts between treatments for all subjects to detect a difference between the treatments .the results are shown in table [ tab-1 ] , from which we concludes that the two treatments are difference at 0.01 level ..[tab-1]comparison of treatments under distance shift [ cols="^ , < , < , < , < , < , < " , ]in this paper , we demonstrated a significant theorem relating to how to measure two distribution within the probabilistic interpretation under mallows distance or k - s distance , and a well studied simulation on real data had been implemented for an illustration of this method . the solid theoretical foundation would be beneficial to others who would have a further understanding or research on mallows distance measures the discrepancy between two distributions , especially two similar distributions with inner relationship .besides those distances , there might be possibility to use other divergence measures to be minimized after proper transformations .comparison among various underlying divergence measures will be of interest .this is an area of research that we continue to pursue provided available resources and interests .in addition , a comparison of this approach versus other methods is another topic to be investigated .freitag , g. , munk , a. , vogt , m. 2003 ._ assessing structural relationships between distributions - a quantile process approach based on mallow s distance ._ in : recent advances and trends in nonparametric statisticsakritas , m. g. , politis , d. n. , amsterdam : elsevier b. v. , 123 - 137 .
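as a numerical companion to the approach developed in this paper, the following sketch works with empirical distributions: for two samples of equal size, the mallows 1-distance reduces to the mean absolute difference of order statistics, the two-sample kolmogorov-smirnov distance is computed from the empirical cdfs, and the optimal shift and scale are found by a plain grid search. the grid ranges, the sample sizes, and the equal-size assumption are our own choices, not details of the paper.

```python
import numpy as np


def mallows1(x, y):
    """Empirical Mallows 1-distance between two samples of equal size:
    the mean absolute difference of order statistics (quantile coupling).
    For unequal sizes one would interpolate the empirical quantiles."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))


def ks_distance(x, y):
    """Two-sample Kolmogorov-Smirnov distance between empirical cdfs."""
    grid = np.concatenate([x, y])
    fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(fx - fy))


def best_shift_scale(x, y, dist=mallows1,
                     scales=np.linspace(0.5, 5.0, 46),
                     shifts=np.linspace(-5.0, 5.0, 101)):
    """Grid search for (a, b) minimizing dist(a * x + b, y)."""
    best = (np.inf, 1.0, 0.0)
    for a in scales:
        for b in shifts:
            d = dist(a * x + b, y)
            if d < best[0]:
                best = (d, a, b)
    return best


rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 400)
y = rng.normal(2.0, 3.0, 400)
for dist in (mallows1, ks_distance):
    d, a, b = best_shift_scale(x, y, dist=dist)
    print(f"{dist.__name__}: scale ~ {a:.2f}, shift ~ {b:.2f}, residual {d:.3f}")
```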
although many measures are used to determine the difference between two groups of observations , such as the mean , the median , the sample standard deviation and so on , we propose a novel nonparametric transformation method based on the mallows distance to investigate the location and variance differences between the two groups . the convexity theory of this method is established , and thus it is a viable alternative for data from any distribution . in addition , we are able to establish a similar method under other distance measures , such as the kolmogorov - smirnov distance . an application of our method to real data is presented as well . mallows distance ; shift and scale ; kolmogorov - smirnov distance
a network formalism is often very useful for describing complex systems of interacting entities .scholars in a diverse set of disciplines have studied networks for many decades , and network science has experienced particularly explosive growth during the past 20 years .the most traditional network representation is a static graph , in which nodes represent entities and edges represent pairwise connections between nodes .however , many networks are time - dependent or multiplex ( include multiple types of connections between nodes ) .moreover , network structure is influenced profoundly by spatial effects . to avoid discarding potentially important information , which can lead to misleading results , it is thus crucial to develop methods that incorporate features such as time - dependence , multiplexity , and spatial embeddedness in a context - dependent manner . because of the newfound wealth of rich data , it has now become possible to validate increasingly complicated network structures and methods using empirical data . in the present paper ,we study a mesoscale network structure known as _ community structure_. a `` community '' is a set of nodes with dense connections among themselves , and with only sparse connections to other communities in a network .communities arise in numerous applications .for example , social networks typically include dense sets of nodes with common interests or other characteristics , networks of legislators often contain dense sets of individuals who vote in similar ways , and protein - protein interaction networks include dense sets of nodes that constitute functional units .the algorithmic detection of communities and the subsequent investigation of both their aggregate properties and the properties of their component members can provide novel insights into the relationship between network structure and function ( e.g. , functional groupings of newly discovered proteins ) .myriad community detection methods have been developed .the most popular family of methods entails the optimization of a quality function known as _modularity _ . to optimize modularity ,one compares the actual network structure to some _ null model _ , which quantifies what it means for a pair of nodes to be connected `` at random '' .traditionally , most studies have randomized only network structure ( while preserving some structural properties ) and not incorporated other features ( such as spatial or other information ) . the standard null model for modularityoptimization is the `` newman - girvan '' ( ng ) null model , in which one randomizes edge weights such that the expected strength distribution is preserved .it is thus related to the classical configuration model .it has become very popular due to its simplicity and effectiveness , and it has been derived systematically through the consideration of laplacian dynamics on networks . however , it is also a naive choice , as it does not incorporate domain - specific information .the choice of a null model is an important consideration because ( 1 ) it can have a significant effect on the community structure obtained via optimization of a quality function , and ( 2 ) it changes the interpretation of communities .the best choice for a null model depends on both one s data set and scientific question . in the present paper, we explore the issue of null model choice in detail in the context of spatially embedded and temporal networks . 
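for reference, the ng expectation for a weighted network can be written down in a few lines; the small example matrix below is made up purely for illustration.

```python
import numpy as np


def ng_null_model(W):
    """Newman-Girvan null model for a weighted adjacency matrix W: the
    expected weight between i and j is s_i * s_j / (2m), where s_i is the
    strength of node i and 2m is the total strength; this preserves the
    expected strength of every node."""
    s = W.sum(axis=1)
    return np.outer(s, s) / s.sum()


W = np.array([[0.0, 2.0, 1.0, 0.0],
              [2.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 3.0],
              [0.0, 0.0, 3.0, 0.0]])
P = ng_null_model(W)
print(W.sum(axis=1))   # observed strengths
print(P.sum(axis=1))   # expected strengths under the null model (identical)
```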
most existing research on community detectiondoes not incorporate metadata about nodes ( or edges ) or information about the timing and location of interactions between nodes .however , with the increasing wealth of space - resolved and time - resolved data sets , it is important to develop community detection techniques that take advantage of the additional spatial and temporal information ( and of domain - specific information , such as generative models for human interactions ) .indeed , community detection in temporal networks has become increasingly popular , but the majority of methods use networks that are constructed from either static snapshots of data or aggregations of data over time windows .few investigations of community structure in temporal networks have used methods that exploit temporal structure ( see , e.g. , ) .there is also starting to be more work on the influence of space on community structure , but much more research is necessary . in the present paper ,we use modularity maximization to study communities in spatially embedded and time - dependent networks .we compare the results of community detection using two different spatial null models a _ gravity null model _ and a new _ radiation null model _ to the standard ng null model using novel synthetic benchmark networks that incorporate spatial effects via distance decay or disease flux as well as temporal correlation networks that we constructed using time - series data of recurrent epidemic outbreaks in peru .we also evaluate a recently - proposed _ correlation null model _ , which was developed specifically for correlation networks that are constructed from time series , on the epidemic - correlation data .our direct analysis of disease data in the present paper provides a complementary ( e.g. , more systemic ) approach to the majority of studies using network science methodology in this field , which focus on the importance of interpersonal contact networks on the disease spread on an individual level .these types of network methods have become increasingly prevalent in the modeling of infectious diseases .our work also complements other approaches , such as large - scale compartmental models that incorporate transportation networks to link local populations .such models have been used to study large - scale spatial disease spread ( e.g. , to examine the influence of features such as spatial location , climate , and facility of transportation on phenomena such as disease persistence and synchronization of disease spread ) .the rest of the present paper is organized as follows . in section[ section : networks ] , we give an overview of networks and community detection .we also discuss the gravity null model and introduce a new radiation null model .we give our results for synthetic spatial networks in section [ section : benchmarks ] , and we give our results for correlation networks that we construct from disease data in section [ section : dengue ] .we summarize our results in section [ section : conclusions ] . in appendices ,we include the results of additional numerical experiments from varying parameters in the benchmark networks .we also include an additional examination of the similarity between network partitions for the benchmarks and the dengue fever correlation networks .a network describes a set of entities ( called _ nodes _ ) that are connected by pairwise relationships ( called _ edges _ ) . 
in the present paper ,we study weighted networks which are _ spatially embedded _ : each node represents a location in space .one can represent a weighted network with nodes as an adjacency matrix , where an edge represents the strength of the relationship between nodes and .we seek to find _ communities _ , which are sets of nodes that are densely connected to each other but sparsely connected to other dense sets in a network .we wish to study the evolution of network structure through time .the simplest way to represent temporal data is through an ordered set of _ static networks _, which can arise either as snapshots at different points in time or as a sequence of aggregations over consecutive time windows ( which one can take either as overlapping or nonoverlapping ) .static networks provide a good starting point for the development and investigation of new methods which , in our case , entails how to incorporate spatial information into null models for community detection via modularity maximization .however , they do not take full advantage of temporal information in data that changes in time .for example , it can be hard to track the identity of communities in temporal sequences of networks . to mitigate the community - tracking problem, we also use a type of _ multilayer network _ known as a multislice network .this gives an adjacency tensor that has layers and nodes in each layer , where each layer has a copy each node .the intralayer edges in the network are exactly the same as they were for the sequence of static networks : the tensor element gives the weight of an intralayer edge between nodes and in layer .additionally , each node is connected to copies of itself in consecutive layers and using interlayer edges of weight . in this paper , we will suppose for simplicity that , but one can also consider more general situations .a multislice network can have up to ( ) _ multilayer nodes _( i.e. , node - layer tuples ) , each of which corresponds to a specific ( node , time ) pair .hence , this structure makes it possible to detect temporally evolving communities in a natural way . for our computations of community structure ,we flatten the adjacency tensor into a adjacency matrix , such that the intralayer connections are on the main block diagonal and the interlayer connections occur on the off - block - diagonal entries .we detect communities by maximizing modularity , which we use to describe the `` quality '' of a particular network partition into communities in terms of its departure from a null model .the null model amounts to a prior belief regarding influences on network structure , so it is important to carefully consider the choice of null model . for a weighted static network , modularity is where is the total edge weight , denotes the community that contains node , the function is the kronecker delta , and is the -th element of the null model matrix .one can examine different scales of community structure by incorporating a resolution parameter .smaller values of tend to yield larger communities and vice versa . for multislice networks ,modularity is given by \delta ( { \mkern 1.5mu\overline{\mkern-1.5muc\mkern-1.5mu}\mkern 1.5mu}_{is } , { \mkern 1.5mu\overline{\mkern-1.5muc\mkern-1.5mu}\mkern 1.5mu}_{jr})}\,,\ ] ] where , the quantity denotes the community that contains node in layer , and is the -th element of the null model tensor in layer . 
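a minimal sketch of this flattening and of scoring a given multilayer partition follows; the ng intralayer null model, scalar gamma and omega, the layer-major ordering of node copies, and the normalization by total intralayer strength plus interlayer coupling are assumptions of the sketch, since conventions for multislice modularity vary.

```python
import numpy as np


def ng_null(A):
    s = A.sum(axis=1)
    return np.outer(s, s) / s.sum()


def supra_modularity_matrix(layers, gamma=1.0, omega=1.0):
    """Flatten a list of N x N intralayer adjacency matrices into a single
    (N*L) x (N*L) modularity matrix: blocks A_s - gamma * P_s on the block
    diagonal, and omega * I coupling the copies of each node in consecutive
    layers on the off-diagonal blocks."""
    N, L = layers[0].shape[0], len(layers)
    B = np.zeros((N * L, N * L))
    for s, A in enumerate(layers):
        here = slice(s * N, (s + 1) * N)
        B[here, here] = A - gamma * ng_null(A)
        if s + 1 < L:
            nxt = slice((s + 1) * N, (s + 2) * N)
            B[here, nxt] = B[nxt, here] = omega * np.eye(N)
    return B


def multislice_modularity(layers, labels, gamma=1.0, omega=1.0):
    """labels[s * N + i] is the community of node i in layer s."""
    N, L = layers[0].shape[0], len(layers)
    B = supra_modularity_matrix(layers, gamma, omega)
    two_mu = sum(A.sum() for A in layers) + 2.0 * omega * (L - 1) * N
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return (B * same).sum() / two_mu
```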
to detect communities via modularity maximization ,one searches the possible network partitions for the one with the highest modularity score .because exhaustive search over all possible partitions is computationally intractable , practical algorithms invariably use approximate optimization methods ( e.g. , greedy algorithms , simulated annealing , or spectral optimization ) , and different approaches offer different balances between speed and accuracy . in the present paper , we optimize modularity using a two - phase iterative procedure similar to the louvain method . however , rather than using the adjacency matrix , we work with the modularity matrix with elements for static networks and with the modularity tensor with elements for multislice networks . the employed louvain - like algorithm is stochastic , and a modularity landscape for empirical networks typically includes a very large number of nearly - optimal partitions .for each of our numerical experiments , we thus apply the computational heuristic 100 times to obtain a _ consensus community structure _ by constructing an _ association matrix _ ( where the entries represent the fraction of times that nodes and are classified together in the 100 partitions ) and performing community detection on using the uniform null model ] by dividing each entry by the maximum weight in the network . with our _ flux benchmark _ , we aim to mimic the spread of disease on a network .we allocate its edge weights depending on the mean flux between pairs of nodes that is predicted by the radiation model .we place nodes uniformly at random on the lattice , and we assign populations and communities in the same manner as for the distance benchmark . again as with the distance benchmark , we consider both uniform - population and random - population versions of the flux benchmark . now , however , the edge probability is directly proportional to the mean predicted radiation - model flux between nodes and ( , which is turn is inversely proportional to distance ) : where is a normalization constant to ensure that . in table [ table - benchmarks ] , we summarize the four synthetic benchmark networks that we have just introduced ..primary characteristics ( i.e. , population and edge probability ) for the distance and flux benchmarks for static networks .the quantity signifies that select a number uniformly at random from the set .additionally , if nodes and are in the same community and otherwise , is the distance between nodes and in space , and and are normalization constants . [cols="<,<,<",options="header " , ] we create both static ( i.e. , single - layer ) and multilayer benchmarks networks .the static benchmarks enable us to study the performance of modularity maximization using a chosen null model in a simple setting without the additional complications of a multilayer network . however , the multilayer benchmarks are ultimately more appropriate for disease data because they can incorporate temporal evolution .we begin by placing nodes in space and assigning populations in the same manner as for the static benchmarks .we then assign nodes uniformly at random into one of two communities , and we extend this structure into a multilayer planted community structure with layers .for the `` temporally stable '' benchmarks , the planted community structure is the same for each layer . for the `` temporally evolving '' multilayer benchmarks , we change the community assignment of a fraction of the nodes . 
for each of these nodes, we select a new community assignment uniformly at random , and we change the community of the node in each layer ; we start at a layer that we select uniformly at random , and we also change the assignment ( to the same new community ) in all remaining layers .we then generate the edges for each layer independently , in the same manner as we generate a static benchmark and using identical parameter values for each ; see fig .[ figure : benchmarks - multislice ] .independent generation of each layer based on the same starting conditions represents differences between observations due to noise and experimental variation . for each of the above types of multilayer benchmarks, we set the value of the interlayer edges between corresponding nodes in consecutive layers to be ] facilitates interpretation and comparisons .we use nmi in the following sections , and we obtain the same qualitative conclusions using variation of information , which is a different normalized measure of similarity. see appendix [ appendix : vi ] for our comparisons using vi . , and ( right ) , , edge density parameter and uniform populations of for different bin sizes ( colored curves ) .we detect communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : bench - static - even - d - nmi - spacesizevsnm ] ] to emphasize the difference between the gravity and radiation null models , we take and to obtain a relatively densely filled lattice .( see appendix [ appendix : cities ] for the results for a synthetic network with parameter values and . )we first compare this benchmark versus a situation with parameter values and ( which are the parameter values that were used in expert et al .we test varying bin sizes in uniformly - spaced bins using the parameter values , and , .we find that bin width makes a large difference on both benchmarks : produces the highest nmi scores ( i.e. , it has the `` best performance '' ) and increasing bin width leads to a decrease in performance of both spatial null models ( see fig . [ figure : bench - static - even - d - nmi - spacesizevsnm ] ) .this effect is especially pronounced for the gravity null model . 
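the nmi scores reported in what follows can be computed with a standard joint-count estimator, shown here for completeness; the normalization (twice the mutual information divided by the sum of the partition entropies) is one common convention and may differ in detail from the implementation used for the figures.

```python
import numpy as np
from collections import Counter


def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same
    node set: 2 * I(A; B) / (H(A) + H(B)), from joint label counts."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    pa, pb = Counter(labels_a), Counter(labels_b)
    mi = sum(c / n * np.log(n * c / (pa[a] * pb[b])) for (a, b), c in joint.items())
    ha = -sum(c / n * np.log(c / n) for c in pa.values())
    hb = -sum(c / n * np.log(c / n) for c in pb.values())
    return 2.0 * mi / (ha + hb) if ha + hb > 0 else 1.0


print(nmi([0, 0, 1, 1, 2, 2], [5, 5, 7, 7, 9, 9]))   # 1.0 (same partition, relabelled)
print(nmi([0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]))   # 0.0 (independent partitions)
```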
in both of the cases above , the best aggregate performance of the spatial null models at optimal bin sizes is similar for and , so we henceforth use the benchmark with to lower computational time and memory usage . however , one needs to keep the strong influence of bin size on algorithm results in mind for applications . nmi between algorithmically detected and planted partitions for static benchmarks with , , and uniform populations of ( columns 1 , 2 ) or populations determined uniformly at random from the set ( columns 3 , 4 ) . we plot nmi for different values of the resolution parameter ( colored curves ) as a function of inter - community connectivity . we examine both distance benchmarks ( columns 1 , 3 ) and flux benchmarks ( columns 2 , 4 ) , and we detect communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models . [ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] we then study the performance of the three null models using several values of the resolution parameter and the inter - community connectivity on static benchmarks with nodes and lattice size parameter . smaller values of tend to yield larger communities and vice versa . considering larger increases the level of mixing between the communities and makes community detection more difficult . for simplicity , we fix the density parameter . as we discuss in appendix [ appendix : mu ] , the value of has little effect on the results of community detection when it is above a certain minimum . for the uniform population distance benchmark , the only factor that influences edge placement is the distance between nodes . on this benchmark , the gravity null model has the best performance , as it is able to find the correct partitions for ( see fig . [ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] ) . the radiation null model has the second - best performance and is able to find partially meaningful partitions for , above which we observe a plateau of `` near - singleton '' partitions in which most nodes are placed into singleton communities . ( we use the term `` singleton partition '' to refer to a partition in which every node is assigned to its own community . ) the ng null model , which does not incorporate spatial information , does much worse than either of the spatial null models ; it suffers a sharp decline in performance at . this demonstrates that , although incorporating spatial influence can be beneficial , using a null model that incorporates population information to study community structure in networks whose structure does not depend on population decreases the performance of community detection . that is , incorporating spatial information is important , but it needs to be done intelligently . on the uniform population flux benchmark , in which we include the population density in the region between two nodes in the flux prediction ( so the population density influences edge structure ) , the radiation null model outperforms the other null models . the gravity null model comes in second place , and the ng null model is a distant third . for the random population distance benchmark , we observe a fast deterioration in the quality of the detected communities for for all null models , and all null models reach a `` near - singleton '' regime by . the ng null model has the best performance among the three null models for . for , the gravity null model has the best performance , although the partitions consist largely of singletons for . for the random population flux benchmark , the radiation null model has the best performance of the three null models . it has the slowest decrease in nmi scores with the increase in . the gravity null model has the second - best performance , and ng fails even when there is no mixing between the two communities ( see fig .
[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] ) .however , even the best performance is much worse on random population benchmarks than it is on the uniform population benchmarks .note additionally that including population information into the edge placement probability by taking ( `` distance and population benchmark '' ) brings back the advantage for the gravity null model ( see appendix [ app : benchmark - distpop ] ) . among the parameter valuesthat we consider ( ) , appears to give the best results ( i.e. , the largest nmi scores ) . in the near - singleton regime, outperforms it slightly ( see fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] ) , however this partition is vastly different from the planted partition .we now study the influence of the resolution parameters and on the community quality of multilayer benchmarks .we first study the performance of the ng , gravity , and radiation null models on temporally stable uniform population benchmarks ( see fig .[ figure : benchmark : multi - gammavsnm ] ) with parameter values , , and layers using and .we expect that for larger values the weight of the interlayer edges outweighs the intralayer edges , leading to each node being assigned to the same community as its copies in other layers .however , for the temporally stable benchmarks we did not observe this effect ; here , we only show figures for , as different values of give very little difference in results ( in some plots nearly unnoticeable ) .we also experimented with `` random population '' benchmarks ( see appendix [ appendix : multi - random ] ) and smaller and larger values of .our results on multilayer benchmarks follow our findings from static benchmarks .once again , we find that the choice of has a large influence on the quality of the algorithmic partitions , and ( as with our findings for static benchmarks ) seems to yield the best performance ( i.e. , the highest nmi scores ) in most cases , except the near - singleton regime , where outperforms it slightly . for all ) multilayer temporally stable spatial benchmarks with , , , and for and various values of ( colored curves ) as a function of for ( left ) the distance benchmark and ( right ) the flux benchmark .we detect communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : benchmark : multi - gammavsnm ] ] we now examine the nmi between algorithmic versus planted partitions on temporally stable multilayer benchmarks while varying and for fixed . as we show in fig .[ figure : benchmark : multi - omegavsnm ] , we find that the value of usually has little effect on our ability to detect the planted communities via modularity maximization on benchmarks with a temporally stable community structure .this suggests that the small interlayer variation due to the independent creation of layers is not enough to observe the influence of on community detection . 
for all ) multilayertemporally stable spatial benchmarks with , , , and for and different values of interlayer edge weights ( colored curves ) as a function of for ( left ) the distance benchmark and ( right ) the flux benchmark .we detect communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : benchmark : multi - omegavsnm ] ] we then study the performance of the three null models on temporally evolving uniform population benchmarks ( see fig .[ figure : benchmark : multi - t - gammavsnm ] ) with parameter values of nodes , a lattice parameter of , a fraction of nodes that change community over the whole timeline , and layers .we show results for for and for for .compare fig .[ figure : benchmark : multi - t - gammavsnm ] to the left panels of figs .[ figure : benchmark : multi - gammavsnm ] and [ figure : benchmark : multi - omegavsnm ] . ontemporally evolving benchmarks varying makes a difference , where the structures for for the gravity null model and for the radiation null model are the most similar to the planted partitions .this is in accordance with our expectation that algorithmically detected community structure becomes overly biased towards connecting copies of nodes across layers above a critical value ( which depends on network structure ) .for all i ) multilayer temporally evolving spatial distance benchmarks with , , , and for ( left ) and different values of the resolution parameter ( colored curves ) and ( right ) and different values of the interlayer weights ( colored curves ) as a function of .we detect communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : benchmark : multi - t - gammavsnm ] ] we also perform a `` province - level '' community detection on the multilayer benchmarks in which we seek assignments of nodes ( regardless of what layer they are in ) to communities and compare the results to benchmark networks with planted community structure .this is analogous to trying to detect community structure in disease data that persists over time e.g. , to seek the influence of climate on disease patterns .this is easiest to apply to temporally stable multilayer networks .we successfully detect the underlying communities , and we obtain similar performance results as with the multilayer communities that we discussed above ( see the discussion in appendix [ appendix : bench - regions ] ) .our results on synthetic benchmark networks suggest that using a spatial null model on a spatial network does not necessarily assure a better result for community detection .the quality of results with different null models depends strongly on the data and the choice of parameter values .for example , incorporating population information into a null model in a situation in which the population is not influencing connectivity structure might cause community detection to yield spurious communities ( as we discussed in the context of random population benchmarks ) .the level of influence of different node properties or events ( such as disease flux on edge placement ) and the extent of mixing between communities is often unknown for networks that are constructed from real data . for such networks ,we recommend to try both spatial and non - spatial null models over a wide parameter range and to study the results carefully in light of any other known information about the network . 
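as a concrete illustration of the benchmark construction used in this section, the sketch below places nodes on a square patch, plants two communities, and draws edges with a propensity that either decays with distance or follows a radiation-model flux, boosted for intra-community pairs. the radiation flux expression is the commonly quoted one and, like the rescaling to a target edge density and all default parameter values, is an assumption of this sketch rather than the exact construction behind the figures above; any modularity heuristic can then be run on the output and its recovered labels scored against `planted`, for example with the nmi sketch given earlier.

```python
import numpy as np


def radiation_flux(pos, pop, i, j):
    """Mean flux from i to j under a radiation-type model, assumed here as
    m_i * m_j / ((m_i + s_ij) * (m_i + m_j + s_ij)), with s_ij the total
    population within distance d_ij of node i, excluding i and j."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    s = pop[d <= d[j]].sum() - pop[i] - pop[j]
    return pop[i] * pop[j] / ((pop[i] + s) * (pop[i] + pop[j] + s))


def make_benchmark(n=100, side=20.0, lam=10.0, density=0.05, mode="distance", rng=None):
    """Toy planted two-community spatial benchmark: edge propensity decays
    with distance ("distance") or follows the radiation flux ("flux"), is
    multiplied by lam for intra-community pairs, and is rescaled so that
    the expected edge density equals `density`."""
    rng = rng or np.random.default_rng(0)
    pos = rng.uniform(0.0, side, size=(n, 2))
    pop = rng.integers(1, 11, size=n).astype(float)
    planted = rng.integers(0, 2, size=n)
    prop = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w = (1.0 / np.linalg.norm(pos[i] - pos[j]) if mode == "distance"
                 else radiation_flux(pos, pop, i, j))
            if planted[i] == planted[j]:
                w *= lam
            prop[i, j] = prop[j, i] = w
    p = np.clip(prop * density * n * (n - 1) / prop.sum(), 0.0, 1.0)
    A = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return A + A.T, planted


A, planted = make_benchmark(mode="flux")
print(A.sum() / 2, "edges;", int(planted.sum()), "nodes in community 1")
```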
in section [ section : dengue ] , we will present an example of using such a procedure to study the community structure of correlation networks that are created from time series of dengue fever cases .in this section , we assess the performance of the ng , gravity , radiation , and correlation . ] null models on multilayer correlation networks that we construct from disease incidence data that describe the spatiotemporal spread of dengue fever ( a tropical , mosquito - borne viral infection ) in peru from 1994 to 2008 .disease dynamics are strongly influenced by space , as the distance between regions affects the migration of both humans and mosquitos .disease dynamics are also affected by climate due to the temperature dependence of the mosquito life cycle , and different regions of peru have substantially different climates .therefore it is important to examine and evaluate the performance of different spatial null models when examining communities in networks that are constructed from disease data .dengue is a human viral infection that is prevalent in most tropical countries and is carried primarily by the __ mosquito .the dengue virus has four strains ( denv-1denv-4 ) .infection with one strain is usually mild or asymptomatic , and it gives immunity to that strain , but subsequent infection with another strain is usually associated with more severe disease .although dengue was considered to be nearing extinction in the 1970s , increased human mobility and mosquito abundance have led to its resurgence in many countries often as recurrent epidemics with an increasing number of cases and severity of disease .dengue is a rising threat in tropical and subtropical climates due to the introduction of new virus strains into many countries and to the rise in mosquito prevalence since the cancellation of mosquito eradication programs .it is currently the most prevalent vector - borne disease in the americas .peru is located on the pacific coast of south america .its population of about 29 million people is distributed heterogeneously throughout the country .the majority live in the western coastal plain , and there are much smaller population densities in the andes mountains in the center and the amazon jungle in the east .the climate varies from dry along the coast to tropical in the amazon and cold in the andes .such heterogeneities influence dengue transmission .for example , temperature and rain affect the life cycle of the main dengue vector _ ae .aegypti _ , and temperature affects its role in disease transmission .the jungle forms a reservoir of endemic disease ; from there , the disease occasionally spreads across the country in an epidemic .additionally , as _ ae .aegypti _ typically only travels short distances , human mobility can contribute significantly to the heterogeneous transmission patterns of dengue at all spatial scales .our dengue data set consists of 15 years of weekly measurements of the number of disease cases across 79 provinces of peru collected by the peruvian ministry of health between 1994 and 2008 .these data have previously been analyzed by chowell et al . to study the relationship between the basic reproductive number , disease attack rate , and climate and populations of provinces . 
until 1995 ,the denv-1 strain dominated peru ; it mostly caused rare and isolated outbreaks .the denv-2 strain was first observed in 19951996 , when it caused an isolated large epidemic .denv-3 and 4 entered peru in 1999 and led to a countrywide epidemic in 20002001 , and there was subsequent sustained yearly transmission .the data contains a total of 86,631 dengue cases ; most of them are in jungle and coastal provinces ( 47% and 49% , respectively ) , and only 4% of the cases occur in the mountains .the disease is present in 79 of the 195 provinces across the data set , and never in all 79 provinces at once . in this paper, we use the definition of `` epidemic '' from the us agency for international development ( usaid ) : an _ epidemic _ occurs when the disease count is two standard deviations above the baseline ( i.e. , mean ) .when stating countrywide epidemics , we apply this definition when considering all nodes . when stating local epidemics , we apply this definition to individual provinces ( though one could also consider particular sets of provinces ) . our data set consists of time series of weekly disease counts over weeks .the quantity denotes the number of disease cases in province at time .( see fig .[ figure : multislice ] for a plot of the number of cases versus time . )we create networks from this data by calculating the pearson correlation coefficient between each pair of time series .we seek to study the temporal evolution of the correlations by constructing separate networks for different time windows we either construct a set of static networks or a multislice network .to create these networks , we divide each of the time series into time windows by explicitly defining a list of the starting points for each time window and the time window width . in the present paper , we use unless we state otherwise .the starting point and window width define a time window that we use to select a portion of the disease time series .for example , for the time series of disease cases in province , the time - series portion represents the numbers of disease cases in province at times . by considering all provinces, one can use such time series either to construct a set of static networks or a multislice network . for a static network ,we define a set of nodes , where node corresponds to province .the edge weight represents the similarity between the time series and ; the kronecker delta removes self - edges .the quantity is the pearson correlation coefficient between the disease time series for provinces and .that is , where indicates averaging over the time window under consideration , and is the standard deviation of .our construction yields a fully connected ( or almost fully connected ) network with elements ] and , the algorithmically detected partitions have a relatively high z - rand score when compared to the temporal partition [ see fig . [figure : ngog](c ) ] . when looked at in detail, the partitions exhibit a mixture of spatial and temporal features .[ see fig .[ figure : ngog](a ) for an example . ]\(a ) ( b ) ( c ) when studying the qualitative features of the partitions for ] and removing self - edges .we construct networks by selecting time windows and calculating pearson correlations in the same manner as in section [ sec : netcreate ] , but the here edge weights are left as raw correlations : ( eq . 
[ equation : pearson ] ) .because of the special structure of correlation matrices , modularity using the standard ng null model assigns importance to pairs of nodes and whose pearson correlation is larger than the product of the correlations of each node with the time series of the total number of disease cases in the country over the chosen time window : , where .by contrast , the correlation null model that we adopt from ref . uses ideas from rmt to detect communities of nodes that are more connected than expected under the null hypothesis that all time series are independent of each other . for a given correlation matrix constructed from time series that each have length ( with ) , one posits based on rmt that any eigenvalues that are smaller than the eigenvalue are due to noise . here, is the maximum eigenvalue predicted for a correlation matrix that is constructed from the same number of entirely random time series .additionally , for many empirical correlation matrices , the largest eigenvalue is much larger than the others , and its corresponding eigenvector has all positive signs . in this situation , there is a common factor , which is called the `` market mode '' in financial applications , that influences all of the time series .we can thus decompose our correlation matrix as follows : , where is the `` random '' component of the matrix , is the `` market mode '' , and the `` group mode '' is embodies the meaningful correlations between time series .we write and , where and are an eigenvalue and its corresponding eigenvector , is the outer product of the two vectors ( a special case of the kronecker product for matrices ) , and is the maximum observed eigenvalue in the correlation matrix .we can construct a correlation null model either by removing both the `` random '' component of the matrix and the influence of the `` market mode '' ( i.e. , by using the null model ) or by only removing the random component ( i.e. , by using the null model ) . to satisfy the requirement to applying the rmt approach of ref . , we require . for subsequent calculations , we use for static networks , and for multilayer networks ( which have a maximum of 59 nodes per slice , as not all provinces experience disease at the same time ) , unless stated otherwise .although our maximum eigenvalue is larger than the other eigenvalues and every component of the associated eigenvector is positive , the eigenvector does not appear to affect all nodes to the same extent .the above construction thus yields a non - uniform null model for our data in practice , so we are unable to identify the analog of a market mode .we thus do not incorporate such a mode into the null models that we employ for community detection .we use the correlation null model where is the resolution parameter . for the multilayer setting , we write where and are an eigenvalue and its corresponding eigenvector for layer .we test the performance of this correlation null model on correlation networks that we construct from dengue fever time series with . 
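To make the two ingredients just described concrete, here is a compact sketch of (i) edge weights from windowed Pearson correlations between provincial time series and (ii) the RMT-style split of the resulting correlation matrix into a noise part and a "group" part built from eigenvalues above the random-matrix bound. The bound used below is the standard Marchenko-Pastur value (1 + sqrt(n/T))^2, which assumes T > n; the paper's exact threshold and any treatment of a market mode are not reproduced here.

```python
import numpy as np

def correlation_matrix(series, start, width):
    """Pearson correlations between rows of `series` over one time window.

    `series` is an (n_provinces, n_weeks) array of case counts; the
    window is series[:, start:start + width].
    """
    window = series[:, start:start + width]
    return np.corrcoef(window)              # (n, n), ones on the diagonal

def rmt_split(corr, window_width):
    """Split a correlation matrix into 'noise' and 'group' components.

    Eigenvalues at or below lambda_+ = (1 + sqrt(n/T))^2 are treated as
    noise; the remainder forms the group component used in the
    correlation null model.  Assumes window_width > n.
    """
    n = corr.shape[0]
    lam_plus = (1.0 + np.sqrt(n / window_width)) ** 2
    evals, evecs = np.linalg.eigh(corr)     # ascending eigenvalues
    noise = np.zeros_like(corr)
    group = np.zeros_like(corr)
    for lam, v in zip(evals, evecs.T):
        outer = lam * np.outer(v, v)
        if lam <= lam_plus:
            noise += outer
        else:
            group += outer
    return noise, group

# Usage sketch on synthetic data: 79 provinces, weekly counts, T = 104 > 79.
rng = np.random.default_rng(0)
cases = rng.poisson(3.0, size=(79, 780)).astype(float)
corr = correlation_matrix(cases, start=0, width=104)
c_noise, c_group = rmt_split(corr, window_width=104)
```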
in most of the static networks ,the community structures appear to be affected by spatial proximity especially for post-2000 networks , as illustrated by the high z - rand scores versus the climate partition ( particularly in 19951996 , 20002001 , 20032004 , 20052006 ) .see fig .[ figure : temporal - static - boxes](a ) .these high z - rand scores result from ( 1 ) the classification of the majority of jungle provinces into one community and ( 2 ) the existence of a community that contains many of the northern coastal provinces [ see figs . [figure : temporal - static - boxes](b , c ) ] .\(a ) ( b ) ( c ) we also perform community detection on multilayer networks using the correlation null model for \times [ 0.1,3] ] is the mutual information , and are the respective marginal probabilities of observing communities and in partitions and , and is the joint probability of observing communities and simultaneously in partitions and ) .vi is equal to if partitions and are identical , and , where is the number of nodes in the whole network . normalizing vi yields nvi , which is given by \,.\ ] ]see refs . for additional discussions . as one can see in fig .[ figure : benchmark : static - vi - gammavsnm ] , both nmi and nvi perform similarly and neither gives visibly better precision .cities , a grid size of , and a density parameter of .we examine the partitions for different values of the resolution parameter as a function of inter - community connectivity using the ( top ) ng null model , ( middle ) gravity null model , and ( bottom ) radiation null model .[ figure : benchmark : static - vi - gammavsnm ] ]we now vary the number of cities in benchmarks with a fixed size of , density parameter of , and a uniform population of people in each city . in fig .[ figure : benchmark : static - even - d - nmi - binsizevsnm ] , we plot the nmi of algorithmic partitions versus planted partitions for several values of the resolution parameter using the ng null model and both spatial null models . in combination with fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] in the main text , which has cities , we observe no qualitative changes in nmi aside from an expected increase in variability when is small . , a density parameter of , and uniform populations of for different numbers of cities in an underlying space of the same size .the number of cities is ( left ) , and ( right ) .we use the ng ( top ) , gravity ( middle ) , and ( bottom ) radiation null models . see fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] in the main text for plots with .[ figure : benchmark : static - even - d - nmi - binsizevsnm ] ]we present the results of varying the edge density parameter in static benchmarks .the edge density has a strong effect on the ability of the modularity - maximization methods to detect communities . for , we obtain smaller nmi scores than the maximum attained for each particular for larger values .( see figs .[ figure : benchmark : static - even - rovsnm ] and [ figure : benchmark : static - varypop - rovsnm ] . )we therefore focus on using a density parameter of in the main text to follow the choice that was used for the benchmarks networks in ref . . 
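The appendix passage above defines variation of information through the marginal and joint probabilities of community labels; where the excerpt elides values, the sketch below assumes the standard definition, with NVI obtained by dividing by log of the number of nodes so that 0 means identical partitions.

```python
import numpy as np
from collections import Counter

def variation_of_information(part_x, part_y):
    """VI(X, Y) = H(X|Y) + H(Y|X) for two hard partitions."""
    n = len(part_x)
    p_x = {c: m / n for c, m in Counter(part_x).items()}
    p_y = {c: m / n for c, m in Counter(part_y).items()}
    p_xy = {pair: m / n for pair, m in Counter(zip(part_x, part_y)).items()}
    vi = 0.0
    for (cx, cy), p in p_xy.items():
        vi -= p * (np.log(p / p_x[cx]) + np.log(p / p_y[cy]))
    return vi

def normalized_vi(part_x, part_y):
    """NVI in [0, 1]; 0 for identical partitions, 1 for maximally different."""
    return variation_of_information(part_x, part_y) / np.log(len(part_x))

print(normalized_vi([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 0.0
print(normalized_vi([0, 1, 2, 3], [0, 0, 0, 0]))  # -> 1.0
```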
, and planted partitions for uniform population static spatial benchmarks with , a size parameter of , uniform city populations of , and several values of inter - community connectivity .we plot the nmi scores as a function of the edge density parameter for ( left ) the distance benchmark and ( right ) the flux benchmark .[ figure : benchmark : static - even - rovsnm ] ] , and planted partitions for random population static spatial benchmarks with , a size parameter of , city populations selected uniformly at random from $ ] , and several values of inter - community connectivity .we plot the nmi scores as a function of the edge density parameter for ( left ) the distance benchmark and ( right ) the flux benchmark .[ figure : benchmark : static - varypop - rovsnm ] ]in this section , we construct a `` distance and population '' spatial benchmark . in fig . [ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] in the main text we observed that the gravity null model performs best on the uniform population distance benchmark , but the ng null model performs better than spatial null models on the random population distance benchmark because the edge placement in that benchmark does not include population information . here , we study the effects of incorporating population into edge probabilities for the `` distance and population '' benchmark .we construct the new type of benchmark network in the same manner as the distance benchmark in section [ section : benchmarks ] , but we now incorporate population into the edge - placement probability by taking .as expected , this brings back the advantage that the gravity null model has for the uniform population distance benchmark ( compare fig .[ figure : benchmark : distpop ] with fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] in the main text ) .the radiation null model has the second - best performance on this benchmark , with a better performance than on the random population distance benchmark .however , it does not do as well as it did on the random population flux benchmark ( see fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] ) . , , , , and for various values of ( colored curves ) as a function of .we detected communities by optimizing modularity using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models [ figure : benchmark : distpop ] ]we now study the influence of the parameters and on the community structure for random - population multilayer temporally - stable benchmarks .we first compare the results to our findings from static benchmarks by varying and for fixed values of .we study the performance of modularity maximization with the ng , gravity , and radiation null models on random population benchmarks ( see fig .[ figure : benchmark : multi - gammavsnm ] ) with parameter values of , , and layers using and .we only show plots for , as the values of do not noticeably influence the results .we obtain results that are similar to our results for the corresponding static benchmarks inn fig .[ figure : benchmark : static - even - d - nmi - gammavsnm - tab ] .once again , we find that the choice of has a large influence on the quality of algorithmic partitions , and ( as with our findings for static benchmarks ) that seems to yield the best performance ( i.e. 
, the largest nmi scores ) for low values of , whereas larger values of perform better for larger ( see fig .[ figure : benchmark : multi - rand - gammavsnm ] ) .the effect of varying is most pronounced for the radiation null model on flux benchmarks .cities uniformly at random from the set .we consider various values of the resolution parameter , and the other parameter values are , , , and .we plot nmi as a function of for ( left ) the distance benchmark and ( right ) the flux benchmark using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : benchmark : multi - rand - gammavsnm ] ] we now examine the nmi of algorithmic versus planted partitions in temporally - stable multilayer benchmarks for fixed while varying and . as we show in fig .[ figure : benchmark : multi - rand - omegavsnm ] , we find that the value of usually has very little effect on our ability to detect the planted communities via modularity maximization the same as for the uniform population temporally stable multilayer benchmarks ( see fig .[ figure : benchmark : multi - omegavsnm ] ) .the parameter becomes important for the random - population , temporally - evolving multilayer benchmarks in the same manner as what we observed in the main text for uniform population benchmark networks ( not shown ; see fig .[ figure : benchmark : multi - t - gammavsnm ] in the main text for the uniform population results ) .cities uniformly at random from the set .we consider various values of the parameter , and the other parameter values are , , , and .we plot nmi as a function of for ( left ) the distance benchmark and ( right ) the flux benchmark using the ( top ) ng , ( middle ) gravity , and ( bottom ) radiation null models .[ figure : benchmark : multi - rand - omegavsnm ] ]in fig . [ figure : benchmark : multi - nodelevel - u - gammavsnm ] , we present our results for province - level community detection on uniform population temporally stable multilayer benchmarks . as one can see by comparing these results to those in fig . [figure : benchmark : multi - gammavsnm ] , we obtain similar nmi scores for the performance of community detection for province - level communities as we did for the ordinary community detection in multilayer networks . for all ) temporally stable multilayer spatial benchmarks with layers .each layer has a single - layer planted partition with cities , a size parameter of , and a density parameter of .we use and consider various values of the resolution parameter , and we plot nmi as a function of the inter - community connectivity for ( left ) the distance benchmark and ( right ) the flux benchmark .[ figure : benchmark : multi - nodelevel - u - gammavsnm ] ]in fig . [ figure : dengue - fullyaggregated ] , we show additional results of community detection on fully aggregated networks ( i.e. , we use and ) from the dengue fever times series . in section [ dengue - region - ng ] of the main text , in fig . [figure : dengue - regionlevel - map](a ) we showed the results of modularity maximization using the ng null model .we now also show similar results for the gravity , radiation , and correlation null models .the gravity , radiation , and correlation null models find one large community and a few small communities . because of the aggregation, we have lost the rich set of information that we were able to study using multilayer community detection .
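The radiation null model compared throughout these experiments is built from an expected flux between populated locations. As a point of reference, the sketch below computes the standard radiation-model flux from the human-mobility literature; using this matrix directly as the null-model term P_ij in spatial modularity (possibly symmetrized and rescaled) is our assumption about the construction, not a reproduction of the paper's exact normalization.

```python
import numpy as np

def radiation_flux(populations, coords, outflow_per_capita=1.0):
    """Expected flux matrix under the standard radiation model.

    T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    where s_ij is the total population inside the circle of radius
    d_ij around i, excluding both i and j, and the total outflow
    T_i is taken proportional to the population m_i (an assumption).
    """
    pop = np.asarray(populations, dtype=float)
    xy = np.asarray(coords, dtype=float)
    n = len(pop)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    flux = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            inside = dist[i] < dist[i, j]          # strictly closer than j
            s_ij = pop[inside].sum() - pop[i]      # exclude the source itself
            m_i, n_j = pop[i], pop[j]
            t_i = outflow_per_capita * m_i
            flux[i, j] = t_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))
    return flux
```

In a modularity calculation one would typically symmetrize this matrix and rescale it so that its total weight matches the total edge weight of the observed network before comparing it with the adjacency matrix.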
in the study of networks , it is often insightful to use algorithms to determine mesoscale features such as `` community structure '' , in which densely connected sets of nodes constitute `` communities '' that have sparse connections to other communities . the most popular way of detecting communities algorithmically is to optimize the quality function known as modularity . when optimizing modularity , one compares the actual connections in a ( static or time - dependent ) network to the connections obtained from a random - graph ensemble that acts as a null model . the communities are then the sets of nodes that are connected to each other densely relative to what is expected from the null model . clearly , the process of community detection depends fundamentally on the choice of null model , so it is important to develop and analyze novel null models that take into account appropriate features of the system under study . in this paper , we investigate the effects of using null models that incorporate spatial information , and we propose a novel null model based on the radiation model of population spread . we also develop novel synthetic spatial benchmark networks in which the connections between entities are based on distance or flux between nodes , and we compare the performance of both static and time - dependent radiation null model to the standard ( `` newman - girvan '' ) null model for modularity optimization and a recently - proposed gravity null model . in our comparisons , we use both the above synthetic benchmarks and time - dependent correlation networks that we construct using countrywide dengue fever incidence data for peru . we also evaluate a recently - proposed correlation null model , which was developed specifically for correlation networks that are constructed from time series , on the epidemic - correlation data . our findings underscore the need to use appropriate generative models for the development of spatial null models for community detection . community detection , spatial null model + 87.19.xx,89.20.-a,89.75.fb,05.45.tp
the ongoing search for the direct detection of gravitational waves using earth - based experiments involves the analysis of observations made at a variety of different detectors . these observations are time series samples of the detector state that are then processed by various means to identify gravitational wave candidates . broadly speaking , the searches for gravitational waves may be broken up into two categories : those searches that are based upon a model , such as the search for gravitational waves from inspiraling binaries , and those searches that endeavor to find gravitational waves without using a model . searches in the latter category are often referred to as `` burst '' searches . these searches typically seek to identify portions of data that are , for a short period , anomalously `` loud '' in comparison to the surrounding data . identifying a gravitational wave burst in the absence of a source model is an involved and potentially computationally expensive process . this is especially true when the ratio of signal power to noise power is low . a convenient and natural approach to mitigating the computational expense of identifying such bursts divides the problem of detection into two parts . in the first part , an inexpensive procedure is used to identify candidate sections of data that trigger the second part of the analysis . the second part of the analysis focuses on the subintervals of data identified by the first . it is a more computationally complex and expensive analysis that either discards the candidate or identifies it as a gravitational wave burst . in this way the first part of the analysis , carried out by a so - called `` event - trigger generator '' , performs triage on the data that must be analyzed by the more complex second stage of the analysis . blocknormal is an event trigger generator that analyzes data in the time domain and searches for moments in time where the statistical character of the time series data changes . in particular , blocknormal characterizes the time series between change - points by the mean( ) and variance( ) of the samples . change - points are thus demarcation points , separating `` blocks '' of data that are consistent with a distribution having a given mean and variance , which differs from the mean and/or variance that best characterizes the data in an adjacent block . the onset of a signal in the data will , because it is uncorrelated with the detector noise , increase the variance of the time series for as long as the signal is present with significant power . in this way blocks with variance greater than a `` background '' variance mark candidate gravitational wave bursts . since candidate gravitational wave bursts are identified with changes in the detector noise character , it is best if the detector noise is itself stationary and white . the blocknormal analysis thus starts by identifying long segments of data , epochs , which are relatively stationary . this process involves comparing adjacent stretches of data of a fixed duration that is long relative to the expected duration of a gravitational wave burst and asking whether adjacent stretches have consistent means and variances .
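A minimal sketch of this segmentation step may help: adjacent stretches are compared through trimmed sample means and variances (the percentile trimming is detailed immediately below), and a new epoch boundary is declared when the two stretches disagree. The particular tests used here (Welch's t-test for the means and an F ratio for the variances) and the 95% level are our assumptions; the excerpt does not specify the paper's actual consistency test.

```python
import numpy as np
from scipy import stats

def trimmed(x, lo=2.5, hi=97.5):
    """Keep only samples between the `lo` and `hi` percentiles."""
    a, b = np.percentile(x, [lo, hi])
    return x[(x >= a) & (x <= b)]

def stretches_consistent(x1, x2, alpha=0.05):
    """Hypothetical consistency test between two adjacent data stretches."""
    t1, t2 = trimmed(np.asarray(x1, float)), trimmed(np.asarray(x2, float))
    # Means: Welch's t-test.
    _, p_mean = stats.ttest_ind(t1, t2, equal_var=False)
    # Variances: two-sided F test on the variance ratio.
    f = np.var(t1, ddof=1) / np.var(t2, ddof=1)
    p_var = 2 * min(stats.f.cdf(f, len(t1) - 1, len(t2) - 1),
                    stats.f.sf(f, len(t1) - 1, len(t2) - 1))
    return p_mean > alpha and p_var > alpha
```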
in order to avoid any bias that might come from analyzing outliers in the tails of the distribution ( where one might expect any true signal to be located ) , the mean and varianceare computed on only those samples that are within the 2.5th and 97.5th percentiles of the sample values .if two consecutive stretches are inconsistent at the confidence level , the begining of the first stretch is used to define a new stationary segment .segments thus defined are split into a set of frequency bands whose lower band edge is heterodyned to zero frequency .this base - banding allows for a crude determination of the frequency of any identified triggers .line and other spectral features are removed , either by kalman filtering or by regression against diagnostic channels , and the final data whitened with a linear filter .once the data has undergone the base - banding and whitening process , the search for change - points begins in earnest .the method employed is similar to that described in and relies on a bayesian analysis of the relative probability of two different hypotheses : * , that the time series segment is drawn from a distribution characterized by a single mean and variance ; and * , that the time series segment consists of two continuous and adjacent subsegments each drawn from a distribution characterized by a different mean and/or variance . given the time series segment , consisting of samples , we write the probability of hypothesis as and the probability of hypothesis as .the odds of compared to is thus : applying bayes theorem and simplifying this becomes : where , is independent of the data itself , although it does depend on the number of samples .if is true , then will be a good hypothesis for two subsets of the data , one from sample 1 to , denoted and another from to , . then we can write : to calculate we thus need only be able to calculate for arbitrary time series the probability that a given data set is drawn from a normal distribution with unknown mean and variance is equal to : -\mu)^{2}}{2\sigma^{2}}}\end{aligned}\ ] ] where is the a priori probability that the mean takes on a value and the variance a value . with the usual uninformative priors for and ( and )the integral for can be evaluated in closed form : ^{-\frac{(n-1)}{2}}i_{n-2 } \\i_{n } \equiv ( n-1 ) ! ! \left\{\begin{array}{ll}1& \mbox{ odd } \\ \sqrt{\frac{\pi}{2 } } & \mbox{ even } \end{array } \right.\end{aligned}\ ] ] where an .the value of is therefore the odds that there is a change - point in to there not being any change - points , and the value is related to the odds that there is a change - point at sample to there not being one anywhere in .the calculated value of is compared to a threshold , , and if greater than this threshold , a change - point is considered to be at the sample with the largest value of .figure [ signal ] shows some simulated data along with for that data .the two peaks in the value of corresponds to where the mean of the simulated noise changes .[ signal ] as a function of the hypothetical change point time.,title="fig : " ] this process either leaves free of change - points , or it divides the data into two subsets . in the latter case , the blocknormal pipeline repeats the change - point analysis on these subsets and all subsequent subsets , until either no more change - points are found , or the subset is less than 4 data points long . 
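The marginal likelihood above (a normal model with unknown mean and variance integrated against uninformative priors) has a closed form that depends on the data only through the sample size and the sum of squared deviations from the sample mean. The sketch below works in log space and assumes flat priors on the mean and on log(sigma), one common reading of the elided priors; the double-factorial constant quoted in the text is absorbed into the gamma function, and the data-independent prior factor is left as a user-supplied term.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(x):
    """log p(x) for a normal model with unknown mean and variance,
    integrated over flat priors on the mean and on log(sigma).
    Requires len(x) >= 2 and non-constant x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.sum((x - x.mean()) ** 2)
    return (-(n - 1) / 2 * np.log(2 * np.pi) - 0.5 * np.log(n) - np.log(2)
            + (n - 1) / 2 * (np.log(2) - np.log(s))
            + gammaln((n - 1) / 2))

def change_point_log_odds(x, log_prior_ratio=0.0):
    """Log odds of a change point at each candidate split, plus the best split.

    Compares the two-segment hypothesis (split at sample j) with the
    single-segment hypothesis; `log_prior_ratio` stands in for the
    data-independent prior factor mentioned in the text.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    base = log_marginal(x)
    log_rho = np.full(n, -np.inf)
    for j in range(2, n - 1):               # keep at least 2 samples per side
        log_rho[j] = (log_marginal(x[:j]) + log_marginal(x[j:])
                      - base + log_prior_ratio)
    return log_rho, int(np.argmax(log_rho))

# Example: a jump in variance halfway through the series.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 4, 200)])
log_rho, j_star = change_point_log_odds(data)
print(j_star)   # expected to lie near sample 200
```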
by this iteration methodblocknormal breaks the data at these change - points into a set of blocks , each of which is free of any change point .a final refinement step is taken where successive pairs of blocks are analyzed to check that the change - point would still be considered significant over the subset of the data that is contained within the two blocks .blocknormal identifies blocks time series segments between successive change - points are well characterized by a mean , a variance , a frequency band , a start time , and a duration . to identify unusual blocks , the means and variancesare compared to the mean and variance , and of all the data in their band from the epoch in which they were found .blocknormal defines `` events '' to be blocks in which the following condition holds true for a value , , that is used to characterize the block : here , , is called the event threshold and is a free parameter in the algorithm which adjusts the sensitivity in defining what `` unusual '' means .there are a limited number of reasonable other possible threshold requirements based on the three characteristics of a block , ,, , however , at this time these have not been explored .once events have been identified , immediately adjacent events in the same frequency band are clustered together , with a peak - time for the cluster defined by the central time of the block with the largest value of .a cluster `` energy '' is a sum over the blocks that comprise it : \end{aligned}\ ] ] where is the duration in samples .a key factor in building confidence in any identification of gravitational waves is the presence of a signal in different detectors . using this `` coincidence '' as the basis for further reducing the number of periods of interest , blocknormal requires that there be coincidence in time between events in the same band but different detectors before a `` trigger ''is formed . triggers from different bandsare merged if they overlap in time into a single trigger .figure [ coinc ] illustrates how this coincidence and merging step works using the three ligo interferometers .the blocknormal event trigger generator is a time domain analysis in base - banded data that identifies blocks in time which are well characterized by a mean and variance .blocks with means and variances that are unusually large compared to the mean and variance of the much longer data segment containing them are marked for consideration as being unusual events .several coincident events in different detectors together form a trigger which can be used to define periods of interest that a more computationally intense analysis can use to reduce the total computing time needed to search for unmodeled gravitational wave events .this work was supported by the center for gravitational wave physics , the international virtual data grid laboratory , and the national science foundation under award phy 00 - 99559 .the international virtual data grid laboratory is supported by the national science foundation under cooperative agreement phy-0122557 ; the center for gravitational wave physics is supported by the national science foundation under cooperative agreement phy 01- 14375 .1 smith a f m 1975 _ biometrika _ * 62*,2 p. 407416
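To make the trigger-building stage concrete, here is one way the blocks could be turned into events, clusters, and coincidences. The block statistic used for thresholding is elided in the text above, so `block_statistic` below is purely a placeholder (variance plus squared mean offset, in units of the background variance); the merging of immediately adjacent events and the pairwise time coincidence follow the description, with the coincidence window left as a free parameter.

```python
from dataclasses import dataclass

@dataclass
class Block:
    start: float      # start time (s)
    duration: float   # duration (s)
    mean: float
    var: float

def block_statistic(block, bg_mean, bg_var):
    """Placeholder statistic; the paper's definition is not given here."""
    return ((block.mean - bg_mean) ** 2 + block.var) / bg_var

def select_events(blocks, bg_mean, bg_var, event_threshold):
    """Keep blocks whose statistic exceeds the event threshold."""
    return [b for b in blocks
            if block_statistic(b, bg_mean, bg_var) > event_threshold]

def cluster_adjacent(events):
    """Merge immediately adjacent events (same band) into clusters."""
    clusters, current = [], []
    for b in sorted(events, key=lambda blk: blk.start):
        if current and abs(b.start - (current[-1].start + current[-1].duration)) > 1e-9:
            clusters.append(current)
            current = []
        current.append(b)
    if current:
        clusters.append(current)
    return clusters

def coincident(cluster_a, cluster_b, window=0.01):
    """Time coincidence between clusters from two different detectors."""
    t_a = min(b.start for b in cluster_a)
    e_a = max(b.start + b.duration for b in cluster_a)
    t_b = min(b.start for b in cluster_b)
    e_b = max(b.start + b.duration for b in cluster_b)
    return t_a <= e_b + window and t_b <= e_a + window
```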
in the search for unmodeled gravitational wave bursts , there are a variety of methods that have been proposed to generate candidate events from time series data . block normal is a method of identifying candidate events by searching for places in the data stream where the characteristic statistics of the data change . these change - points divide the data into blocks in which the characteristics of the block are stationary . blocks in which these characteristics are inconsistent with the long term characteristic statistics are marked as event - triggers which can then be investigated by a more computationally demanding multi - detector analysis .
three characteristics are computed for each institution ( fig . [ fig2 ] ) : the institution size ( ) , representing the total number of distinct authors that published at least one paper at institution ; the number of papers ( ) published under affiliation ; the cumulative number of citations collected by all papers .we find that follows a fat tailed distribution , indicating significant population heterogeneity among different institutions ( fig .[ fig2]a ) . while most institutions are small , a few have a large number of scientists , often corresponding to large institutes or universities .we observe similar disparity in ( fig .[ fig2]b ) : few institutions acquire a large number of citations , while most research labs or universities receive few citations .figures [ fig2]c - d show the correlation between the institution size and both the average publications impact and the average productivity of institutions .the average productivity and impact of an institution are different but complementary measures of scientific performance .we find the institution size has little influence on productivity ( ) ( fig .[ fig2]d ) , yet it positively correlates with the impact of publications ( ) , indicating that large institutions offer a more innovative / higher impact environment than smaller ones as captured by citations per paper ( fig .[ fig2]c ) .also , as larger institutions have more internal collaborations , the number of co - authors in publications from large institutions might be larger and , as a consequence , attracts more citations .many institutions are small with few citations , hence they account for very small portion of the data . for the rest of the paper , we will focus on the thousand most cited institutions , accounting for more than of papers .they correspond to institutions with at least 698 citations within the aps data over the 120-year period ( shaded area in fig .[ fig2 ] ) .mobility is often important in furthering a professional career . in science ,the best lab for the type of research you are doing is usually not where you are .nowadays changing countries is a rite of passage for many young researchers who follow the resources and facilities .as the patterns and characteristics of these migrations are blurry , we need to systematically study the mobility of scientists .thanks to the large disambiguated data spanning the last 120 years that we have compiled , a systematic study of scientific mobility is now possible . the strong correlations between the three quantities ( ) indicate any of the three could characterise an institution , serving as a proxy of its ranking against others . 
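Given author-paper-affiliation records with citation counts, the three institution-level quantities described above (distinct authors, papers, cumulative citations) reduce to simple aggregations. The record layout and field names below are assumptions made for illustration.

```python
from collections import defaultdict

def institution_stats(records):
    """Aggregate per-institution statistics.

    `records` is an iterable of dicts with hypothetical keys
    'author', 'paper', 'institution', 'citations' (the paper's count),
    one entry per author-paper-affiliation link.
    Returns {institution: (n_authors, n_papers, n_citations)}.
    """
    authors = defaultdict(set)
    papers = defaultdict(set)
    citations = defaultdict(int)
    for r in records:
        inst = r['institution']
        authors[inst].add(r['author'])
        if r['paper'] not in papers[inst]:       # count each paper once per institution
            papers[inst].add(r['paper'])
            citations[inst] += r['citations']
    return {inst: (len(authors[inst]), len(papers[inst]), citations[inst])
            for inst in authors}

def top_institutions(stats, min_citations=698):
    """Institutions retained for the analysis (at least 698 citations)."""
    return [inst for inst, (_, _, c) in stats.items() if c >= min_citations]
```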
here , we choose ( the total number of citations ) as our parameter to approximate the ranking by reputation .other parameters such as the h - index of an institution or the number of papers could also be used .but the results should be insensitive to this choice owing to good correlations between these quantities ( and respectively ) .the top - ranked institutions all correspond to well - known universities or research labs with long tradition of excellence in physics ( fig .[ fig3 ] ) , corroborating our hypothesis that is a reasonable proxy for ranking .we can also observe the similarity and stability of other rankings when comparing with other metrics .is a reasonable proxy for ranking , scaledwidth=45.0% ] we focus on authors with similar career longevity , restricting our corpus to those who began their career between 1950 and 1980 and published for at least 20 years without any interruption exceeding 5 years . following these criteria , we arrived at a subset of 2,725 scientists to study the mobility patterns and their impact on their careers .a total of 5,915 career movements are recorded for this corpus . in figure[ fig4]a we select three individuals as exemplary career histories .each line represents one individual , with circles denoting his / her publications , allowing us to observe his / her location .the size of the circle is proportional to citations the paper acquires in five years , approximating the impact of the work . by studying the whole corpus, we compute , the probability for a scientist to have visited different institutions along his career ( fig .[ fig4]c ) , finding that career movements are common but infrequent : only of them never moved at all ( ) .for the ones that move , they mostly move once or twice , decaying quickly as increases .we also compute , the probability to observe a movement at time , where corresponds to the date of the scientist s first publication .we find that most movements occurred in the early stage of the career ( fig .[ fig4]b ) , supporting the hypothesis that changing affiliations is a rite of passage for young researchers .this likely corresponds to the postdoc period where graduates broaden their horizons through mobility .this may also reflect the increasing cost of relocation and family constraints as family developed .a third characteristic is the geographical distance of movements , .existing literature hints for somewhat competing hypothesis in the role geography plays in career movements .indeed , research on human mobility suggests that regular human movements mostly cover short distances with occasional longer trips , characterized by a power law distance distribution ; in contrast , country - level surveys find increasing cross - country movements mostly due to cultural exposure and life quality concerns , indicating potential dominance in long distance moves in career choices comparing with typical human travels .we measure the distance distribution over all moves observed in our dataset , finding that our result is supported by a combination of both hypothesis .we find the probability to move to further locations decays as a power law , whereas the null model predicts this probability to be flat ( fig .[ fig4]d ) .this observation is consistent with studies on human mobility , that short distance moves dominate career choices .yet , when comparing the power law exponents , we find the exponent characterizing career moves ( ) is much smaller than those observed in human travel ( ) , corresponding to higher likelihood of 
observing long range movements .this observation might be explained by the influence that scientific collaborations can have on career movements as similar low exponents are observed for collaboration network between cities . taken together, the preceding results indicate that career moves mostly happen during the early stage of a career and are more likely to cover short distances .the observed location in both time and space raises the question of how individual moves as a function of institutional rankings . to this end , denoting with the number of transitions from the institution of rank to the one of rank , we measure , the probability to have a transition from rank to rank as interestingly , we find that most movements involve elite institutions ( rank is small ) , and transitions between bottom institutions are rare ( fig .[ fig5]a ) .this is due to the fact that elite institutions are characterised by larger populations , hence translating into more events . to account for the population based heterogeneity , we compare the observed with the probability expected in a random model where we randomly shuffle the transitions from institution to while preserving the total number of transitions from and to each institution . formally , in this null model , we have and we compare with the null model by computing the matrix is the ratio between the probability to have a transition from rank to divided by the probability when the movements are shuffled , measuring the likelihood for a move to take place by accounting for the size of the institutions .hence , indicates the amount of observed movements is about what one would expect if movements were random .similarly , indicates that we observe more transitions from to than we expected , whereas corresponds to transitions that are underrepresented .we find that career moves are characterized by a high degree of stratification in institutional rankings ( fig .[ fig5]b ) .indeed , we observe two distinct clubs ( red spots in fig . [ fig5]b ) , indicating that the overrepresented movements are the ones within elite institutions ( lower - left corner ) or within lower - rank institutions ( upper - right corner ) , and scientists belonging to one of the two groups tend to move to institutions within the same group . on the other hand , both upper - left and lower - right corners are colored blue , indicating cross group movements ( transitions from elite to lower - rank institutions and vice - versa ) are significantly underrepresented .also , scientists from medium - ranked institutions move to the next institution with a probability that is indistinguishable from the random case . in other words ,their movements indicate no bias towards middle , elite or lower - ranked institutions .the high intensity of stratification in career movements raises an interesting question : how does individual performance in science relate to their moves across different institutional rankings ? to answer this question , we need to quantify the performance change for each individual before and after the move .imagine that a scientist moves from to , and published papers at location and papers at .the impact of a paper can be approximated by , the number of citations cumulated within 5 years after its publication .let and be the lists of number of citations for papers published before ( ) and after ( ) the transition from to ( ) . 
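A sketch of the rank-to-rank transition analysis just described: empirical transition counts between (binned) ranks, the expectation obtained when transitions are shuffled while preserving each bin's totals of outgoing and incoming moves, and the ratio between the two. Working directly at the level of rank bins, and taking the shuffled expectation as out_i * in_j / total, is our reading of the null model; values above 1 mark over-represented transitions and values below 1 under-represented ones.

```python
import numpy as np

def transition_ratio(moves, n_bins, rank_bin):
    """Ratio of observed to shuffled-expected transition counts.

    `moves` is a list of (origin_institution, destination_institution)
    pairs, and `rank_bin[inst]` maps an institution to its rank bin
    (0 = most-cited bin).  Entries where the expectation vanishes are nan.
    """
    obs = np.zeros((n_bins, n_bins))
    for origin, dest in moves:
        obs[rank_bin[origin], rank_bin[dest]] += 1
    out = obs.sum(axis=1, keepdims=True)     # moves leaving each bin
    inc = obs.sum(axis=0, keepdims=True)     # moves entering each bin
    expected = out * inc / obs.sum()
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(expected > 0, obs / expected, np.nan)
    return ratio
```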
to quantify the change in performance , we introduce where and are the average of and , respectively , and corresponds to the standard deviation of the concatenation of both and while preserving the moment when the movement took place ( see sm for more information about ) .therefore , captures the statistical difference in the average citations between papers published before and after the movement normalized by the random expectation when the same author s publications were shuffled .a positive indicates papers following the move on average result in higher citation impact , hence representing an improvement in scientific performance .a negative value corresponds to a decline in performance . to quantify the influence of movements on individual performance , we divide all movements into two categories based on the performance change : movements associated with positive and negative , and measure and .we find the observed stratification in career moves is robust against individual performance ( fig .[ fig5]c - d ) .that is , the two clubs emerge for both categories in a similar fashion as in figure [ fig5]b , indicating the pattern of moving within elite or lower - rank institutions is nearly universal for people whose performance is improved or decreased following the move .comparing figure [ fig5]c and figure [ fig5]d , we find the red spot in lower - left corner is more concentrated in figure [ fig5]d than in figure [ fig5]c , hinting that being more mobile in the space of rankings may lead to variable performance . to test this hypothesis ,for each transition we calculate the rank difference between the origin and destination ( ) .a positive value of indicates , hence a movement to a lower - rank institution , whereas corresponds to transitions into institutions with a higher rank . in figure [ fig6 ]we measure the relation between and .when scientists move to institutions with a lower rank ( ) , we find that their average change in performance is negative , corresponding to a decline in the impact of their work .yet , what is particularly interesting lies in the regime . indeed ,when people move from lower rank location to elite institutions , we observe no performance change on average .this is rather unexpected , as transitioning from lower - rank institutions to elite institutions is thought to provide better access to ideas and lab resources , which in turn should fuel scientific productivity .a possible explanation may be that scientist who have the opportunity to make big jumps in the ranking space may have already had an excellent performance in their previous institutions . a movetherefore will not affect their impact . ) and the ranking difference ( ) associated to a transition shows that , when people move to institutions with a lower rank ( ) , their average change in performance is negative , corresponding to a decline in the impact of their work .yet , what is particularly interesting lies in the regime .indeed , when people move from lower rank location to elite institutions , we observe no performance change on average . ]in summary , we extracted affiliation information from the publications of each scientist , allowing us to reconstruct their career moves between different institutions as well as the body of work published at each location .we find career movements are common yet infrequent .most people move only once or twice , and usually in the early stage of their career .career movements are affected by geography . 
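The performance-change measure compares mean citation impact before and after a move and normalizes by the spread obtained when the author's own papers are shuffled while the move is kept at the same position in the publication sequence. A minimal sketch of that normalization; the number of shuffles is arbitrary, and the elided symbols in the definition above are assumed to follow this construction.

```python
import numpy as np

def performance_change(cites_before, cites_after, n_shuffles=1000, seed=0):
    """Normalized change in mean 5-year citations across a career move.

    Positive values indicate higher average impact after the move.  The
    denominator is the standard deviation of the same mean difference
    computed on shuffled concatenations of the two lists, preserving the
    split point (the moment of the move).
    """
    before = np.asarray(cites_before, dtype=float)
    after = np.asarray(cites_after, dtype=float)
    observed = after.mean() - before.mean()
    pooled = np.concatenate([before, after])
    k = len(before)
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_shuffles)
    for i in range(n_shuffles):
        rng.shuffle(pooled)
        diffs[i] = pooled[k:].mean() - pooled[:k].mean()
    sigma = diffs.std()
    return observed / sigma if sigma > 0 else 0.0
```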
the distance covered by the movecan be approximated with a power law distribution , indicating that most movements are local and moving to faraway locations is less probable .we also observe a high degree of stratification in career movements .people from elite institutions are more likely to move to other elite institutions , whereas people from lower rank institutions are more likely to move to places with similar ranks .we further confirm that the observed stratification is robust against the change in individual performance before and after the move .when cross - group movement occurs , we find that while going from elite to lower - rank institutions on average results in a modest decrease in scientific impact , transitioning into elite institutions , does not result in gain in impact .the nature of our dataset restricted our study on a sample of scientists . as a result of this selection process, our results are biased towards physicists from 1960s to 1980s with high career longevity . yet, these limitations also suggest new avenues for further investigations .indeed , as datasets become more comprehensive and of higher resolution , newly available data sources like web of science or google scholar can provide new and deeper insights towards generalization of the results across different disciplines , temporal trends , and more .further investigations regarding the influence of career longevity on scientific mobility should also be considered as it could reveal as well results of importance .taken together our results offer the first systematic empirical evidence on how career moves affect scientific performance and impact .* dataset . *the data provided by the american physical society ( aps ) contains over publications , each identified with a unique number , corresponding to all papers published in 9 different journals , namely physical review a , b , c , d , e , i , l , st and review of modern physics , spanning a period of 117 years from 1893 to 2010 . for each paperthe dataset includes title , date of publication ( day , month , year ) , author names and affiliations of each of the authors .a separate dataset also provides list of citations within the aps data only , using unique paper identifiers .about 5% of publications with ambiguous author - affiliation links or massively authored were removed from this dataset ( see sm for more details ) . *author name disambiguation .* to derive individual information , one has to reconnect papers belonging to a single scientist . since no unique author identifier is present in the data , author names must be disambiguated .the dataset contains about millions of author - paper pairs . to overcome the ambiguities present in the data , we design a procedure that uses information about the author but also metadata about the paper such as coauthors and citations . by computing similarities between authors ,our procedure can successfully detect single authors as well as homonymies ( see sm for more details about the disambiguation method ) .a total of distinct scientists are detected by our method .* affiliation disambiguation . 
*a major disadvantage when dealing with publication data is the inconsistencies and errors associated with affiliation names on papers .a total of different affiliation names are identified in the dataset .the disambiguation procedure for affiliations uses geocoded information as well as a similarity measure between affiliation names in order to disambiguate institutions .the disambiguated set of authors also plays a crucial role in the procedure ( see sm for more details about the disambiguation method ) .a total of distinct institutions are identified by our algorithm . * resolving individual career trajectory * based on the information present in the publications of a scientist , we can reconstruct his / her career trajectory . in order to detect career movements , i.e. changes in a scientist s institution, one has to remove artificial movements induced by short - term stays and by errors and typos in the affiliation names on the papers .to do so , only institutions reported in at least two consecutive papers are considered in a career trajectory .* ranking the institutions * three variables are considered to rank an institution : ( i ) the total number of papers , , published with institution , ( ii ) the cumulated number of citations , , corresponding to institution , ( iii ) the h - index , , of institution . the variable is defined as where is the number of citations within the aps data of paper cumulated within years after its publications .an institution has an h - index if of its papers have at least citations each , and the other papers have no more than citations each . for papers indicates the cumulative number of citations obtained within years after the publication .* binning the institutions . * about transitions between institutions are detected for our subset of scientists . in order to have a statistically significant number of transitions to derive the values of and ( fig .[ fig5 ] ) , institutions are binned logarithmically according to their rank ( ) into five groups .we thank nicolas boumal and colleagues from the center for complex network research ( ccnr ) for the valuable discussions and comments .dw , cs , and alb are supported by lockheed martin corporation ( sra 11.18.11 ) , the network science collaborative technology alliance is sponsored by the u.s .army research laboratory under agreement w911nf-09 - 2 - 0053 , defense advanced research projects agency under agreement 11645021 , and the future and emerging technologies project 317 532 `` multiplex '' financed by the european commission .pd is supported by the national fund for scientific research ( fnrs ) and by the research department of the communaut franaise de belgique ( _ large graph _ concerted research action ) .rs acknowledges support from the james s. mcdonnell foundation .pd designed research , analysed the data and wrote the paper .dw , rs , cs , vb and alb , designed research and wrote the paper .* competing financial interests : * the authors declare no competing financial interests .
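Two of the methodological steps above lend themselves to short sketches: the institutional h-index used for ranking, and the rule that an institution must appear on at least two consecutive papers before it enters a career trajectory (to filter short stays and affiliation typos). The input formats are assumptions.

```python
def h_index(citation_counts):
    """Largest h such that h items have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def career_trajectory(affiliations):
    """Collapse a chronological list of per-paper affiliations into a
    trajectory, keeping only institutions reported on at least two
    consecutive papers."""
    trajectory = []
    for i in range(len(affiliations) - 1):
        if affiliations[i] == affiliations[i + 1]:
            if not trajectory or trajectory[-1] != affiliations[i]:
                trajectory.append(affiliations[i])
    return trajectory

print(h_index([10, 8, 5, 4, 3]))                              # -> 4
print(career_trajectory(['A', 'A', 'B', 'B', 'C', 'B', 'B'])) # -> ['A', 'B']
```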
changing institutions is an integral part of an academic life . yet little is known about the mobility patterns of scientists at an institutional level and how these career choices affect scientific outcomes . here , we examine over 420,000 papers , to track the affiliation information of individual scientists , allowing us to reconstruct their career trajectories over decades . we find that career movements are not only temporally and spatially localized , but also characterized by a high degree of stratification in institutional ranking . when cross - group movement occurs , we find that while going from elite to lower - rank institutions on average associates with modest decrease in scientific performance , transitioning into elite institutions does not result in subsequent performance gain . these results offer empirical evidence on institutional level career choices and movements and have potential implications for science policy . despite their importance for education , scientific productivity , reward and hiring procedures , our quantitative understandings of how individuals make career moves and relocate to new institutions , and how such moves shape and affect performance , remains limited . indeed , previous research on migration patterns of scientists tended to focus on large - scale surveys on country - level movements , revealing long - term cultural and economical priorities . at a much finer scale , research on human dynamics and mobility has emerged as an active line of enquiry , owing to new and increasingly available massive datasets providing time resolved individual trajectories . while these studies cover a much shorter time scale than a typical career , they uncover a set of regularities and reproducible patterns behind human movements . less is known about patterns behind career moves at an institutional level and how these moves affect individual performance . here we take advantage of the fact that scientists publish somewhat regularly along their career , and for each publication , the institution in which the work was performed is listed as an affiliation in the paper , documenting career trajectories at a fine scale and in great detail . these digital traces , offering data on not only individual scientific output at each institution but also career moves from one institution to another , can provide insights for science policy , helping us understand how institutions shape knowledge , the typical moves of individual career development and help us evaluate scientific outcomes associated with professional mobility . we use the physical review dataset to extract mobility information , publication record , and citations for individual scientists . the data consists of 237,038 physicists and 425,369 scientific papers , out of which 4,052 different institutions are extracted after the disambiguation process for authors and affiliations ( see sm for disambiguation process ) . to reconstruct the career trajectory of a scientist , we use the affiliation given in each of his / her publications ( fig [ fig1 ] ) . for authors with multiple affiliations listed on a paper we consider the first affiliation as primary institution . we compute the impact of each paper by counting its cumulative citations collected 5 years after its publication .
with the information superhighway fast becoming a reality , the problem of designing networks capable of accommodating multimedia ( both audio and video ) traffic in a multicast ( simultaneous transmission of data to multiple destinations ) environment has come to assume paramount importance . as discussed in kompella ,pasquale and polyzos , one of the popular solutions to multicast routing involves tree construction .two optimization criteria ( 1 ) the minimum worst - case transmission delay and ( 2 ) the minimum total cost are typically sought to be minimized in the construction of these trees .network design problems where even one cost measure must be minimized , are often -hard .( see section a2 on network design in . ) but , in real - life applications , it is often the case that the network to be built is required to minimize multiple cost measures simultaneously , with different cost functions for each measure .for example , as pointed out in , in the problem of finding good multicast trees , each edge has associated with it two edge costs : the construction cost and the delay cost .the construction cost is typically a measure of the amount of buffer space or channel bandwidth used and the delay cost is a combination of the propagation , transmission and queuing delays .such multi - criteria network design problems , with separate cost functions for each optimization criterion , also occur naturally in information retrieval and vlsi designs ( see and the references therein ) . with the advent of deep micron vlsi designs , the feature size has shrunk to sizes of 0.5 microns and less . as a result , the interconnect resistance , being proportional to the square of the scaling factor , has increased significantly .an increase in interconnect resistance has led to an increase in interconnect delays thus making them a dominant factor in the timing analysis of vlsi circuits .therefore vlsi circuit designers aim at finding minimum cost ( spanning or steiner ) trees given delay bound constraints on source - sink connections .the above applications set the stage for the formal definition of multicriteria network design problems .we explain this concept by giving a formal definition of a bicriteria network design problem .a generic bicriteria network design problem , ( , , ) , is defined by identifying two minimization objectives , - and , - from a set of possible objectives , and specifying a membership requirement in a class of subgraphs , - .the problem specifies a budget value on the first objective , , under one cost function , and seeks to find a network having minimum possible value for the second objective , , under another cost function , such that this network is within the budget on the first objective .the solution network must belong to the subgraph - class .for example , the problem of finding low - cost and low - transmission - delay multimedia networks can be modeled as the ( diameter , total cost , spanning tree)-bicriteria problem : given an undirected graph with two weight functions and for each edge modeling construction and delay costs respectively , and a bound ( on the total delay ) , find a minimum -cost spanning tree such that the diameter of the tree under the -costs is at most .it is easy to see that the notion of bicriteria optimization problems can be easily extended to the more general multicriteria optimization problems . in this paper , we will be mainly concerned with bicriteria network design problems . 
in the past, the problem of minimizing two cost measures was often dealt with by attempting to minimize some combination of the two , thus converting it into a unicriterion problem .this approach fails when the two criteria are very disparate .we have chosen , instead , to model bicriteria problems as that of minimizing one criterion subject to a budget on the other .we argue that this approach is both general as well as robust .it is more general because it subsumes the case where one wishes to minimize some functional combination of the two criteria .it is more robust because the quality of approximation is independent of which of the two criteria we impose the budget on .we elaborate on this more in sections [ sec : equivalence ] and [ sec : general ] .the organization of the rest of the paper is as follows : section [ sec : contributions ] summarizes the results obtained in this paper ; section [ sec : previous ] discusses related research work ; section [ sec : hardness ] contains the hardness results ; section [ sec : equivalence ] shows that the two alternative ways of formulating a given bicriteria problem are indeed equivalent ; section [ sec : general ] demonstrates the generality of the bicriteria approach ; section [ sec : parametric ] details the parametric search technique ; section [ sec : diameter ] presents the approximation algorithm for diameter constrained steiner trees ; section [ sec : treewidth ] contains the results on treewidth - bounded graphs ; section [ sec : concluding ] contains some concluding remarks and open problems .the area of unicriterion optimization problems for network design is vast and well - explored ( see and the references therein . ) .ravi et al . studied the degree - bounded minimum cost spanning tree problem and provided an approximation algorithm with performance guarantee ( ) .the ( degree , diameter , spanning tree ) problem was studied by ravi in the context of finding good broadcast networks . therehe provides an approximation algorithm for the ( degree , diameter , spanning tree ) problem with performance guarantee ( ) , on the degree he finds a tree whose total cost is at most times the optimal and whose degree is at most . ] .the ( diameter , total cost , spanning tree ) entry in table 1 corresponds to the diameter - constrained minimum spanning tree problem introduced earlier .it is known that this problem is -hard even in the special case where the two cost functions are identical .awerbuch , baratz and peleg gave an approximation algorithm with performance guarantee for this problem - i.e. the problem of finding a spanning tree that has simultaneously small diameter ( i.e. , shallow ) and small total cost ( i.e. , light ) , both under the same cost function .khuller , raghavachari and young studied an extension called _ light , approximate shortest - path trees ( last ) _ and gave an approximation algorithm with performance guarantee .kadaba and jaffe , kompella et al . , and zhu et al . 
considered the ( diameter , total cost , steiner tree ) problem with two edge costs and presented heuristics without any guarantees .it is easy to construct examples to show that the solutions produced by these heuristics in , can be arbitrarily bad with respect to an optimal solution .a closely related problem is that of finding a diameter - constrained shortest path between two pre - specified vertices and , or ( diameter , total cost , - path ) .this problem , termed the multi - objective shortest path problem ( mosp ) in the literature , is -complete and warburton presented the first fully polynomial approximation scheme ( ) for it .hassin provided a strongly polynomial for the problem which improved the running time of warburton .this result was further improved by phillips .the ( total cost , total cost , spanning tree)-bicriteria problem has been recently studied by ganley et al .they consider a more general problem with more than two weight functions .they also gave approximation algorithms for the restricted case when each weight function obeys triangle inequality .however , their algorithm does not have a bounded performance guarantee with respect to each objective .many -hard problems have exact solutions when attention is restricted to the class of treewidth - bounded graphs and much work has been done in this area ( see and the references therein ) . independently , bern , lawler and wong introduced the notion of decomposable graphs .later , it was shown that the class of decomposable graphs and the class of treewidth - bounded graphs are equivalent .bicriteria network design problems restricted to treewidth - bounded graphs have been previously studied in .in this paper , we study the complexity and approximability of a number of bicriteria network design problems .the three objectives we consider are : ( i ) total cost , ( ii ) diameter and ( iii ) degree of the network .these reflect the price of synthesizing the network , the maximum delay between two points in the network and the reliability of the network , respectively .the _ total cost _ objective is the sum of the costs of all the edges in the subgraph .the _ diameter _objective is the maximum distance between any pair of nodes in the subgraph .the _ degree _ objective denotes the maximum over all nodes in the subgraph , of the degree of the node .the class of subgraphs we consider in this paper are mainly _ steiner trees _ ( and hence _ spanning trees _ as a special case ) ; although several of our results extend to more general connected subgraphs such as generalized steiner trees .as mentioned in , most of the problems considered in this paper , are -hard for arbitrary instances even when we wish to find optimum solutions with respect to a single criterion . given the hardness of finding optimal solutions , we concentrate on devising approximation algorithms with worst case performance guarantees . 
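For concreteness, the three objectives just defined can be computed for a candidate subgraph as follows. This is an illustrative sketch in our own notation (adjacency lists and a cost dictionary keyed by edge), and it assumes the subgraph is connected; otherwise the diameter is unbounded.

```python
# The three objectives for a candidate subgraph H, given as a node list, an edge
# list and a single edge-cost dictionary; an illustrative sketch in our own notation.
import heapq

def total_cost(edges_h, cost):
    """Sum of the costs of all edges in the subgraph."""
    return sum(cost[e] for e in edges_h)

def max_degree(nodes_h, edges_h):
    """Maximum, over all nodes of the subgraph, of the node's degree."""
    deg = {v: 0 for v in nodes_h}
    for u, v in edges_h:
        deg[u] += 1
        deg[v] += 1
    return max(deg.values())

def diameter(nodes_h, edges_h, cost):
    """Maximum over all node pairs of the shortest-path distance in the subgraph."""
    adj = {v: [] for v in nodes_h}
    for u, v in edges_h:
        w = cost[(u, v)]
        adj[u].append((v, w))
        adj[v].append((u, w))
    def dijkstra(src):
        dist, heap = {src: 0}, [(0, src)]
        while heap:
            dx, x = heapq.heappop(heap)
            if dx > dist.get(x, float("inf")):
                continue
            for y, w in adj[x]:
                if dx + w < dist.get(y, float("inf")):
                    dist[y] = dx + w
                    heapq.heappush(heap, (dist[y], y))
        return dist
    return max(max(dijkstra(v).values()) for v in nodes_h)
```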
recall that an approximation algorithm for a minimization problem provides a * performance guarantee * of if for every instance of , the solution value returned by the approximation algorithm is within a factor of the optimal value for .here , we extend this notion to apply to bicriteria optimization problems .an -approximation algorithm for an ( , , )-bicriteria problem is defined as a polynomial - time algorithm that produces a solution in which the first objective ( ) value , is at most times the budget , and the second objective ( ) value , is at most times the minimum for any solution that is within the budget on .the solution produced must belong to the subgraph - class .analogous definitions can be given when and/or are maximization objectives .table 1 contains the performance guarantees of our approximation algorithms for finding spanning trees , , under different pairs of minimization objectives , and . for each problemcataloged in the table , two different costs are specified on the edges of the undirected graph : the first objective is computed using the first cost function and the second objective , using the second cost function .the rows are indexed by the budgeted objective .for example the entry in row , column , denotes the performance guarantee for the problem of minimizing objective with a budget on the objective .all the results in table 1 extend to finding steiner trees with at most a constant factor worsening in the performance ratios .for the diagonal entries in the table the extension to steiner trees follows from theorem [ better - scale - thm ] .algorithm dcst of section [ sec : diameter ] in conjunction with algorithm bicriteria - equivalence of section [ sec : equivalence ] yields the ( diameter , total cost , steiner tree ) and ( total cost , diameter , steiner tree ) entries .the other nondiagonal entries can also be extended to steiner trees and these extensions will appear in the journal versions of .our results for arbitrary graphs can be divided into three general categories . * table 1 .performance guarantees for finding spanning trees in an arbitrary graph on nodes .asterisks indicate results obtained in this paper . is a fixed accuracy parameter . *+ first , as mentioned before , there are two natural alternative ways of formulating general bicriteria problems : ( i ) where we impose the budget on the first objective and seek to minimize the second and ( ii ) where we impose the budget on the second objective and seek to minimize the first .we show that an -approximation algorithm for one of these formulations naturally leads to a -approximation algorithm for the other. thus our definition of a bicriteria approximation is independent of the choice of the criterion that is budgeted in the formulation .this makes it a robust definition and allows us to fill in the entries for the problems ( , , ) by transforming the results for the corresponding problems ( , , ) .second , the diagonal entries in the table follow as a corollary of a general result ( theorem [ better - scale - thm ] ) which is proved using a parametric search algorithm .the entry for ( degree , degree , spanning tree ) follows by combining theorem [ better - scale - thm ] with the -approximation algorithm for the degree problem in . 
in actually provide an -approximation algorithm for the weighted degree problem .the weighted degree of a subgraph is defined as the maximum over all nodes of the sum of the costs of the edges incident on the node in the subgraph .hence we actually get an -approximation algorithm for the ( weighted degree , weighted degree , spanning tree)-bicriteria problem .similarly , the entry for ( diameter , diameter , spanning tree ) follows by combining theorem [ better - scale - thm ] with the known exact algorithms for minimum diameter spanning trees ; while the result for ( total cost , total cost , spanning tree ) follows by combining theorem [ better - scale - thm ] with an exact algorithm to compute a minimum spanning tree .finally , we present a cluster based approximation algorithm and a solution based decomposition technique for devising approximation algorithms for problems when the two objectives are different .our techniques yield -approximation algorithms for the ( diameter , total cost , steiner tree ) and the ( degree , total cost , steiner tree ) problems . ** * consider , for example , the conjunction of this theorem with the results of goemans et al .this leads to a host of bicriteria approximation results when two costs are specified on edges for finding minimum - cost generalized steiner trees , minimum -edge connected subgraphs , or any other network design problems specified by weakly supermodular functions .thus for example , we get -approximation algorithms for the ( total cost , total cost , generalized steiner tree ) and ( total cost , total cost , -edge connected subgraph)-bicriteria problems .( see for the results on the corresponding unicriterion problems . ) similarly , given an undirected graph with two costs specified on each node we can get logarithmic approximations for the minimum node - cost steiner tree using the result of klein and ravi . as another example , with two edge - cost functions , and an input number , we can use the result of blum et al . to obtain an -approximation for the minimum cost ( under both functions ) tree spanning at least nodes . * * * * * * * * * * * we also study the bicriteria problems mentioned above for the class of treewidth - bounded graphs. examples of treewidth - bounded graphs include trees , series - parallel graphs , -outerplanar graphs , chordal graphs with cliques of size at most , bounded - bandwidth graphs etc .we use a dynamic programming technique to show that for the class of treewidth - bounded graphs , there are either polynomial - time or pseudopolynomial - time algorithms ( when the problem is * np*-complete ) for several of the bicriteria network design problems studied here . a * polynomial time approximation scheme * ( ptas ) for problem is a family of algorithms such that , given an instance of , for all , there is a polynomial time algorithm that returns a solution which is within a factor of the optimal value for . a polynomial time approximation scheme in which the running time grows as a polynomial function of called a * fully polynomial time approximation scheme*. here we show how to convert these pseudopolynomial - time algorithms for problems restricted to treewidth - bounded graphs into fully polynomial - time approximation schemes using a general scaling technique. 
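The scaling technique referred to here is developed in detail for treewidth-bounded graphs later in the paper; its core rounding step can be sketched as follows. The function below is our illustration (the names and the rounding direction are our choices, not the paper's pseudocode): costs are rounded down to multiples of a granularity derived from a guessed optimum value, so that a pseudopolynomial-time algorithm run on the scaled instance takes time polynomial in the graph size and the inverse accuracy, while the total rounding error over the edges of a tree stays within an epsilon fraction of the guess.

```python
import math

def scale_costs(cost, guess, n, eps):
    """Round each cost down to a multiple of theta = eps * guess / n, where `guess`
    is a guessed value of the optimum.  On the scaled instance the optimum is
    O(n / eps), so a pseudopolynomial-time algorithm runs in polynomial time,
    while the rounding error over the at most n - 1 edges of a tree is below
    eps * guess.  Names and the rounding direction are our choices."""
    theta = eps * guess / n
    return {e: int(math.floor(w / theta)) for e, w in cost.items()}
```

A search over the guessed value (the testing procedure described in the treewidth section) then locates a guess for which the solution of the scaled instance certifies an answer within the desired accuracy.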
Stated in our notation, we obtain polynomial-time approximation algorithms with the performance guarantees listed in Table 2, for every fixed accuracy parameter. The results for treewidth-bounded graphs are summarized in Table 2. As before, the rows are indexed by the budgeted objective. All algorithmic results in Table 2 also extend to Steiner trees in a straightforward way.

Our results for treewidth-bounded graphs have an interesting application in the context of finding optimum broadcast schemes. Kortsarz and Peleg gave approximation algorithms for the minimum broadcast time problem for series-parallel graphs. Combining our results for the (degree, diameter, spanning tree) problem on treewidth-bounded graphs with the techniques in , we obtain an approximation algorithm for the minimum broadcast time problem on treewidth-bounded graphs (series-parallel graphs have treewidth at most 2), improving and generalizing the result in . Note that the best known result for this problem on general graphs is due to Ravi , who provides an approximation algorithm with performance guarantee ( ).

* Table 2. Bicriteria spanning tree results for treewidth-bounded graphs. *

The problem of finding a minimum-degree spanning tree is strongly NP-hard . This implies that all spanning-tree bicriteria problems in which one of the criteria is degree are also strongly NP-hard. In contrast, it is well known that the minimum diameter spanning tree problem and the minimum cost spanning tree problem have polynomial-time algorithms (see and the references therein). The (diameter, total cost, spanning tree)-bicriteria problem is strongly NP-hard even when both cost functions are identical . Here we give the details of a reduction showing that (diameter, total cost, spanning tree) is weakly NP-hard even for series-parallel graphs (i.e., graphs with treewidth at most 2). Similar reductions show that (diameter, diameter, spanning tree) and (total cost, total cost, spanning tree) are also weakly NP-hard for series-parallel graphs. We first recall the *partition* problem: given a set of positive integers, decide whether there exists a subset whose element sum equals that of its complement.

[np-thm] (Diameter, total cost, spanning tree) is NP-hard for series-parallel graphs.

The proof is by reduction from the *partition* problem. Given an instance of *partition*, we construct a series-parallel graph whose vertices form a path, with a pair of parallel edges, and , between each pair of consecutive vertices. We then specify the two cost functions c and d on the edges of this graph and choose the diameter and cost bounds accordingly. It is now easy to show that the constructed graph has a spanning tree of d-diameter at most the bound and total c-cost at most the bound if and only if the original instance of *partition* has a solution.

We now show that the (diameter, total cost, Steiner tree) problem is hard to approximate within a logarithmic factor. An approximation algorithm is provided in Section [sec:diameter]. There is, however, a gap between the results of Theorems [fix-dia] and [spanning-thm]. Our non-approximability result is obtained by an approximation-preserving reduction from *min set cover*.
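Before turning to the set-cover reduction, the series-parallel construction in the proof above can be sketched in code. The exact cost values are elided in the text, so the assignment below (each parallel pair carries the item value on the c-cost of one edge and on the d-cost of its twin, with both bounds equal to half the total) is our reconstruction of a standard choice that makes the equivalence with *partition* go through; read it as illustrative rather than as the paper's verbatim values.

```python
def partition_instance(a):
    """Series-parallel instance for the reduction sketched above.
    a: the positive integers of the PARTITION instance."""
    n = len(a)
    nodes = list(range(n + 1))                        # a path v_0, ..., v_n
    edges, c, d = [], {}, {}
    for i, ai in enumerate(a, start=1):
        e, twin = (i - 1, i, "e"), (i - 1, i, "e'")   # the parallel pair
        edges += [e, twin]
        c[e], d[e] = ai, 0        # taking e pays a_i towards the c-cost
        c[twin], d[twin] = 0, ai  # taking its twin pays a_i towards the d-diameter
    half = sum(a) / 2
    return nodes, edges, c, d, half, half             # (graph, costs, c-bound, d-bound)
```

A spanning tree must pick exactly one edge from each parallel pair; its c-cost is the sum of the item values on the picked edges and, since the tree is a path, its d-diameter is the sum over the complementary picks, so both bounds are met simultaneously exactly when the items admit a perfect partition.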
an instance of the * min set cover * problem consists of a universe and a collection of subsets , each set having an associated cost .the problem is to find a minimum cost collection of the subsets whose union is .[ th : feige ] recently have independently shown the following non - approximability result : + it is -hard to find an approximate solution to the * min set cover * problem , with a universe of size , with performance guarantee better than . * * * * recently feige has shown that , unless , the * min set cover * problem , with a universe of size , can not be approximated to better than a factor . * * * * * * * * * * * [ fix - dia ] there is an approximation preserving reduction from * min set cover * problem to the ( diameter , total cost , steiner tree ) problem . thus : unless , given an instance of the ( diameter , total cost , steiner tree ) problem with sites , there is no polynomial - time approximation algorithm that outputs a steiner tree of diameter at most the bound , and cost at most times that of the minimum cost diameter- steiner tree , for .we give an approximation preserving reduction from the * min set cover * problem to the ( diameter , total cost , steiner tree ) problem .given an instance of the * min set cover * problem where and , where the cost of the set is , we construct an instance of the ( diameter , total cost , steiner tree ) problem as follows . the graph has a node for each element of , a node for each set , and an extra `` enforcer - node '' . for each set , we attach an edge between nodes and of -cost , and -cost .for each element and set such that we attach an edge of -cost , 0 , and -cost , .in addition to these edges , we add a path made of two edges of -cost , 0 , and -cost , , to the enforcer node ( see figure [ fig : hardapprox ] ) .the path is added to ensure that all the nodes are connected to using a path of -cost at most 2 . all other edges in the graph are assigned infinite and -costs .the nodes along with and the two nodes of are specified to be the terminals for the steiner tree problem instance .we claim that has a -cost steiner tree of diameter at most and cost if and only if the original instance has a solution of cost .note that any steiner tree of diameter at most must contain a path from to , for all , that uses an edge for some such that .hence any steiner tree of diameter at most provides a feasible solution of equivalent -cost to the original set cover instance .the proof now follows from theorem [ th : feige ] .in section [ sec : motivation ] , we claimed that our formulation for bicriteria problems is robust and general . in this section , we justify these claims . in this section , we show that our formulation for bicriteria problems is robust and general .let be a graph with two ( integral ) cost functions , and ( typically edge costs or node costs ) .let ( ) be a minimization objective computed using cost function ( ) .let the budget bound on the -cost '' or `` -cost '' in this section to mean the value of the objective function computed using , and not to mean the total of all the costs in the network . 
]( -cost ) of a solution subgraph be denoted by ( ) .there are two natural ways to formulate a bicriteria problem : ( i ) ( , , ) - find a subgraph in whose -objective value ( under the -cost ) is at most and which has minimum -objective value ( under the -cost ) , ( ii ) ( , , ) - find a subgraph in whose -objective value ( under the -cost ) is at most and which has minimum -objective value ( under the -cost ) .note that bicriteria problems are generally hard , when the two criteria are _ hostile _ with respect to each other - the minimization of one criterion conflicts with the minimization of the other .a good example of hostile objectives are the degree and the total edge cost of a spanning tree in an unweighted graph .two minimization criteria are formally defined to be hostile whenever the minimum value of one objective is monotonically nondecreasing as the budget ( bound ) on the value of the other objective is decreased .let be any -approximation algorithm for ( , , ) on graph with budget under the -cost .we now show that there is a transformation which produces a -approximation algorithm for ( , , ) .the transformation uses binary search on the range of values of the -cost with an application of the given approximation algorithm , , at each step of this search .let the minimum -cost of a -bounded subgraph in be .let be an upper bound on the -cost of any -bounded subgraph in .note that is at most some polynomial in times the maximum -cost ( of an edge or a node ) .hence is at most a polynomial in terms of the input specification .let ( ) denote the -cost ( -cost ) of the subgraph output by algorithm bicriteria - equivalence given below .if contains a -bounded subgraph in then algorithm bicriteria - equivalence outputs a subgraph from whose -cost is at most times that of the minimum -cost -bounded subgraph and whose -cost is at most . since and are hostile criteria it follows that the binary search in step [ binsearch - step ] is well defined .assume that contains a -bounded subgraph .then , since returns a subgraph with -cost at most , it is clear that algorithm bicriteria - equivalence outputs a subgraph in this case . as a consequence of step [ step1 ] and the performance guarantee of the approximation algorithm , we get that . by step [ step2 ]we have that and .thus algorithm bicriteria - equivalence outputs a subgraph from whose -cost is at most times that of the minimum -cost -bounded subgraph and whose -cost is at most .note however that in general the resulting -approximation algorithm is , not _strongly _ polynomial since it depends on the range of the -costs .but it is a _ polynomial - time _algorithm since its running time is linearly dependent on the largest -cost .the above discussion leads to the following theorem .[ equiv - thm ] any -approximation algorithm for ( , , ) can be transformed in polynomial time into a -approximation algorithm for ( , , ) .our formulation is more general because it subsumes the case where one wishes to minimize some functional combination of the two criteria .we briefly comment on this next . for the purposes of illustrationlet and be two objective functions and let us say that we wish to minimize the sum of the two objectives and . 
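The budget-swapping transformation behind Theorem [equiv-thm] can be sketched as follows. This is a schematic reconstruction in our own notation (the callable `alg`, its return fields and the integral-budget binary search are our assumptions), and its correctness relies on the hostility of the two criteria exactly as argued above.

```python
def swap_budget(alg, G, cost_A, cost_B, budget_B, beta, upper_A):
    """Use an (alpha, beta)-approximation `alg` for the (A, B, S) problem to solve
    the (B, A, S) problem: binary search on the A-budget, keeping the smallest
    budget whose output stays within beta times the bound on B.
    `alg(G, cost_A, cost_B, budget_A)` is assumed to return an object with fields
    .value_A and .value_B, or None if no solution fits the budget (our naming)."""
    lo, hi, best = 0, upper_A, None
    while lo <= hi:                       # integral costs, hence integral budgets
        mid = (lo + hi) // 2
        sol = alg(G, cost_A, cost_B, mid)
        if sol is not None and sol.value_B <= beta * budget_B:
            best, hi = sol, mid - 1       # feasible: try a smaller A-budget
        else:
            lo = mid + 1                  # infeasible: enlarge the A-budget
    return best
```

The smallest A-budget that passes the test is at most the A-cost of an optimal solution respecting the bound on B, so the returned subgraph violates the minimized objective by at most the factor alpha inherited from `alg`, while its budgeted objective is within the factor beta of the bound. We now return to the sum-of-objectives formulation introduced just above.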
Call this an ( , ) problem. Let be any -approximation algorithm for ( , , ) on a graph with a budget under the c-cost. We show that there is then a polynomial-time approximation algorithm for the ( , ) problem. The transformation uses a simple linear search, in steps of , over the range of values of the c-cost, with an application of the given approximation algorithm, , at each step of this search. Let the optimum value for the ( , ) problem on a graph be , where and denote respectively the contributions of the two costs c and d. Let ( ) denote the c-cost (d-cost) of the subgraph output by . Finally, let denote the value computed by algorithm CONVERT.

Let be any -approximation algorithm for ( , , ) on a graph with a budget under the c-cost. Then, for all , there is a polynomial-time approximation algorithm for the ( , ) problem.

*Proof sketch:* Consider the iteration of the search in which the bound on the c-cost is within one discretization step of the optimum c-contribution; such a bound is considered as a result of the discretization of the interval. Then, as a consequence of the performance guarantee of the approximation algorithm, we obtain the stated bound on the c-cost of the output. By step [stepa], the performance guarantee of the algorithm and the hostility of c and d, we obtain the corresponding bound on the d-cost. Since algorithm CONVERT outputs a subgraph for which the sum of the c-cost and the d-cost is minimized, we have
$$\min_{{\cal C}'}\left( heu_c({\cal C}') + heu_d({\cal C}')\right) \;\leq\; (1+\epsilon)\,\max\{\alpha,\beta\}\,(opt_{c+d}),$$
where the minimum is over the subgraphs ${\cal C}'$ considered during the search. A similar argument shows that an approximation algorithm for a ( , , ) problem can be used to devise a polynomial-time approximation algorithm for the ( , ) problem. A similar argument can also be given for other basic functional combinations. We make two additional remarks.

1. Algorithms for solving (( , ), ) problems cannot, in general, guarantee any bounded performance ratios for solving the ( , , ) problem. For example, a solution for the (total cost + total cost, spanning tree) problem or the (total cost / total cost, spanning tree) problem cannot be directly used to obtain a good approximation algorithm for the (total cost, total cost, spanning tree)-bicriteria problem.

2. The use of approximation algorithms for ( , , )-bicriteria problems to solve (( , ), ) problems (where denotes a functional combination of the objectives) does not always yield the best possible solutions. For example, problems such as (total cost + total cost, spanning tree) and (total cost / total cost, spanning tree) can be solved exactly in polynomial time by direct methods, but can only be solved approximately using any algorithm for the (total cost, total cost, spanning tree)-bicriteria problem.

The above discussion points out that while a good solution to the ( , , )-bicriteria problem yields a ``good'' solution to any unicriterion version, the converse is not true. It is in this sense that we say our formulation of bicriteria network design problems is general and subsumes other functional combinations.

In this section, we present approximation algorithms for a broad class of bicriteria problems where both objectives are of the same type (e.g., both are total edge costs of some network computed using two different costs on edges, or both are diameters of some network calculated using two different costs, etc.). As before, let be a graph with two (integral) cost functions, c and d. Let denote the budget on the budgeted criterion. We assume that the c and d cost functions are of the same kind; i.e.
, they are both costs on edges or , costs on nodes .let be any -approximation algorithm that on input produces a solution subgraph in minimizing criterion , under the single cost function . in a mild abuse of notation, we also let denote the ( -)cost of the subgraph output by when running on input under cost function .we use the following additional notation in the description of the algorithm and the proof of its performance guarantee . given constants and and two cost functions and , defined on edges ( nodes ) of a graph, denotes the composite function that assigns a cost to each edge ( node ) in the graph .let denote the cost of the subgraph , returned by ( under the -cost function ) .let the minimum -cost of a -bounded subgraph in be .let ( ) denote the -cost ( -cost ) of the subgraph output by algorithm parametric - search given below .let be a fixed accuracy parameter .in what follows , we devise a -approximation algorithm for ( , , ) , under the two cost functions and .the algorithm consists of performing a binary search with an application of the given approximation algorithm , , at each step of this search .[ binsearch - justify ] the binary search , in step [ psbinsearch - step ] of algorithm parametric - search is well - defined .since is the same as , we get that .hence is a monotone nonincreasing function of .thus the binary search in step [ psbinsearch - step ] of algorithm parametric - search is well - defined .if contains a -bounded subgraph in then algorithm parametric - search outputs a subgraph from whose -cost is at most times that of the minimum -cost -bounded subgraph and whose -cost is at most . by claim [ binsearch - justify ]we have that the binary search in step [ psbinsearch - step ] of algorithm parametric - search is well - defined .assume that contains a -bounded subgraph .then , since returns a subgraph with cost at most , under the -cost function , it is clear that algorithm parametric - search outputs a subgraph in this case . as a consequence of step [ psstep1 ] and the performance guarantee of the approximation algorithm , we get that by step [ psstep2 ] we have that the subgraph output by algorithm parametric - search has the following bounds on the -costs and the -costs. 
thus algorithm parametric - search outputs a subgraph from whose -cost is at most times that of the minimum -cost -bounded subgraph and whose -cost is at most .note however that the resulting -approximation algorithm for ( , , ) may not be _ strongly _ polynomial since it depends on the range of the -costs .but it is a _ polynomial - time _algorithm since its running time is linearly dependent on .note that is at most some polynomial in times the maximum -cost ( of an edge or a node ) .hence is at most a polynomial in terms of the input specification .the above discussion leads to the following theorem .[ better - scale - thm ] any -approximation algorithm that produces a solution subgraph in minimizing criterion can be transformed into a -approximation algorithm for ( ,, ) .the above theorem can be generalized from the bicriteria case to the multicriteria case ( with appropriate worsening of the performance guarantees ) where all the objectives are of the same type but with different cost functions .in this section , we describe algorithm dcst , our -approximation algorithm for ( diameter , total cost , steiner tree ) or the diameter - bounded minimum steiner tree problem .note that ( diameter , total cost , steiner tree ) includes ( diameter , total cost , spanning tree ) as a special case .we first state the problem formally : given an undirected graph , with two cost functions and defined on the set of edges , diameter bound and terminal set , the ( diameter , total cost , steiner tree ) problem is to find a tree of minimum -cost connecting the set of terminals in with diameter at most under the -cost .the technique underlying algorithm dcst is very general and has wide applicability .hence , we first give a brief synopsis of it .the basic algorithm works in phases ( iterations ) .initially the solution consists of the empty set . during each phase of the algorithmwe execute a subroutine to choose a subgraph to add to the solution .the subgraph chosen in each iteration is required to possess two desirable properties .first , it must not increase the budget value of the solution by more than ; second , the solution cost with respect to must be no more than , where denotes the minimum -cost of a bounded subgraph in * s*. since the number of iterations of the algorithm is we get a -approximation algorithm .the basic technique is fairly straightforward .the non - trivial part is to devise the right subroutine to be executed in each phase . must be chosen so as to be able to prove the required performance guarantee of the solution .we use the solution based decomposition technique in the analysis of our algorithm .the basic idea ( behind the solution based decomposition technique ) is to use the existence of an optimal solution to prove that the subroutine finds the desired subgraph in each phase .we now present the specifics of algorithm dcst .the algorithm maintains a set of connected subgraphs or _clusters _ each with its own distinguished vertex or _ center_.initially each terminal is in a cluster by itself . in each phase , clusters are merged in pairs by adding paths between their centers . since the number of clusters comes down by a factor of each phase , the algorithm terminates in phases with one cluster .it outputs a spanning tree of the final cluster as the solution .we make a few points about algorithm dcst : 1 .the clusters formed in step [ dcst - match ] need not be disjoint .2 . 
all steps , except step [ dcst - paths ] , in algorithm dcst can be easily seen to have running times independent of the weights . we employ hassin s strongly polynomial for step [ dcst - paths ] .hassin s approximation algorithm for the -bounded minimum -cost path runs in time .thus algorithm dcst is a strongly polynomial time algorithm .3 . instead of finding an exact minimum cost matching in step [ dcst - minwtmatching ], we could find an approximate minimum cost matching .this would reduce the running time of the algorithm at the cost of introducing a factor of to the performance guarantee .we now state some observations that lead to a proof of the performance guarantee of algorithm dcst .assume , in what follows , that contains a diameter -bounded steiner tree .we also refer to each iteration of step [ dcst - repeat ] as a phase .[ log - lemma ] algorithm dcst terminates in phases. let denote the number of clusters in phase .note that since we pair up the clusters ( using a matching in step [ dcst - match ] ) .hence we are left with one cluster after phase and algorithm dcst terminates .the next claim points out as clusters get merged , the nodes within each cluster are not too far away ( with respect to -distance ) from the center of the cluster .this intuitively holds for the following important reasons .first , during each phase , the graph has as its vertices , the centers of the clusters in that iteration . as a result, we merge the clusters by joining their centers in step [ dcst - match ] .second , in step [ dcst - match ] , for each pair of clusters and that are merged , we select one of their centers , or as the center for the merged cluster .this allows us to inductively maintain two properties : ( i ) the required distance of the nodes in a cluster to their centers in an iteration is and ( ii ) the center of a cluster at any given iteration is a terminal node .[ claim : radius ] let be any cluster in phase of algorithm dcst .let be the center of .then any node in is reachable from by a diameter- path in under the -cost .note that the existence of a diameter -bounded steiner tree implies that all paths added in step [ dcst - match ] have diameter at most under -cost .the proof now follows in a straightforward fashion by induction on .[ dcostlemma ] algorithm dcst outputs a steiner tree with diameter at most under the -cost .the proof follows from claims [ log - lemma ] and [ claim : radius ] .this completes the proof of performance guarantee with respect to the -cost .we now proceed to prove the performance guarantee with respect to the -costs .we first recall the following pairing lemma . 
[ pairing - claim ] let be an edge - weighted tree with an even number of marked nodes .then there is a pairing , , of the marked nodes such that the paths in are edge - disjoint .* * * * * a pairing of the marked nodes that minimizes the sum of the lengths of the tree - paths between the nodes paired up can be seen to obey the property in the claim above .* * * * * * * * * * * * * [ cost - lemma ] let be any minimum -cost diameter- bounded steiner tree and let denote its -cost .the weight of the largest cardinality minimum - weight matching found in step [ dcst - match ] in each phase of algorithm dcst is at most .consider phase of algorithm dcst .note that since the centers at stage are a subset of the nodes in the first iteration , the centers are terminal nodes .thus they belong to .mark those vertices in that correspond to the matched vertices , , of in step [ dcst - minwtmatching ] .then by claim [ pairing - claim ] there exists a pairing of the marked vertices , say , and a set of edge - disjoint paths in opt between these pairs .since these paths are edge - disjoint their total -cost is at most .further these paths have diameter at most under the -cost .hence the sum of the weights of the edges in , which forms a perfect matching on the set of matched vertices , is at most .but in step [ dcst - minwtmatching ] of algorithm dcst , a minimum weight perfect matching in the graph was found .hence the weight of the matching found in step [ dcst - match ] in phase of algorithm dcst is at most .[ ccostlemma ] let be any minimum -cost diameter- bounded steiner tree and let denote its -cost .algorithm dcst outputs a steiner tree with total -cost at most .from claim [ cost - lemma ] we have that the -cost of the set of paths added in step [ dcst - match ] of any phase is at most . by claim [ log - lemma ] there are a total of phases and hence the steiner tree output by algorithm dcst has total -cost at most . from lemmas [ dcostlemma ] and [ ccostlemma ] we have the following theorem . [ spanning - thm ] there is a strongly polynomial - time algorithm that , given an undirected graph , with two cost functions and defined on the set of edges , diameter bound , terminal set and a fixed , constructs a steiner tree of of diameter at most under the -costs and of total -cost at most times that of the minimum--cost of any steiner tree with diameter at most under .in this section we consider the class of treewidth - bounded graphs and give algorithms with improved time bounds and performance guarantees for several bicriteria problems mentioned earlier .we do this in two steps .first we develop pseudopolynomial - time algorithms based on dynamic programming .we then present a general method for deriving fully polynomial - time approximation schemes ( ) from the pseudopolynomial - time algorithms .we also demonstrate an application of the above results to the minimum broadcast time problem . a class of treewidth - bounded graphs can be specified using a finite number of primitive graphs and a finite collection of binary composition rules .we use this characterization for proving our results . a class of treewidth - bounded graphs is inductively defined as follows . 1the number of primitive graphs in is finite .each graph in has an ordered set of special nodes called * terminals*. the number of terminals in each graph is bounded by a constant , say .3 . 
there is a finite collection of binary composition rules that operate only at terminals , either by identifying two terminals or adding an edge between terminals .a composition rule also determines the terminals of the resulting graph , which must be a subset of the terminals of the two graphs being composed .[ decomp - thm ] every problem in table 2 can be solved exactly in -time for any class of treewidth bounded graphs with no more than terminals , for fixed and a budget on the first objective .the above theorem states that there exist pseudopolynomial - time algorithms for all the bicriteria problems from table 2 when restricted to the class of treewidth - bounded graphs .the basic idea is to employ a dynamic programming strategy .in fact , this dynamic programming strategy ( in conjunction with theorem [ equiv - thm ] ) yields polynomial - time ( not just pseudopolynomial - time ) algorithms whenever one of the criteria is the degree .we illustrate this strategy by presenting in some detail the algorithm for the diameter - bounded minimum cost spanning tree problem .[ thm : dmsttwdth ] for any class of treewidth - bounded graphs with no more than terminals , there is an -time algorithm for solving the diameter -bounded minimum -cost spanning tree problem .let be the cost function on the edges for the first objective ( diameter ) and , the cost function for the second objective ( total cost ) .let be any class of decomposable graphs .let the maximum number of terminals associated with any graph in be .following , it is assumed that a given graph is accompanied by a parse tree specifying how is constructed using the rules and that the size of the parse tree is linear in the number of nodes .let be a partition of the terminals of .for every terminal let be a number in . for every pair of terminals and in the same block of the partition let be a number in .corresponding to every partition , set and set we associate a cost for defined as follows : + = minimum total cost under the function of any forest containing + a tree for each block of , such that the terminal nodes + occurring in each tree are exactly the members of the corresponding + block of , no pair of trees is connected , every vertex in + appears in exactly one tree , is an upper bound on the maximum + distance ( under the -function ) from to any vertex in the same + tree and is an upper bound the distance ( under the -function ) + between terminals and in their tree . for the above defined cost, if there is no forest satisfying the required conditions the value of is defined to be .note that the number of cost values associated with any graph in is .we now show how the cost values can be computed in a bottom - up manner given the parse tree for . to begin with ,since is fixed , the number of primitive graphs is finite . 
for a primitive graph ,each cost value can be computed in constant time , since the number of forests to be examined is fixed .now consider computing the cost values for a graph constructed from subgraphs and , where the cost values for and have already been computed .notice that any forest realizing a particular cost value for decomposes into two forests , one for and one for with some cost values .since we have maintained the best cost values for all possibilities for and , we can reconstruct for each partition of the terminals of the forest that has minimum cost value among all the forests for this partition obeying the diameter constraints .we can do this in time independent of the sizes of and because they interact only at the terminals to form , and we have maintained all relevant information . hence we can generate all possible cost values for by considering combinations of all relevant pairs of cost values for and .this takes time per combination for a total time of . as in , we assume that the size of the given parse tree for is .thus the dynamic programming algorithm takes time .this completes the proof .the pseudopolynomial - time algorithms described in the previous section can be used to design fully polynomial - time approximation schemes ( ) for these same problems for the class of treewidth - bounded graphs .we illustrate our ideas once again by devising an for the ( diameter , total cost , spanning tree)-bicriteria problem for the class of treewidth - bounded graphs .the basic technique underlying our algorithm , algorithm fpas - dcst , is approximate binary search using rounding and scaling - a method similar to that used by hassin and warburton . as in the previous subsection ,let be a treewidth - bounded graph with two ( integral ) edge - cost functions and .let be a bound on the diameter under the -cost .let be an accuracy parameter .without loss of generality we assume that is an integer .we also assume that there exists a -bounded spanning tree in .let be any minimum -cost diameter -bounded spanning tree and let denote its -cost .let be a pseudopolynomial time algorithm for the ( total cost , diameter , spanning tree ) problem on treewidth - bounded graphs ; i.e. , outputs a minimum diameter spanning tree of with total cost at most ( under the -costs ) .let the running time of be for some polynomial . for carrying out our approximate binary search we need a testing procedure procedure test(v ) which we detail below: we now prove that procedure test( ) has the properties we need to do a binary search .[ claim : testerhilo ] if then procedure test( ) outputs low . and , if then procedure test( ) outputs high .if then since therefore procedure test( outputs low .let be the -cost of any diameter bounded spanning tree .then we have . if then since therefore procedure test( ) outputs high .[ claim : testerpoly ] the running time of procedure test( ) is .procedure test( ) invokes only times . andeach time the budget is bounded by , hence the running time of procedure test( ) is .we are ready to describe algorithm fpas - dcst - which uses procedure test( ) to do an approximate binary search .[ correct ] if contains a -bounded spanning tree then algorithm fpas - dcst outputs a spanning tree with diameter at most under the -cost and with -cost at most .it follows easily from claim [ claim : testerhilo ] that the loop in step [ fpas - stepbound ] of algorithm fpas - dcst executes times before exiting with . 
since get that step [ fpas - stepout ] of algorithm fpas - dcst definitely outputs a spanning tree . let be the tree output .then we have that but since step [ fpas - stepout ] of algorithm fpas - dcst outputs the spanning tree with minimum -cost we have that therefore this proves the claim .[ runtime ] the running time of algorithm fpas - dcst is .from claim [ claim : testerpoly ] we see that step [ fpas - stepbound ] of algorithm fpas - dcst takes time while step [ fpas - stepout ] takes time .hence the running time of algorithm fpas - dcst is .lemmas [ runtime ] and [ correct ] yield : for the class of treewidth - bounded graphs , there is an for the ( diameter , total cost , spanning tree)-bicriteria problem with performance guarantee .as mentioned before , similar theorems hold for the other problems in table 2 and all these results extend directly to steiner trees . the polynomial - time algorithm for the ( degree , diameter , spanning tree)-bicriteria problem for treewidth - bounded graphscan be used in conjunction with the ideas presented in to obtain near - optimal broadcast schemes for the class of treewidth - bounded graphs .as mentioned earlier , these results generalize and improve the results of kortsarz and peleg . given an unweighted graph and a root , a _ broadcast scheme _ is a method for communicating a message from to all the nodes of .we consider a telephone model in which the messages are transmitted synchronously and at each time step , any node can either transmit or receive a message from at most one of its neighbors .minimum broadcast time problem _ is to compute a scheme that completes in the minimum number of time steps .let denote the minimum broadcast time from root and let denote the minimum broadcast time for the graph from any root .the problem of computing - the _ minimum rooted broadcast time problem _ - and that of computing - the _ minimum broadcast time problem _ are both -complete for general graphs .it is easy to see that any approximation algorithm for the minimum rooted broadcast time problem automatically yields an approximation algorithm for the minimum broadcast time problem with the same performance guarantee .we refer the readers to for more details on this problem . combining our approximation algorithm for ( diameter ,total cost , spanning tree)-bicriteria problem with performance guarantee for the class of treewidth bounded graphs with the observations in yields the following theorem .* * * * * * * * * let denote the set of all spanning trees of graph . for spanning tree let denote the maximum , over all nodes , of the degree of the node in .let the -eccentricity of , , denote the maximum , over all nodes , of the distance of the node from in .we generalize the definition of the _ poise _ of a graph from to that of the _ rooted poise _ of a graph : [ th : poise_log ] the proof of this theorem is constructive and implies that a good solution to the rooted poise of a graph can be used to construct a good solution for the minimum rooted broadcast time problem with an -factor overhead . [th : poise_approx ] given a treewidth bounded graph and root , in polynomial time a spanning tree can be found such that . since is an unweighted treewidth - bounded graph we can obtain a ( strongly ) polynomial - time algorithm to solve the ( degree , -eccentricity , spanning tree)-bicriteria problem on , using the ideas outlined in subsection [ exact - algos ] .run this algorithm with all possible degree bounds and choose the tree with the least value of . 
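The selection step described at the end of the proof above can be sketched as follows. The subroutine `degree_bounded_min_ecc_tree` is a hypothetical stand-in for the treewidth dynamic program of the previous subsection, and the quantity minimized (degree plus eccentricity from the root) is our assumption for the rooted poise, whose exact definition is elided in the text.

```python
def tree_max_degree(tree_adj):
    """Maximum degree of a tree given as an adjacency dict."""
    return max(len(nbrs) for nbrs in tree_adj.values())

def r_eccentricity(tree_adj, r):
    """Maximum (hop) distance from the root r in the tree."""
    dist, frontier = {r: 0}, [r]
    while frontier:
        nxt = []
        for x in frontier:
            for y in tree_adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    nxt.append(y)
        frontier = nxt
    return max(dist.values())

def best_rooted_tree(G, r, n, degree_bounded_min_ecc_tree):
    """Try every degree bound k and keep the spanning tree minimizing
    degree + r-eccentricity (our stand-in for the rooted poise)."""
    best, best_val = None, float("inf")
    for k in range(1, n):
        T = degree_bounded_min_ecc_tree(G, r, k)   # hypothetical DP subroutine
        if T is None:
            continue
        val = tree_max_degree(T) + r_eccentricity(T, r)
        if val < best_val:
            best, best_val = T, val
    return best, best_val
```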
From Theorem [th:poise_log] and Lemma [th:poise_approx] we get the following theorem.

[th:broadcast] For any class of treewidth-bounded graphs there is a polynomial-time approximation algorithm for the minimum rooted broadcast time problem and the minimum broadcast time problem.

We have obtained the first polynomial-time approximation algorithms for a large class of bicriteria network design problems. The objective functions we considered were (i) degree, (ii) diameter and (iii) total cost. The connectivity requirements considered were spanning trees, Steiner trees and (in several cases) generalized Steiner trees. Our results were based on the following three ideas:

1. A binary search method to convert an approximation algorithm for ( , , )-bicriteria problems into an approximation algorithm for ( , , )-bicriteria problems.

2. A parametric search technique to devise approximation algorithms for ( , , )-bicriteria problems. We note that Theorem [better-scale-thm] is very general: given _any_ approximation algorithm for minimizing the objective in the subgraph-class, Theorem [better-scale-thm] allows us to produce an approximation algorithm for the corresponding ( , , )-bicriteria problem.

3. A cluster-based approach for devising approximation algorithms for certain categories of ( , , )-bicriteria problems.

During the time this paper was under review, important progress has been made in improving some of the results in this paper. Recently, Ravi and Goemans have devised an approximation scheme for the (total cost, total cost, spanning tree) problem. Their approach does not seem to extend to the more general subgraphs considered here. In , Kortsarz and Peleg consider the (diameter, total cost, Steiner tree) problem. They provide polynomial-time approximation algorithms for this problem, with one performance guarantee for constant diameter bounds and another for general diameter bounds. In , Naor and Schieber provide an elegant approximation algorithm for the (diameter, total cost, spanning tree) problem. It is not clear how to extend their algorithms to the Steiner tree case. Improving the performance guarantees for one or more of the problems considered here remains an interesting direction for future research.

1. Devise algorithms that improve upon the performance guarantees presented in this paper. As a step in this direction, Ravi and Goemans have recently devised an approximation scheme for the (total cost, total cost, spanning tree) problem.

2. Devise approximation algorithms for the (diameter, total cost, generalized Steiner tree) problem. For more information on the generalized Steiner tree problem, see and the references therein.

3. Find good broadcast schemes that minimize the total cost of the spanning tree. In our language, this problem translates to finding an approximation algorithm for the (degree, diameter, total cost, spanning tree) problem.

*Acknowledgements:* We would like to thank an anonymous referee for several useful comments and suggestions. We thank Sven Krumke (University of Würzburg) for reading the paper carefully and providing several useful comments. In particular, both pointed out an error in the original proof of Theorem 5.3. We thank Professors S. Arnborg and H. L. Bodlaender for pointing out to us the equivalence between treewidth-bounded graphs and decomposable graphs. We thank A.
ramesh for bringing to our attention .we also thank dr . v. kompella for making his other papers available to us .finally , we thank the referees of _icalp 95 _ for their constructive comments and suggestions . ** * a. blum , r. ravi and s. vempala , `` a constant - factor approximation algorithm for the -mst problem , '' to appear in the _ proceedings of the 28th annual acm symposium on the theory of computation _ ( 1996 ) . * * * * * * * * * * * * * * * * * m. frer and b. raghavachari , `` an approximation algorithm for the minimum degree spanning tree problem , '' _ proceedings of the 28th annual allerton conference on communication , control and computing _ , pp .274 - 281 ( 1990 ) .* * * * * * * * * * j. l. ganley , m. j. golin and j. s. salowe , `` the multi - weighted spanning tree problem , '' _ proceedings of the first conference on combinatorics and computing ( cocoon ) _ , springer verlag , lncs pp .141 - 150 ( 1995 ) .* * * m. x. goemans , a. v. goldberg , s. plotkin , d. b. shmoys , e. tardos and d. p. williamson , `` improved approximation algorithms for network design problems , '' _ proceedings of the fifth annual acm - siam symposium on discrete algorithms _223 - 232 ( 1994 ) .* * * * * * * * * * * * * * v.p .kompella , j.c .pasquale and g.c .polyzos , `` two distributed algorithms for the constrained steiner tree problem , '' technical report cal-1005 - 92 , computer systems laboratory , university of california , san diego ( oct . 1992 ) . * * * * * * * r. ravi , m. v. marathe , s. s. ravi , d. j. rosenkrantz , and h.b . hunt iii , `` many birds with one stone : multi - objective approximation algorithms , '' _ proceedings of the 25th annual acm symposium on the theory of computing _ , pp .438 - 447 ( 1993 ) .( expanded version appears as brown university technical report tr - cs-92 - 58 . )r. raz and s. safra , `` a sub - constant error - probability low - degree test , and a sub - constant error - probability pcp characterization of np , '' _ proc .29th annual acm symposium on theory of computing _, 475 - 484 ( 1997 ) . ** * r. ravi , r. sundaram , m. v. marathe , d. j. rosenkrantz , and s. s. ravi , `` spanning trees short or small , '' in _ proceedings of the 5th annual acm - siam symposium on discrete algorithms _ ,546 - 555 ( 1994 ) .journal version to appear in _ siam journal on discrete mathematics_.
We study a general class of bicriteria network design problems. A generic problem in this class is as follows: given an undirected graph and two minimization objectives (under different cost functions), with a budget specified on the first, find a subgraph from a given subgraph-class that minimizes the second objective subject to the budget on the first. We consider three different criteria: the total edge cost, the diameter and the maximum degree of the network. Here, we present the first polynomial-time approximation algorithms for a large class of bicriteria network design problems for the above mentioned criteria. The following general types of results are presented. First, we develop a framework for bicriteria problems and their approximations. Second, when the two criteria are the same, we present a ``black box'' parametric search technique. This black box takes as input an (approximation) algorithm for the unicriterion situation and generates an approximation algorithm for the bicriteria case with only a constant factor loss in the performance guarantee. Third, when the two criteria are the diameter and the total edge cost, we use a cluster-based approach to devise approximation algorithms; the solutions output violate both criteria by a logarithmic factor. Finally, for the class of treewidth-bounded graphs, we provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. We show how these pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.

* AMS 1980 subject classification. * 68R10, 68Q15, 68Q25
* Keywords. * Approximation algorithms, bicriteria problems, spanning trees, network design, combinatorial algorithms.
a major challenge of contemporary biology is to ascribe interpretation to high - throughput experimental or computational results , where each considered entity ( gene or protein ) is assigned a value .biological information is often summarized through controlled vocabularies such as gene ontology ( go ) , where each annotated term includes a list of entities .let denote a collection of values , each associated with an entity .given and a controlled vocabulary , enrichment analysis aims to retrieve the terms that by statistical inference best describe , that is , the terms associated with entities with atypical values .many enrichment analysis tools have been developed primarily to process microarray data . in terms of biological relevance ,the performance assessment of those tools is generally difficult .it requires a large , comprehensive ` gold standard ' vocabulary together with a collection of s processed from experimental data , and with true / false positive terms corresponding to each correctly specified .this invariably introduces some degree of circularity because the terms often come from curating experimental results . before declaring efficacy in biological information retrieval that is nontrivial to assess , an enrichment method should pass at least the statistical accuracy and internal consistency test . in their recent survey , list 68 distinct bioinformatic enrichment tools introduced between 2002 and 2008 .most tools share a similar workflow : given obtained by suitably processing experimental data , they sequentially test each vocabulary term for enrichment to obtain its p - value ( the likelihood of a false positive given the null hypothesis ) . since many terms are tested , a multiple hypothesis correction , such as bonferroni or false discovery rate ( fdr ) , is applied to p - value of each to obtain the final statistical significance .the results are displayed for the user in a suitable form outlining the significant terms and possibly relations between them .note that the latter steps are largely independent from the first . to avoid confounding factors, we will focus exclusively on the original enrichment p - values .based on the statistical methods employed , the existing enrichment tools can generally be divided into two main classes . the singular enrichment analysis ( sea ) class contains numerous tools that form the majority of published ones . by ordering values in ,these tools require users to select a number of top - ranking entities as input and mostly use hypergeometric distribution ( or equivalently fisher s exact test ) to obtain the term p - values .after the selection is made , sea treats all entities equally , ignoring their value differences .the gene set analysis ( gsa ) class was pioneered by the gsea tool .tools from this class use all values ( entire ) to calculate p - values and do not require pre - selection of entities .some approaches in this group apply hypergeometric tests to all possible selections of top - ranking entities .the final p - value is computed by combining ( in a tool - specific manner ) the p - values from the individual tests .other approaches use non - parametric approaches : rank - based statistics such as wilcoxon rank - sum or kolmogorov - smirnov - like . 
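The hypergeometric test used by the SEA class of tools described above can be written down directly. The sketch below (our naming) uses SciPy's hypergeometric survival function and corresponds to a one-sided Fisher's exact test on the overlap between a vocabulary term and the user-selected top-ranking entities.

```python
from scipy.stats import hypergeom

def sea_term_pvalue(n_universe, n_term, n_selected, n_overlap):
    """One-sided Fisher's exact test: probability that at least n_overlap of the
    n_selected top-ranking entities fall into a term annotating n_term of the
    n_universe entities, if the selection were uniformly random."""
    # sf(k - 1) = P(X >= k) for X ~ Hypergeom(M=n_universe, n=n_term, N=n_selected)
    return hypergeom.sf(n_overlap - 1, n_universe, n_term, n_selected)
```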
when weights are taken into account , such as in gsea , statistical significancemust be determined from a sampled ( shuffled ) distribution .unfortunately , limited by the number of shuffles that can be performed , the smallest obtainable p - value is bounded away from 0 . the final group of gsa methods computes a score for each vocabulary term as a sum of the values ( henceforth used interchangeably with weights ) of the entities it annotates . in general , the score distribution for the experimental data is unknown . by central limit theorem ,when is large , gaussian or student s t - distribution can be used to approximate .unfortunately , when the weight distributions are skewed , the required may be too large for practical use .evidently , this undermines the p - value accuracy of small terms ( meaning terms with few entities ) , which are biologically most specific .it is generally found that , given the same vocabulary and , different enrichment analysis tools report diverse results .we believe this may be attributed to disagreement in p - values reported as well as that different methods have different degree of robustness ( internal consistency ) . instead of providing a coherent biological understanding , the array of diverse results questions the confidence of information found . furthermore , other than microarray datasets , there exist experimental or computational results such as those from chip - chip , deep sequencing , quantitative proteomics and _ in silico _ network simulations , that may benefit from enrichment analysis .it is thus imperative to have an enrichment method that report accurate p - values , preserves internal consistency , and allows investigations of a broader range of datasets . to achieve these goals , we have developed a novel enrichment tool , called saddlesum , that founds on the well - known lugananni - rice formula and derives its statistics from approximating asymptotically the distribution function of the scores used in the parametric gsa class .this allows us to obtain accurate statistics even in the cases where the distribution function generating is very skewed and for terms containing few entities .the latter aspect is particularly important for obtaining biologically specific information .we distinguish two sets : the set of entities of size and the controlled vocabulary . each term from maps to a set of size . from experimental results ,we obtain a set and ask how likely it is to randomly pick entities whose sum of weights exceeds the sum .assume that the weights in come independently from a continuous probability space with the density function such that the moment generating function exists for in a neighborhood of 0 .the density of , sum of weights arbitrarily sampled from , can be expressed by the fourier inversion formula where denotes the cumulant generating function of .the tail probability or p - value for a score is given by we propose to use an asymptotic approximation to ( [ eqt : pval1 ] ) , which improves with increasing and . 
derived an asymptotic approximation for the density through saddlepoint expansion of the integral ( [ eqt : int3 ] ) while the corresponding approximation to the tail probability was obtained by .let and denote respectively the density and the tail probability of gaussian distribution .let be a solution of the equation then , the leading term of the lugananni - rice approximation to the tail probability takes the form where and .appropriate summary of derivation of ( [ eqt : tail ] ) is provided in supplementary materials . has shown that eq.([eqt : saddle1 ] ) has a unique simple root under most conditions and that increases with , with for where is the mean of .while the approximation ( [ eqt : tail ] ) is uniformly valid over the whole domain of , its components need to be rearranged for numerical computation near the mean . when , dominates and the overall error is .saddlesum , our implementation of lugananni - rice approximation for computing enrichment p - values , first solves eq.([eqt : saddle1 ] ) for using newton s method and then returns the p - value using ( [ eqt : tail ] ) .the derivatives of the cumulant generating function are estimated from : we approximate the moment generating function by , and then and . since the same is used to sequentially evaluate p - values of all terms in , we retain previously computed values in a sorted array .this allows us , using binary search , to reject many terms with p - values greater than a given threshold without running newton s method or to bracket the root of ( [ eqt : saddle1 ] ) for faster convergence .more details on the saddlesum implementation and evaluations of its accuracy against some well - characterized distributions are in section 2 of supplementary materials . when run as a term enrichment tool , saddlesum reports e - value for each significant term by applying bonferroni correction to the term s p - value .the assignment of human genes to go terms was taken from the ncbi gene2go file ( ftp://ftp.ncbi.nih.gov/gene/data/gene2go.gz ) downloaded on 07 - 02 - 2009 .after assigning all genes to terms , we removed all redundant terms if several terms mapped to the same set of genes , we kept only one such term . for our statistical experiments we kept only the terms with no less than five mapped genes within the set of weights considered and hence the number of processed terms varied for each realization of sampling ( see below ) . is an implementation of the framework for exploring information flow in interaction networks .information flow is modeled through discrete time random walks with damping at each step the walker has a certain probability of leaving the network .although offers three modes : emitting , absorbing and channel , we only used the simplest , emitting mode , to provide examples illustrating issues of significance assignment .the emitting mode takes as input one or more network proteins , called sources , and a damping factor . for each protein node in the network, the model outputs the expected number of visits to that node by random walks originating from the sources , thus highlighting the network neighborhoods of the sources .the damping factor determines the average number of steps taken by a random walk before termination : corresponds to no termination while leads to no visits apart from the originating node . 
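Returning to the p-value computation described earlier in this section, the sketch below is a minimal illustration of the leading-term Lugannani-Rice approximation with the cumulant generating function estimated empirically from the weight vector. The function and parameter names are ours, the root bracket is an assumption, and the actual SaddleSum implementation (bracketed Newton iteration, caching, handling near the mean) differs in detail.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def lugannani_rice_pvalue(weights, m, score, t_max=50.0):
    """Approximate P(sum of m weights drawn i.i.d. from `weights` >= score)."""
    w = np.asarray(weights, dtype=float)
    w_max = w.max()

    def cgf(t):
        """Empirical cumulant generating function of one weight and its
        first two derivatives (mean/variance of the tilted distribution)."""
        e = np.exp(t * (w - w_max))                  # shifted to avoid overflow
        K = t * w_max + np.log(e.mean())
        mu = np.sum(w * e) / np.sum(e)               # K'(t)
        var = np.sum((w - mu) ** 2 * e) / np.sum(e)  # K''(t)
        return K, mu, var

    if score <= m * w.mean():      # formula is numerically unstable near the mean;
        return 1.0                 # such scores are never significant anyway
    if score >= m * w_max:         # beyond the largest achievable sum
        return 0.0

    # Saddlepoint equation m * K'(t_hat) = score; the root is assumed to lie
    # in (0, t_max) -- increase t_max if brentq reports no sign change.
    t_hat = brentq(lambda t: m * cgf(t)[1] - score, 0.0, t_max)

    K, _, var = cgf(t_hat)
    w_hat = np.sqrt(2.0 * (t_hat * score - m * K))
    u_hat = t_hat * np.sqrt(m * var)
    # Leading term of the Lugannani-Rice tail approximation.
    return norm.sf(w_hat) + norm.pdf(w_hat) * (1.0 / u_hat - 1.0 / w_hat)
```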
for our protein - protein interaction network examples, we used the set of all human physical interactions from the biogrid , version 2.0.54 ( july 2009 ) .the network consists of 7702 proteins and 56400 unique interactions .each interaction was represented by an undirected link .a link carries weight 2 if its two ends connect to the same protein and 1 otherwise . from the ncbi gene expression omnibus ( geo ) , we retrieved human microarray datasets with expression ratios ( weights ) provided , resulting in 34 datasets and 136 samples in total . for each sample , when multiple weights for the same gene were present , we took their mean instead .this resulted in a where each gene is assigned a unique raw weight . for evaluations, we also used another version of where negative weights were set to zero .this version facilitated investigation of up - regulation while keeping the down - regulated genes as part of statistical background . by definition ,a p - value associated with a score is the probability of that score or better arising purely by chance .we tested the accuracy of reported p - values reported by enrichment methods via simulations on ` decoy ' databases , which contained only terms with random gene assignments . for each term from the decoy dataset and each set of weights based on network or microarray data , we recorded the reported p - value and thus built an empirical distribution of p - values . if a method reports accurate p - values , the proportion of runs , which we term empirical p - value , reporting p - values smaller than or equal to a p - value cutoff , should be very close to that cutoff .we show the results graphically by plotting on the log - log scale the empirical p - value as a function of the cutoff . for each given list of entities , be it from the target gene set of a microarray dataset or the set of participating human proteins in the interaction network, we produced two types of decoy databases .the first type was based on go .we shuffled gene labels 1000 times .for each shuffle , we associated all terms from go with the shuffled labels to retain the term dependency .this resulted in a database with approximately terms ( 1000 shuffles times about 5000 go terms ) . in the second type , each term , having the same size , was obtained by sampling without replacement genes from .the databases from this type ( one for each term size considered ) contained exactly terms .the evaluation query set of 100 s from interaction networks was obtained by randomly sampling 100 proteins out of 7702 and running with each protein as a single source .the weights for source proteins were not considered since they were prescribed , not resulting from simulation .each run used , without excluding any nodes from the network . for microarrays , the set of 136 samples was used . 
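As a sketch of this accuracy check (for the second, fixed-term-size type of decoy database), the helper below queries randomly assembled terms with a given p-value routine and returns the empirical p-value as a function of the reported cutoff; the names and defaults are illustrative only.

```python
import numpy as np

def pvalue_accuracy_curve(pvalue_fn, weights, term_size,
                          n_terms=100_000, cutoffs=None, seed=0):
    """Query decoy terms (random entity assignments of a fixed size) and
    return (cutoff, empirical p-value) pairs; for an accurate method the
    log-log plot of these pairs should follow the diagonal y = x."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    reported = np.sort([
        pvalue_fn(w, rng.choice(w.size, size=term_size, replace=False))
        for _ in range(n_terms)
    ])
    if cutoffs is None:
        cutoffs = np.logspace(-5, 0, 26)
    empirical = np.searchsorted(reported, cutoffs, side="right") / n_terms
    return cutoffs, empirical
```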
since both query sets are of size , the total number of matches was .similar to saddlesum , t - test approaches are based on sum - of - weights score but use the student s t - distribution to infer p - values .as before , let denote the weight associated with entity , let denote the set of entities associated with a term from vocabulary and let .for any set of size , let denote the mean weight of entities in and let be their sample variance .gage enrichment tool uses two sample t - test assuming unequal variances and equal sample sizes to compare the means over and .the test statistic is and the p - value is obtained from the upper tail of the student s t - distribution with degrees of freedom t - profiler compares the means over and using two sample t - test assuming equal variances but unequal sample sizes .the pooled variance estimate is given by and the test statistic is the t - profiler p - value is then obtained from the tail of the student s t - distribution with degrees of freedom .methods based on hypergeometric distribution or equivalently , fisher s exact test , use only rankings of weights and require selection of ` significant ' entities prior to calculation of p - value .we first rank all entities according to their weights and consider the set of entities with largest weights .the number can be fixed ( say 50 ) , correspond to a fixed percentage of the total number of weights , depend on the values of weights , or be calculated by other means .the score for the term is given by the size of the intersection , , between and .this is equivalent to setting with for and 0 otherwise .the p - value for score is hence , the p - value measures the likelihood of score or better over all possible ways of selecting entities out of , with entities associated with the term investigated . in each of our p - value accuracy experiments we used two variants of the hypergeometric method , one taking a fixed percentage of nodes and the other taking into account the values of weights . for microarray datasets , the fist variant took 1% of available genes ( hgem - pn1 ) while the second select genes with four fold change or more ( hgem - f2 ) . in experiments based on protein networks, we took 3% of available proteins ( 231 entities ) for the first variant ( hgem - pn3 ) and used the participation ratio formula to determine in the second ( hgem - pr ) .participation ratio is given by the formula we chose a smaller percentage of weights for microarray - based data ( 1% vs 3% for data derived for networks ) because the microarray datasets generally contained measurement for more genes than the number of proteins in the network .instead of making a single , arbitrary choice of and applying hypergeometric score , mhg method implemented in the gorilla package considers all possible s .the mhg score is defined as where is the number of entities annotated by the term among the top - ranked entities . the exact p - value for mhg score is then calculated by using a dynamic programming algorithm developed by eden __ . for our experiments we used an implementation in c programming language that was derived from the java implementation used by gorilla .the implementation uses a truncated algorithm that gives an approximate p - value with improved running speed . to evaluate consistency of investigated methods, we compared the sets of significant terms retrieved from go using different numbers of nonzero weights as input . 
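For reference, the hypergeometric tail used by the HGEM variants above, together with a participation-ratio rule for choosing the selection size, can be sketched as follows. The participation-ratio definition used here (squared sum of weights over sum of squared weights) is an assumption on our part, since the source formula is not reproduced above.

```python
import numpy as np
from scipy.stats import hypergeom

def hgem_pvalue(n_total, n_term, n_selected, k_overlap):
    """Fisher's exact / hypergeometric tail: probability of seeing at least
    k_overlap term entities among the n_selected top-ranked entities, out of
    n_total entities of which n_term are annotated by the term."""
    return hypergeom.sf(k_overlap - 1, n_total, n_term, n_selected)

def participation_ratio(weights):
    """Effective number of dominant weights, (sum w)**2 / sum(w**2); one
    possible rule for choosing the selection size n from the weights."""
    w = np.asarray(weights, dtype=float)
    return float(w.sum() ** 2 / np.square(w).sum())
```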
for each , we sort in descending order the weights associated with entities .with each selected , we kept largest weights unchanged and set the remaining to 0 to arrive at a modified set of weights .we did not totally exclude the lower weights but kept them under consideration to provide statistical background .we submitted for analysis and obtained from each statistical method a set of enriched terms ordered by their p - value . in fig.[fig : mainexamples1]a and supplementary fig.s3 , we displayed the actual five most significant terms retrieved with their p - values for selected examples of weight sets . to investigate on a larger scale the retrieval stability to changes, we computed for each method the overlap between sets of top ten terms from two different s for the sets mentioned in ` evaluating accuracy of p - values ' and then took the average ( fig.[fig : mainexamples1]b ) .we compared our saddlesum approach against the following existing methods : fisher s exact test ( hgem ) , two sample student s t - test with equal ( t - profiler ) and unequal ( gage ) variances , and mhg score . based on data from both microarrays and simulations of information flow in protein networks ,the comparison shown here encompassed ( in order of importance ) evaluation of p - value accuracy , ranking stability and running time .accurate p - value reflects the likelihood of a false identification and thus allows for comparison between terms retrieved even across experiments .incorrect p - values therefore render ranking stability and algorithmic speed pointless .accurate p - values without ranking stability question the robustness of biological interpretation . for pragmatic use of an enrichment method , even with accurate statistics and stability , it is still important to have reasonable speed. the term p - value reported by an enrichment analysis method provides the likelihood for that term to be enriched within . to infer biological significance using statistical analysis ,it is essential to have accurate p - values .we analyzed the accuracy of p - values reported by the investigated approaches through simulating queries and comparing their reported and empirical p - values .results based on querying databases with fixed term sizes are shown in supplementary figs.s1 and s2 . shown in fig.[fig : stats1 ] are the results for querying go - based gene - shuffled term databases , which retain the structure of the original go as a mixture of terms of different sizes organized as a directed acyclic graph where small terms are included in larger ones .the curves for all methods in fig.[fig : stats1 ] therefore resemble a mixture of curves from supplementary figs.s1 and s2 albeit weighted towards smaller - sized terms .for weights from both network simulations and microarrays , saddlesum as well as the methods based on fisher s exact test ( hgem and mhg ) report p - values that are acceptable ( within one order of magnitude from the theoretical values ) . for hgem and mhg , this is not surprising because our experiments involved shuffling entity labels and hence followed the null model of the hypergeometric distribution . on the other hand , the null model of saddlesum andthe t - test methods assumes weights drawn independently from some distribution ( sampling with replacement ) . for terms with few entities ( ) ,the difference between the two null models is minimal and the p - value accuracy assessment curves for saddlesum run as close to the theoretical line as those for hgem methods . 

for , saddlesum gives more conservative p - values for terms with large sums of weights ( supplementary figs.s1 and s2 ) . in practice, this has no significant effect to biological inference .large terms would be still selected as significant given a reasonable p - value cutoff and accurate p - values are assigned to small terms that are biologically specific .two - sample t - test with unequal variances as used by gage package reports p - values so conservative that they are often larger than 0.01 and hence not always visible in our accuracy plots .this effect persists even for as large as 500. this might be because the number of degrees of freedom used is considerably small .in addition , its test statistic ( eq.([eq : ttst2 ] ) ) emphasizes the estimated within - term variance that is typically larger than the overall variance . on the other hand ,t - profiler generally exaggerates significance because it uses the t - distribution with a large number of degrees of freedom ( ) .although some small terms may appear biologically relevant ( as in fig.[fig : mainexamples1 ] ) , one should not equate these exaggerated p - values with sensitivity . for microarray data , the ratios are almost symmetrically distributed about 0 ( supplementary fig.s4 ) .the distribution of their sum is close to gaussian .however , t - profiler still significantly exaggerates p - values for terms whose ( supplementary fig.s2 ) .the statistical accuracy of t - profiler worsens when negative ratios are set to 0 .the reason for doing so is that allowing weights within each term to cancel each other may not be biologically appropriate .go terms may cover a very general category where annotations may not always be available for more specific subterms .subsequently , terms may get refined and new terms may emerge .in such situation , it is desirable to discover terms that have genes that are significantly up - regulated even if many genes from the same term are down - regulated .p - value accuracy , although the most important criterion , measures only performance with respect to non - significant hits , that is , the likelihood of a false positive .it is also necessary to consider the quality of enrichment results in terms of the underlying biology .testing the quality directly , as described in the introduction , is not yet feasible .instead we evaluated internal consistency of each method with respect to the number of top - ranked entities used for analysis .fig.[fig : mainexamples1]a shows the change of p - values reported for the top five go terms with respect to the number of selected entities using two examples with weights respectively from network flow simulation and microarray .additional examples are shown in supplementary fig.s3 .results from evaluating the overall consistency of the best ten terms retrieved are shown in fig.[fig : mainexamples1]b . both hgem and mhg methods are highly sensitive to the choice of , the number of entities deemed significant . 
with a small ,their sets of significant terms resemble the top terms obtained by saddlesum , while large values of render very small p - values for large - sized terms ( often biologically non - specific ) .this is mainly because hgem and mhg treat all selected significant entities as equally important without weighting down less significant entities , the collection of which may out vote the most significant ones .hence , although mhg considers all possible values , to obtain biologically specific interpretation , it might be necessary to either remove very large terms from the vocabulary or to impose an upper bound on . in that respect, mhg is very similar to the original gsea method , which also ignored weights .the authors of gsea noted that the genes ranked in middle of the list had disproportionate effect to their results and produced an improved version of gsea with weights considered .gage does not show strong consistency because many p - values it reports are too conservative and fall above the 0.01 threshold we used .consequently , the best overlap between various cutoffs is about 5 ( out of 10 ) for network flow examples and 4 for microarray examples( fig.[fig : mainexamples1]b ) .t - profiler shows great internal consistency .unfortunately , as shown in fig.[fig : stats1 ] , supplementary figs.s1 and s2 , it reports inaccurate p - values , especially for small terms .this is illustrated in the top panel of fig.[fig : mainexamples1]a , where t - profiler selects as highly significant the small terms ( with 5,6 and 9 entities ) , which are deemed insignificant by all other methods .the same pattern can be observed in supplementary fig.s3 , although the severity is tamed for microarrays . using weights for scoring terms , saddlesum is also stable with respect to the choice of but with accurate statistics . in terms of algorithmic running time ( table [ tbl : speed ] ) , parametric methods relying on normal or student st - distribution require few computations .methods based on hypergeometric distribution , if properly implemented , are also fast . on the other hand ,non - parametric methods can take significant time if many shufflings are performed . based on dynamic programming, mhg method can also take excessive time for large terms .saddlesum has running time that is only slightly longer than that of parametric methods . .running times of evaluated enrichment statistics algorithms ( in seconds ) .we queried go ten times with each of the five examined enrichment methods using weights from 100 network simulation results and 136 microarrays ( same datasets used for p - value accuracy experiments ) . running times for p - value calculations on dual - core 2.8 ghz amd opteron 254 processors ( using a single core for each run ) aggregated over all samplesare shown on the left , while average times per query are shown on the right . 
The HGEM method used a 100-object cutoff.

Approximating the distribution of the sum of weights by the saddlepoint method, our SaddleSum is able to adapt itself equally well to distributions with widely different properties. The reported p-values have accuracy comparable to that of the methods based on the hypergeometric distribution, while requiring no prior selection of the number of significant entities. While our results show that the GAGE method suffers from reduced sensitivity, it should be noted that it forms only a part of the GAGE algorithm. GAGE was designed to compare two groups of microarrays (for example, disease and control) by obtaining an overall p-value. In that scheme, the p-values we evaluated are used only for one-on-one comparisons between members of the two groups. By combining one-on-one p-values (which are assumed independent), the overall p-value obtained by GAGE can become quite small. The null distribution assumed by t-profiler is close to Gaussian. It has been commented that its statistics are similar to those of PAGE, which uses a z-test. Naturally, the smallest, and likely exaggerated, p-values occur when evaluating small terms. For that reason, PAGE does not consider terms with fewer than 10 entities, which we included in our evaluation solely for the purpose of comparison. Our network simulation experiments produce very different weight profiles (Supplementary Fig. S4) from those of microarrays. These weights are always positive and skewedly distributed. Even after summing many such weights, the distribution of the sum is still far from Gaussian in the tail. Therefore, t-profiler and GAGE are unable to give accurate statistics. Overall, our evaluations clearly illustrate the inadequacy, even for large terms, of assuming a nearly Gaussian null distribution when the data is skewed. While the central limit theorem does guarantee convergence to a Gaussian for large term sizes, the convergence may not be sufficiently fast in the tail regions, which influence the statistical accuracy the most.
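This slow tail convergence can be checked directly with a toy simulation (not taken from the paper): for positively skewed weights, here exponential, even a sum of a few dozen terms has a tail probability that the Gaussian approximation underestimates by roughly an order of magnitude.

```python
import numpy as np
from scipy.stats import norm

def skewed_tail_vs_gaussian(m=20, n_sd=4.0, n_samples=500_000, seed=0):
    """Compare the empirical tail probability of a sum of m Exp(1) weights,
    evaluated n_sd standard deviations above its mean, with the Gaussian
    (central limit theorem) approximation norm.sf(n_sd)."""
    rng = np.random.default_rng(seed)
    sums = rng.standard_exponential((n_samples, m)).sum(axis=1)
    score = m + n_sd * np.sqrt(m)      # the sum has mean m and std sqrt(m)
    return float(np.mean(sums >= score)), float(norm.sf(n_sd))
```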
as presented here, saddlesum uses given both for estimating the -dependent score distribution and for scoring each term .if a certain distribution of weights are prescribed , it is possible to adapt our algorithm to take a histogram for that distribution as input and use experimentally obtained weights for scoring only .a possible way to improve biological relevance in retrieval is to allow for term - specific weight assignment .for example , a gene associated with a go term can be assigned a ` not ' qualifier to indicate explicitly that this gene product is not associated with the term considered .a way to use this information would be to change the sign of the weight for such a gene ( from positive to negative or vice versa ) , but only when scoring the terms where the qualifier applies .hence , potentially every term could be associated with a specific weight distribution .while all methods using weights can implement this scheme , saddlesum is particularly suitable for it because it handles well the small terms and skewed distributions , where changing the sign for a single weight can have a considerable effect .this procedure can be generalized so that each gene in a term carries a different weight .several authors have raised the issue of correlation between weights of entities : generally the weights of biologically related genes or proteins change together and therefore a null model assuming independence between weights may result in exaggerated p - values . in principle , a good null model is one that can bring out the difference between signal and noise .to what level of sophistication a null model should be usually is a trade - off between statistical accuracy and retrieval sensitivity . using protein sequence comparison for example ,ungapped alignment enjoys a theoretically characterizable statistics but is not as sensitive as the gapped alignment , where the score statistics is known only empirically because the null model allows for insertions and deletions of amino acids . 
incorporating insertion and deletion into the null model made all the difference in retrieval sensitivity .this is probably because insertions / deletions do occur abundantly in natural evolution of protein sequences .the ignorance of protein sequence correlations , assumed by both ungapped and gapped alignments , does not seem to cause much harm in retrieval efficacy .although saddlesum assumes weight independence and thus bears the possibility of exaggerating statistical significance of an identified term , it mitigates this issue by incorporating the entire in the null distribution .it includes the entities with extreme weights that clearly represent ` signal ' and not ` noise ' , bringing higher the tail of the score distribution and thus larger p - values .indeed , as shown by examples in fig.[fig : mainexamples1]a and supplementary fig.s3 , saddlesum does not show unreasonably small p - values .it should also be noted that saddlesum is designed for the simple case where a summary value is available for each entity considered its use for analyzing complex microarray experiments with many subjects divided into several groups is beyond the scope of this paper and care must be exercised when using it in this context .saddlesum is a versatile enrichment analysis method .researchers are free to process appropriately their experimental data , produce a suitable as input , and receive accurate term statistics from saddlesum .since it does not make many assumptions about the distribution of data , we foresee a number of additional applications not limited to genomics or proteomics , for example to literature searches .this work was supported by the intramural research program of the national library of medicine at national institutes of health .this study utilized the high - performance computational capabilities of the biowulf linux cluster at the national institutes of health , bethesda , md .( http://biowulf.nih.gov ) .we thank john wootton and david landsman for useful comments , roy navon for providing us with the java source code for the statistical algorithms of gorilla , weijun luo for his help with using gage package and the anonymous referees for comments that helped improve the first version of this paper .al - shahrour , f. _ et al . _( 2007 ) . from genes to functional classes in the study of biological systems . , * 8 * , 114 .altschul , s. f. _ et al . _gapped blast and psi - blast : a new generation of protein database search programs ., * 25 * , 33893402 . ashburner , m. _ et al . _ ( 2000 ) . gene ontology : tool for the unification of biology . the gene ontology consortium . , * 25 * , 2529 .backes , c. _ et al . _( 2007 ) . advanced gene set enrichment analysis ., * 35*(web server issue ) , w186192 .barrett , t. _ et al . _( 2009 ) . ., * 37*(database issue ) , d885890 .ben - shaul , y. _ et al . _( 2005 ) . identifying subtle interrelated changes in functional gene categories using continuous measures of gene expression . , *21*(7 ) , 11291137 .benjamini , y. and hochberg , y. ( 1995 ) . controlling the false discovery rate : a practical and powerful approach to multiple testing ., * 57 * , 289300 .bleistein , n. ( 1966 ) .uniform asymptotic expansions of integrals with stationary points and algebraic singularity . , * 19 * , 353370 .blom , e .-_ et al . _( 2007 ) . : functional information viewer and analyzer extracting biological knowledge from transcriptome data of prokaryotes . , * 23*(9 ) , 11611163 .boorsma , a. _ et al . 
_: scoring the activity of predefined groups of genes using gene expression data ., * 33*(web server issue ) , w592595 .boyle , e. i. _ et al . _open source software for accessing gene ontology information and finding significantly enriched gene ontology terms associated with a list of genes . ,* 20 * , 37103715 .breitkreutz , b. _ et al . _( 2008 ) . ., * 36*(database issue ) , d637640 .breitling , r. _ et al . _iterative group analysis ( iga ) : a simple tool to enhance sensitivity and facilitate interpretation of microarray experiments ., * 5 * , 34 .breslin , t. _ et al . _( 2004 ) . comparing functional annotation analyses with catmap . , * 5 * , 193 .daniels , h. e. ( 1954 ) .saddlepoint approximations in statistics . , * 25 * , 631650 .daniels , h. e. ( 1987 ) . tail probability approximations . , * 55*(1 ) , 3748 .eden , e. _ et al . _discovering motifs in ranked lists of dna sequences ., * 3*(3 ) , e39 .eden , e. _ et al . _( 2009 ) . : a tool for discovery and visualization of enriched go terms in ranked gene lists . ,* 10 * , 48 .goeman , j. and bhlmann , p. ( 2007 ) . . ,* 23*(8 ) , 980987 . gold , d. _ et al . _( 2007 ) . ., * 8*(2 ) , 7177 .hochberg , y. and tamhane , a. c. ( 1987 ) . .huang , d. w. _ et al . _bioinformatics enrichment tools : paths toward the comprehensive functional analysis of large gene lists . , * 37*(1 ) , 113 .jensen , j. l. ( 1995 ) . .clarendon press , oxford .karlin , s. and altschul , s. f. ( 1990 ) .methods for assessing the statistical significance of molecular sequence features by using general scoring schemes . , * 87 * , 22642268 .kim , s .- y . and volsky , d. j. ( 2005 ) .: parametric analysis of gene set enrichment . , * 6 * , 144 .lugannani , r. and rice , s. ( 1980 ) .saddle point approximation for the distribution of the sum of independent random variables ., * 12*(2 ) , 475490 .luo , w. _ et al . _: generally applicable gene set enrichment for pathway analysis . , * 10 * , 161 .mootha , v. k. _ et al . _genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes . , *34*(3 ) , 267273 . press , w. h. _ et al . _( 2007 ) . .cambridge university press , 3 edition .sharma , k. _ et al . _proteomics strategy for quantitative protein interaction profiling in cell extracts .* 6*(10 ) , 741744 . smid , m. and dorssers , l. c. j. ( 2004 ) .: functional analysis of gene expression data using the expression level as a score to evaluate gene ontology terms ., * 20*(16 ) , 26182625 .stojmirovi , a. and yu , y .- k .information flow in interaction networks ., * 14*(8 ) , 11151143 .stojmirovi , a. and yu , y .- k .: analyzing information flow in protein networks ., * 25*(18 ) , 24472449 .subramanian , a. _ et al . _gene set enrichment analysis : a knowledge - based approach for interpreting genome - wide expression profiles . , *102*(43 ) , 1554515550 .sultan , m. _ et al . _( 2008 ) . a global view of gene activity and alternative splicing by deep sequencing of the human transcriptome . ,* 321*(5891 ) , 956960 .wood , a. t. a. _ et al . _saddlepoint approximations to the cdf of some statistics with nonnormal limit distributions . , *88*(422 ) , 680686 .references about saddlepoint approximations of the tail probabilities of random variables are abundant . 
for completeness of our expositionwe here present the derivation of the lugannani - rice formula , relying extensively on expositions by daniels and woods , booth and butler .let be a continuous random variable supported on a subset of .we will assume that its probability density function ( pdf ) , denoted by exists and that its moment generating function ( mgf ) , defined by converges for real ] in -space into the region ] and ] and $ ] .the integral ( [ eq : inv_formula2 ] ) now transforms ( using cauchy s theorem ) into where . for small , we can write when and hence , differentiating ( [ eq : transform2 ] ) we obtain while when , ( [ eq : expansion1 ] ) implies when is small . thus , and therefore , for small , where is a constant .let by expanding about , it can be shown that , and , since is analytic , is analytic in the neighborhood of that includes . therefore , we can rewrite the integral ( [ eq : inv_formula3 ] ) as the singularity has now been isolated into ( [ eq : inv_formula4 ] ) , which , by comparing with ( [ eq : inv_formula2 ] ) , we recognize to equal .on the other hand , can be expanded as a taylor s series around the saddlepoint and integrated to obtain an asymptotic series for ( [ eq : inv_formula5 ] ) .for the first - order approximation , that is , the leading behavior , we only take the constant term at .let then , and the integral ( [ eq : inv_formula5 ] ) becomes thus , we have obtained the lugananni - rice formula : with given by ( [ eq : saddle1 ] ) and ( [ eq : hatz2 ] ) .as mentioned in the main text , our saddlesum algorithm approximates term p - values by first solving eq.([eq : saddle1 ] ) for using newton s method and then using the lugananni - rice formula ( [ eq : tail ] ) .the key step is estimation of .since the moment - generating function of the underlying space is not known , we estimate it ( and its derivatives ) using . given sufficiently many weights ( ), the results can be quite accurate ( see below ) .one limitation of this approach is that our approximation can only accept scores not greater than times maximal weight ( becomes infinite at this bound ) .thus , the approximation can be inaccurate for very large scores , causing a larger than usual relative error in p - values ( fig .however , occurence of such extreme scores is rarely seen in practice .theoretically , lugananni - rice formula is valid over the whole range of the distribution , for small and large scores and both near the mean and in the tails .however , the form ( [ eq : tail ] ) becomes numerically unstable close to the mean of the distribution ( i.e. when is close to 0 ) .alternative asymptotic approximations exist that are numerically stable near the mean . for saddlesum , we were mainly interested in the tail probabilities and we therefore decided not to attempt to approximate the p - values of the scores smaller than one standard deviation from the mean ( saddlesum returns p - value of 1 for all such scores ) .terms with such scores are never significant in the context of enrichment analysis .when processing a terms database , we retain previously computed values of with associated scores and parameters for lugananni - rice formula in a sorted array . 
since and the p - value are monotonic with respect to the score , using binary search we can certify for many terms that their p - value is larger than a given cutoff and hence eliminate them without running newton s method .furthermore , binary search provides a bracket for and hence newton s method usually converges in very few iteration .we use the bracketed version of newton s method recommended in the numerical recipes book ( section 9.4 ) .this combines the classical newton s method with bisection and has guaranteed global convergence .we show evaluations of saddlesum performance against some theoretically well - characterized distributions in fig .s5 and s6 .it can be seen that the relative error between the saddlesum approximation and the theoretical p - value is generally very small except for extremely large scores , when p - values are very small . in the context of the enrichment analysis ,this discrepancy is not important because such terms will be evaluated as highly significant even if the p - value is off by few orders of magnitude . to further illustrate the quality of our approximation ,we have computed the kullback - leibler ( kl ) divergences ( relative entropies ) between the tail distribution implied by saddlesum and the theoretical distribution .prior to computation of kl divergence , both distributions were normalized over the region where saddlesum is valid ( i.e. the tail with scores larger than one standard deviation over the mean ) .all kl divergence values are extremely small and are comparable between distributions .s7 shows relative errors of saddlesum compared to the empirical distributions using the same weights and term sizes as for fig .s1 and s2 . in this casehowever , in agreement with the null model of saddlesum , we sampled weights with replacement .our results indicate that , except for small with weights coming from network flow simulations , the relative error of the saddlesum is similar to that obtained in comparison with well - characterized distributions .
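The caching strategy described above can be sketched minimally as follows. This is not the actual SaddleSum data structure; it assumes that, for a fixed term size, the p-value is non-increasing in the score, and it only shows the cutoff-based rejection (the cached neighbours could equally be used to bracket the Newton iteration).

```python
import bisect
from collections import defaultdict

class SaddlepointCache:
    """Per-term-size cache of already computed (score, p-value) pairs."""

    def __init__(self):
        self._scores = defaultdict(list)    # term size -> sorted scores
        self._pvalues = defaultdict(list)   # term size -> p-values, same order

    def insert(self, m, score, pvalue):
        i = bisect.bisect_left(self._scores[m], score)
        self._scores[m].insert(i, score)
        self._pvalues[m].insert(i, pvalue)

    def can_skip(self, m, score, cutoff):
        """True if some cached score >= `score` already has p-value > cutoff,
        in which case the p-value at `score` must exceed the cutoff as well."""
        scores, pvalues = self._scores[m], self._pvalues[m]
        i = bisect.bisect_left(scores, score)
        return i < len(scores) and pvalues[i] > cutoff
```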
Term enrichment analysis facilitates biological interpretation by assigning, to experimentally or computationally obtained data, annotations associated with terms from controlled vocabularies. This process usually involves obtaining the statistical significance of each vocabulary term and using the most significant terms to describe a given set of biological entities, often associated with weights. Many existing enrichment methods require selection of an (arbitrary) number of the most significant entities and/or do not account for the weights of entities. Others either mandate extensive simulations to obtain statistics or assume a normal weight distribution. In addition, most methods have difficulty assigning correct statistical significance to terms with few entities. Implementing the well-known Lugannani-Rice formula, we have developed a novel approach, called SaddleSum, that is free from all the aforementioned constraints, and we evaluated it against several existing methods. With entity weights properly taken into account, SaddleSum is internally consistent and stable with respect to the choice of the number of most significant entities selected. Making few assumptions on the input data, the proposed method is universal and can thus be applied to areas beyond the analysis of microarrays. Employing an asymptotic approximation, SaddleSum provides a term-size-dependent score distribution function that gives rise to accurate statistical significance even for terms with few entities. As a consequence, SaddleSum enables researchers to place confidence in its significance assignments to small terms that are often biologically most specific. Our implementation, which uses the Bonferroni correction to account for multiple hypothesis testing, is available at http://www.ncbi.nlm.nih.gov/cbbresearch/qmbp/mn/enrich/. Source code for the standalone version can be downloaded from ftp://ftp.ncbi.nlm.nih.gov/pub/qmbpmn/saddlesum/.

yyu.nlm.nih.gov

Robust and accurate data enrichment statistics via distribution function of sum of weights

Aleksandar Stojmirović and Yi-Kuo Yu

National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, United States
let us suppose that we are dealing with a system with deterministic dynamics , on which external noise is acting .that means that , in case we could remove the noise ( by switching it off if possible , by isolating the system , etc ) , its equations of motion would be purely deterministic : now , let us consider that gaussian white noise is acting on the system . to include its effects, we add to ( [ eq : deterministic ] ) a term : where the functions of are determined by some physical considerations , and the components of are independent wiener processes .a wiener process is a gaussian stochastic process , almost surely continuous , where non - overlapping increments are independent , and with equation ( [ eq : stochastic ] ) can be written as an issue arises when defining the integral over the wiener processes .if we take a partition of the interval ] that must be chosen as the biggest time interval that we will use among all our simulations , and starting with given initial conditions for the system , the final point obtained with a good integration scheme , using a number of time steps big enough , must be close to the real solution of the differential equation , at time ( for that particular realisation of the wiener process and for this particular initial condition ) .of course , when selecting any other realisation of the wiener process and any other initial condition , the final point at time obtained by the integration method must be close to the one of the real solution as well . of course, we can not know the value of obtained with the real solution but , still , we can test whether the integration method gives something close enough to it , by studying the self - consistence .if we have chosen a small number of integration steps ( a big value for the time step ) , the obtained will not be reliable and , when repeating the integration with more time steps ( but the same wiener process and the same initial conditions ) , we will obtain a new which will considerably differ from the previous one . on the other hand , we can be reasonably sure that we have already chosen a number of integration steps big enough , and therefore we have obtained a reliable value of if , after repeating the integration once more with a considerably bigger number of time steps ( e.g. , twice the former number of time steps , with same wiener process and same initial conditions ) , the new obtained stays close to the value previously obtained .we are using here this method of self - consistency via the _ brownian tree _ : we will start with the maximum number of timesteps , for a chosen natural number , and we will then progressively reduce the number of integration steps by a half each time , i.e. , we will integrate with , , , etc , time steps , until the desired minimum power of 2 ( that can be , if desired , as little as , i.e. , a single time step ) . given the discretised wiener process for time steps , we obtain the discretisation of the same wiener process for half the number of time steps by summing up the two members of each couple , i.e. sometimes , one may first take a signal of gaussian white noise with mean zero and variance 1 ( i.e. 
, is distributed as ) , so the integration routine gets the increments of the wiener process from the 1correlated signal as , where is the time step that is used at that moment to integrate the differential equation ( see introduction to section [ sec : integration_methods ] ) .then , when halving the total number of time steps ( and therefore doubling the time step to ) , the new signal still has to be 1correlated , so we have to make ,\quad j=0 , \dots , 2^{n-1}-1,\ ] ] so that is 1correlated as well as . also , that way , given that and , equation ( [ eq : brownian_correl_steps ] ) yields ( [ eq : brownian_wiener_steps ] ) . for a desired precision for the -th component in the stochastic differential equation , we will consider that is a number of steps big enough if . in order to check that such accuracy holds for different realisations of the noise, we will repeat the comparison of the and with different brownian paths : we call to the trajectory with the wiener process .we will always use the same initial conditions . as a result, we have for each component and each exponent a set of values consisting of the differences between the state at the final time with integration steps minus the state with integration steps : each value corresponds to each realisation of the wiener process . to be clear : we have for each component and each exponent a set , where . from each set , we can assess the reliability of the integration with time steps .normally , if is not too small , the mean value of those differences will be around 0 , otherwise we can say that there is a preferred sign in the values of the differences and , therefore , a noticeable systematic error at the numerical integration. then , the quality of the numerical integration is better the smaller the absolute values of the differences , i.e. , the narrower the distribution , and this being peaked around 0 . a way of measuring this is to obtain the mean and some central momenta , of the distribution .given that , for a number of integration steps big enough , the absolute values of the differences should be as small as possible , we can say that the reliability is better the smaller ( in absolute value ) the mean and the central momenta . 
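The whole procedure can be sketched compactly in Python. The integrator below uses the stochastic Heun predictor-corrector scheme (for a Stratonovich SDE with diagonal noise) purely as an example; the same driver works unchanged for the Milstein variants. The function names, the moment orders reported and the random-number handling are our choices, not the paper's code.

```python
import numpy as np

def heun_path(f, g, x0, t0, T, dW):
    """Integrate dx = f(x,t) dt + g(x,t) o dW (Stratonovich, diagonal noise)
    with the stochastic Heun predictor-corrector scheme, using the supplied
    Wiener increments dW of shape (n_steps, dim)."""
    n_steps = dW.shape[0]
    dt = (T - t0) / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = t0 + k * dt
        drift, diff = f(x, t), g(x, t)
        x_pred = x + drift * dt + diff * dW[k]                    # predictor
        x = (x + 0.5 * (drift + f(x_pred, t + dt)) * dt           # corrector
               + 0.5 * (diff + g(x_pred, t + dt)) * dW[k])
    return x

def wiener_increments(n_steps, t0, T, dim, rng):
    """Increments of independent standard Wiener processes on a uniform grid."""
    dt = (T - t0) / n_steps
    return rng.normal(0.0, np.sqrt(dt), size=(n_steps, dim))

def coarsen(dW):
    """Same Brownian paths on half as many steps: sum the increments pairwise."""
    return dW[0::2] + dW[1::2]

def convergence_moments(f, g, x0, t0, T, n_max_exp, n_paths, rng=None):
    """Brownian-tree self-consistency check: for each path, compare the final
    state obtained with 2**k steps against the one obtained with 2**(k+1)
    steps (same Wiener path), then report the mean and the central moments of
    orders 2 to 4 of the differences over all paths, per component and per k."""
    rng = np.random.default_rng(rng)
    dim = len(x0)
    diffs = {k: [] for k in range(n_max_exp)}          # k labels the coarser level
    for _ in range(n_paths):
        dW = wiener_increments(2 ** n_max_exp, t0, T, dim, rng)
        finals = {n_max_exp: heun_path(f, g, x0, t0, T, dW)}
        for k in range(n_max_exp - 1, -1, -1):
            dW = coarsen(dW)
            finals[k] = heun_path(f, g, x0, t0, T, dW)
        for k in range(n_max_exp):
            diffs[k].append(finals[k + 1] - finals[k])
    stats = {}
    for k, d in diffs.items():
        d = np.asarray(d)                              # shape (n_paths, dim)
        mean = d.mean(axis=0)
        central = [((d - mean) ** p).mean(axis=0) for p in (2, 3, 4)]
        stats[k] = (mean, *central)
    return stats
```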
of course , a number of time steps that is acceptable for a given integration method ( heun , milstein , etc ) will be in general insufficient or , contrarily , higher than necessary , for another integration method .we are using here a dynamical system as a model for paleoclimate .our system consists of three variables : the volume of the ice , the concentration , and the variable that is related to the ocean s temperature and circulation .the oscillatory astronomical forcing acting on the system can be approximated by two functions : carrying the effect of the precession of the earth s axis , and carrying the effect of the obliquity .both functions are approximated by a sum of harmonic oscillations , , where the parameters , and are arranged as rows in two data files ( a file for precession and another one for obliquity ) , and we sum for each function the number of oscillations that we like according to the desired precision and computation time .nine parameters appear in the equations that govern the deterministic system : an offset in the coupling of the variable to the ice volume , a coupling of the ice volume to the obliquity , a coupling of the ice volume to the precession , an offset in the coupling to the ice volume , a coupling of the to the precession , the relaxation times of each variable , , , and a parameter a role in the coupling to the variable . we will add to each equation a gaussian white noise variable : , , to the variables , , respectively .the functions are independent , with zero mean and variance one ( the actual variances of the noises will be included in the functions that multiply the s ) : ( these functions correspond to the wiener processes in ( [ eq : stochastic ] ) . ) after inserting the noise , we will also have to take into account the variances of the noises , and maybe some more extra parameters in the functions that couple the variables , , to the noises ( i.e. , the functions that multiply the noises ) .putting all things together , the equations of the system read : + g_v(v , t)\ , \eta_v\nonumber\\ \frac{\mathrm{d}c}{\mathrm{d}t } & = & - \frac{1}{\tau_c } ( c + v -\frac{1}{2}d - c_p\ , \pi)+ g_c(c , t)\ , \eta_c\nonumber\\ \frac{\mathrm{d}d}{\mathrm{d}t } & = & - \frac{1}{\tau_d } [ \varphi_3 ( 2d ) - ( v - v_t)]+ g_d(d , t)\ , \eta_d,\end{aligned}\ ] ] where the function is defined as : and the potentials are : as stated before , the variance of each noise is taken into the function that multiplies it . in the simplest case of additive noise ,we just make , where is the variance of the noise acting on the component .it is worth mentioning that we are performing two corrections by hand " at each integration step . on the first hand , in order to prevent , which would be devoid of physical meaning , we shall make by hand whenever we end up with after an integration step . 
On the other hand, in an attempt to reduce computing divergences, we bound one of the variables, clamping it back whenever an integration step takes it outside the allowed range.

When coding, we have of course tried to avoid redundant calculations as much as possible. This especially includes the quantities that depend only on time (and not on the state variables). Given that, in most cases, we will be propagating many particles, we first obtain once and for all the values that depend only on time and are therefore the same for all the particles. We start by generating the array of all the times at which the integration steps will take place, and then we calculate at all such times: the precession, the obliquity, and also the combination appearing in ([eq:function_r]). This implies passing these three extra parameters to all the functions (instead of passing just the parameter time) and adds complexity to the code, but it saves invaluable time when running the programme, especially when the number of particles is high. For the tests that follow in the next sections, we took 50 terms for the precession and 20 terms for the obliquity.

Given that we are comparing three methods of numerical integration, we do not want to stay in the simple case of additive noise, but want to test the different methods in the more general case of multiplicative noise. We therefore use a simple kind of multiplicative noise in the coupling functions. The parameter values used here for the comparison of the numerical integration schemes (one unit of time corresponding to 1000 years) and the initial conditions of the three variables were kept fixed across all runs.

The code was written in C and compiled with the Intel compiler (icc) in order to take advantage of vectorisation. The compilation command was `ice && icc file.c -L/opt/gsl/lib -lgsl -lgslcblas -I/opt/gsl/include -o file` (we use the first command, `ice`, to switch to 64 bits). The CPU is an `Intel(R) Xeon(R) X5450 @ 3.00GHz` running on `SUSE Linux 11.0 x86_64`.

Trajectories for three particles were generated using the three integration schemes, with the same Wiener process for the three schemes. We integrated up to time equal to 2000, using the same number of time steps per trajectory for each scheme. The time used to generate the data file for each integration scheme on the computing scenario described above was:

* Heun scheme: 0.335 seconds.
* Milstein scheme: 0.304 seconds.
* Derivative-free Milstein scheme: 0.357 seconds.

Nevertheless, such small times are not reliable for comparing the different integration schemes in terms of time expense, given that execution times vary from one run to another of the same executable file, depending on the other jobs running on the machine. We will compare the time expenses in section [subsec:precision_brownian].

In order to check the agreement between the different integration schemes, we have plotted, in figures [fig:trajectories-heun-milstein] and [fig:trajectories-heun-milstein-df], a two-by-two superposition of the same trajectories generated with the different integration schemes.
for clarity, we have only plotted the second half , from time 1000 to time 2000 .the space between consecutive points has been intentionally made relatively large , in order to be able to see the overlapping in the same trajectory obtained from two different integration schemes .we can see in figures [ fig : trajectories - heun - milstein ] and [ fig : trajectories - heun - milstein - df ] that all the trajectories overlap perfectly with the corresponding same trajectory obtained with the other integration scheme .this says that the three integration methods are reliable for the number of integration steps stated above ( ) .now , we will make in section [ subsec : precision_brownian ] a quantitative study of accuracy versus time expense of each integration scheme . to compare the rate of strong convergence of the three integration schemes, we followed the method described in section [ sec : convergence_check ] .we integrated particles from time 0 to time 400 with each integration scheme .the maximum number of time steps was , and the minimum ( the biggest power of 2 , as we will see , that is small enough to cause divergences in the numerical integration for the three schemes ) . the same wiener processes were used for the three integration schemes . for the sample of differences , at the final time 400 , between the integrations with and time steps , we obtain the mean and central momenta ( [ central_momenta ] ) , for .the time used to generate the data file for each integration scheme on the computing scenario described above was : * heun scheme : real 4m44.192s , user 4m29.721s , sys 0m4.396s . * milstein scheme : real 3m45.887s , user 3m41.430s , sys 0m4.008s . *derivative - free milstein : real 4m21.324s , user 4m9.640s , sys 0m4.296s .given that the most time consuming task of the programme is the numerical integrations , we can say from the figures above that , for the same number of time steps , the milstein scheme is the fastest method ; the next one would be the derivative - free milstein taking around 1.13 times longer than the former , and the slowest one is the heun scheme taking around 1.22 times longer than the milstein scheme ( beware , though , that these ratios may change if we use another computing environment ) . given that these ratios are close to 1 ( i.e. , the time expense of the integration schemes is roughly the same for the three ) , we shall choose the most accurate of our integration schemes . in order to compare the accuracies , we have rearranged the tables to display together the same relevant variables derived after integrating with the three different methods .the means and central momenta explained above are displayed in tables [ table : mean][table : fourth - momenta ] .the columns are named after the system s variable ( , or ) , and after the integration method : suffix `` -h '' for heun , `` -m '' for milstein , and `` -df '' for derivative - free milstein .the names of the rows correspond to `` the small power of 2 '' , i.e. , means that we are considering the distribution of the differences , at the final time 400 , between the integrations with and time steps : small absolute values in the mean and central momenta indicate that the integration with time steps is in principle reliable . also , for the sake of visual clarity , all the means and central momenta have been multiplied by before displaying them in the tables ..mean values times ; for the different variables , integration methods , and time steps . 
See the text for an explanation of the notation.

We can see that the numerical integration yields divergences when done with a small number of time steps: in our case, they appear once the number of integration steps is reduced below a scheme-dependent threshold, which differs between the two Milstein methods and the Heun method. As stated in section [subsec:manual-corrections], we have tried to remove the divergences coming from the equations themselves, so we should think that the divergences found are due, in principle, only to integrating with too few, too big, time steps.

If we check a given row (corresponding to a given number of integration steps) in tables [table:mean]-[table:fourth-momenta], we see that the mean and central momenta for the Heun scheme are always smaller (in absolute value) than the figures for the other two schemes (with a couple of exceptions in table [table:third-momenta]). Not only smaller: the figures for the Heun scheme are almost always (except for some rows corresponding to a small number of time steps) at least _one order of magnitude smaller_ than the figures for the two Milstein schemes. If we were too strict, one might object that the Heun scheme is (just slightly) more expensive in terms of computing time; however, in most cases, the figure for the Heun scheme in a given row of the tables is smaller than the figure for the other schemes displayed one row above (which corresponds to an integration with double the number of time steps), so the slightly larger time expense of the Heun scheme is largely made up for by the accuracy of the method. As a result, we can state that the Heun method is the best performing one, as we get much better accuracy, for a given computation time, than with the Milstein methods.

The better performance of the Heun method was to some extent to be expected, given that, in the case of additive noise, the Milstein scheme reduces to the basic Euler scheme (as the derivative in ([milstein_diagonal]) vanishes), whereas Heun's scheme is always (also for additive noise, and even for no noise at all) a second-order predictor-corrector method. Also, the idea of Heun's method is easy to understand, and the method is easy to code (in particular, it does not use any derivative). All this makes Heun's scheme a very suitable and attractive method in our opinion.

The author would like to thank Michel Crucifix (Université catholique de Louvain) and Jonathan Rougier (University of Bristol) for useful advice. The work was funded by the ERC Starting Grant "Integrated Theory and Observations of the Pleistocene".
Three schemes, whose expressions are not too complex, are selected for the numerical integration of a system of stochastic differential equations in the Stratonovich interpretation: the integration methods of Heun, Milstein, and derivative-free Milstein. The strong (path-wise) convergence is studied for each method by comparing the final points obtained after integrating with a given number of time steps and with twice that number. We also compare the time that the computer takes to carry out the integration with each scheme. Putting both things together, we conclude that, at least for our system, the Heun method is by far the best performing one.
codes are the first structured codes ( as opposed to random codes ) that provably achieve the symmetric capacity of binary - input memoryless channels ( bmcs ) .this capacity - achieving code family is based on a technique called channel polarization . given a bmc , after performing the channel transform , which consists of the channel combining and splitting operations , over a set of independent copies of , a second set of synthesized channels is obtained . as the transformation size, i.e. , the number of channel uses participated in the transform , goes to infinity , some of the resulting channels tend to be completely noised , and the others tend to be noise - free , where the fraction of the channels approaches the symmetric capacity of . by transmitting free bits over the noiseless channels and sending fixed bits over the others , polar coding with a very large code length can achieve the symmetric capacity under a successive cancellation ( sc ) decoder with both encoding and decoding complexity . to construct a polar code , the capacities ( or equivalently , reliabilities ) of the polarized channelscan be estimated efficiently by calculating bhattacharyya parameters for binary - input erasure channels ( becs ) .but for channels other than becs , computationally expensive solutions based on density evolution ( de ) and other modified methods are required to calculate the channel reliabilities , .although polar codes are asymptotically capacity achieving , the performance under the sc decoding is unsatisfying in the practical cases with finite - length blocks .several improved sc decoding schemes have been proposed to improve the finite - length performance of polar codes .the successive cancellation list ( scl ) decoding , and successive cancellation stack ( scs ) decoding algorithms are introduced to approach the performance of ml decoding with acceptable complexity . by regarding the improved sc decoding algorithms as path search procedures on the code tree , the scl and scs decoding are the `` width - first '' and the `` best - first '' search , respectively . to provide a flexible configuration under the constraint of both the time and space complexities , an decoding algorithm called the successive cancellation hybrid ( sch )is proposed by combining the principles of scl and scs . moreover , under these improved sc decoding algorithms , polar codes are found to be capable of achieving the same or even better performance than turbo codes or low - density parity - check codes with the help of cyclic redundancy check ( crc ) codes . therefore , polar codes are believed to be competitive candidates in future communication systems . shortly after the polar code was firstly put forward , the channel polarization phenomenon has been found to be universal in many other signal processing problems , such as multiple access communications , source coding , information secrecy and other settings . to improve the spectral efficiency , a -ary polar coded modulation scheme is provided in . by regarding the dependencies between the bits which are mapped to a single modulation symbol as a kind of channel transformation , the polar coded modulation ( pcm ) scheme is derived under the framework of multilevel coding .it is shown in that this polar coded modulation scheme can outperform the turbo coded modulation scheme in the 3gpp wcdma system by up to with -ary quadrature amplitude modulation ( qam ) over additive white gaussian noise ( awgn ) channel . 
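As a small illustration of the construction step mentioned above, the sketch below computes the Bhattacharyya parameters of the synthesized channels for a BEC, for which the recursion is exact, and selects the information set; the channel index convention (with or without bit reversal) is left open here.

```python
import numpy as np

def bec_bhattacharyya(n, z0=0.5):
    """Bhattacharyya parameters of the N = 2**n channels obtained by
    polarizing a BEC with erasure probability z0 (for the BEC, Z equals the
    erasure probability and the recursion below is exact)."""
    z = np.array([z0])
    for _ in range(n):
        znew = np.empty(2 * z.size)
        znew[0::2] = 2 * z - z ** 2   # degraded ("minus") channel
        znew[1::2] = z ** 2           # upgraded ("plus") channel
        z = znew                      # note: bit-reversal conventions vary
    return z

def information_set(z, k):
    """Indices of the k most reliable synthesized channels (smallest Z);
    the remaining indices carry the frozen bits."""
    return np.sort(np.argsort(z)[:k])


# Example: a length-1024, rate-1/2 polar code constructed for a BEC(0.5).
z = bec_bhattacharyya(10, z0=0.5)
info = information_set(z, 512)
```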
in this paper , the channel polarization technique is extended to the multiple - input multiple - output ( mimo ) transmission scenario .similar to the polar coded modulation scheme , the transmission over the mimo channel is further combined into the channel transform . the mimo transmission , modulation and the conventional binary channel polarization form a three - stage channel polarization procedure . based on this generalized channel polarization , a jointly optimized space - time polar coded modulation ( stpcm ) schemeis proposed .the remainder of the paper is organized as follows .introduces the system model concerned in this paper . provides a three - stage channel transform which can be seen as a joint processing of the conventional binary channel polarization , modulation and mimo transmission ; describes the construction , encoding and decoding of the proposed stpcm scheme . evaluates the performance of the proposed stpcm scheme under the rayleigh fading channel through simulations . finally , concludes the paper .in this paper , the capital roman letters , e.g. , , are used to denote random variables . the lowercased letter denotes a realization of . and are the real and image parts of a complex number , respectively .the modulus of is written as .the calligraphic characters , such as and , are used to denote sets , and we use to denote the number of elements in .the cartesian product of and is written as , and stands for the -th cartesian power of .we use notation to denote an -dimensional column vector and to denote a subvector of , .when , is a vector without elements , and this empty vector is denoted by .we write to denote the subvector of with odd indices ( ; is odd ) .similarly , we write to denote the subvector of with even indices ( ; is even ) . for example , for , , , and .further , given an index set , let denote the subvector of , which consists of with .the matrices are denoted by bold letters , e.g. , .the notations and stand for the transpose and conjugate transpose of , respectively .the element in the -th row and the -th column of matrix is written as .the -th column of matrix is written as ; the -th row of is written as , i.e. , the -th column of .furthermore , we write to denote the kronecker product of two matrices and , and to denote the -th kronecker power of . throughout this paper, means `` logarithm to base 2 '' , and stands for the natural logarithm . a block diagram of space - time coded modulation is depicted in .the information bits are coded and modulated into a series of -ary symbols , and then transmitted to the receiver through a mimo system with and antennas within time slots . at the transmitter , a sequence of -length information bits , where , is fed into a binary channel encoder with code rate .the encoded sequence is interleaved into an other binary sequence . after a modulation, the bits are mapped into complex symbols .these symbols are then partitioned into streams with symbols in each stream and respectively transmitted over antennas .the transmitted symbols are represent by a matrix , where the rows and columns are corresponding to the transmitting antennas and time slots , respectively . in this paper , only qam is considered , and the average transmitting power of the transmitted symbols are normalized to one , i.e. , =1 $ ] . at the receiver , antennas are configured .thus , the mimo channel at the -th time slot can be described by a matrix with .the -th column of , i.e. 
the received signals at the -th time slot is where is an additive noise matrix , the elements of which are i.i.d .complex circular gaussian random variables with mean zero and variance , i.e. . in this paper , the channels between all the transmit / receive antenna pairs are assumed to be independent memoryless discrete - time normalized rayleigh fast and uncorrelated fading channels , i.e. , for any time slot , the channel confident of satisfies .we assume that an ideal channel estimation ( instant values of and ) is available at the receiver .furthermore , due to the channel - aware property of polar coding , a precise knowledge of the noise variance is assumed to be available at the transmitter which can be usually obtained from a feedback link . after receiving , a series of signal processes , i.e. , mimo detection , demodulation , de - interleaving and channel decoding , are used to retrieve the information bits .these generalized `` decoding '' operations can be done in either a separately concatenated manner or a jointly combined manner .in this section , after a brief review of the existing works , the channel polarization is extended to the mimo transmission case . under the multilevel coding framework ,a three - stage channel transform is derived , which is the basis of the proposed stpcm scheme . in the initial work of arikan , a mapping is used to denote a bmc channel , where and are the input and output alphabets , respectively . since the channel input is binary , .the channel transition probabilities are , and . after channel combining and splitting operations on independent uses of , we obtain successive uses of synthesized binary input channels , with transition probabilities where where the matrix , in which is the bit - reversal permutation matrix and \ ] ] after this channel transform , part of the resulting channels with becomes better than the original channel , i.e. , where function is the symmetric capacity ( the maximum mutual information between the channel inputs and outputs under uniform input distribution ) ; while the others with become worse , where is the complementary set of .arikan proved it in that when goes to infinity , with , with , and .when dealing with the polar coded modulation problem , the channel becomes , where is the -ary input alphabet , , and is the modulation order .every bits are modulated into a single modulation symbol under a specific one - to - one mapping called constellation labeling thus , the channel can be equivalently written as with transition probabilities where is the inverse mapping of . by regarding the modulation process as a special kind of channel transform , synthesized bmcs can be obtained , where , with transition probabilities after that , a conventional binary - input channel polarization transform is performed on each of the resulting bmcs .finally , a series of polarized bmcs is obtained , where and . in the mimo transmission scenario ,the channel becomes , where and are the alphabet at each transmit or receive antenna , respectively . in order to simultaneously transmit streams, we assume the number of the receive antennas is no less than that of the transmit antennas , i.e. , and the channel matrix is full rank . 
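The binary polarization transform recalled earlier in this section can be written out explicitly. The sketch below builds the n-th Kronecker power of the kernel F = [[1,0],[1,1]] and, optionally, the bit-reversal row permutation; whether the bit-reversal is included is purely a convention choice, and the (8,4) example at the end is only illustrative.

```python
import numpy as np

def polar_transform_matrix(n, bit_reversed=True):
    """G_N = B_N F^{(x)n} over GF(2), with F = [[1,0],[1,1]]; set bit_reversed=False
    to obtain the plain Kronecker power without the bit-reversal row permutation."""
    f = np.array([[1, 0], [1, 1]], dtype=int)
    g = np.array([[1]], dtype=int)
    for _ in range(n):
        g = np.kron(g, f) % 2
    if bit_reversed and n > 0:
        perm = [int(format(i, '0{}b'.format(n))[::-1], 2) for i in range(2**n)]
        g = g[perm, :]
    return g

g8 = polar_transform_matrix(3)               # the 8 x 8 transform
u = np.array([0, 0, 0, 1, 0, 1, 1, 1])       # frozen bits set to 0, information bits elsewhere
x = (u @ g8) % 2                             # one codeword of a toy (8, 4) polar code
print(x)
```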
in each time slot , after transmitting the symbols of via the transmit antennas respectively , the received signal is where with .suppose the data stream are detected in a successive cancellation manner , correlated channels are obtained , where and the transition function similar to the pcm in , a three - stage channel transform is depicted in fig.[fig_polarization ] . after performing the first stage channel transform which is induced by the mimo transmission in ( [ equ_mimo_polarization ] ) , each of the resulting -ary input channels is transformed into a set of binary input channels with , then , by respectively performing binary channel polarization transform on uses of , totally bmcs are obtained , where , , , , and the set the one - to - one mapping from to is jointly determined by the binary channel polarization ( [ equ_binary_polarization ] ) , modulation process ( [ equ_constellation_labeling ] ) and the serial - to - parallel processing . after substituting ( [ equ_wkj ] ) into ( [ equ_wkji_tmp ] ) , a consistent representation of is obtained where construct a practical stpcm scheme , similar to the conventional polar coding in and the pcm in , the most reliable channels of are selected for carrying the information bits . in the existing works where only awgn channels are considered , the channel reliabilities can be evaluated efficiently by using gaussian approximation ( ga ) of de .however , the channel model considered in this paper is rayleigh fast fading channel , and no existing practical solution is available under this scenario .therefore , we first propose a pcm scheme over the rayleigh fast fading channel , where the fading channel is approximated by an awgn channel with identical capacity .after that , a stpcm scheme is derived based on the three - stage channel polarization discussed in the previous section . in this subsection ,the channel transform of the mimo channel under a qr decomposition is proposed . since the channel transform in ( [ equ_mimo_polarization ] )implicates a detecting order of the data streams , the transmitter and receiver should have an agreement on the specific mimo detection solution . in this paper , qr - decompositionis applied to for each channel coefficient matrix at the receiver , where is an unitary matrix , and is an upper triangular matrix with for any and for any , where and .the received signal in ( [ equ_channel ] ) after qr - decomposition detection is where the elements in is still i.i.d gaussian distributed , for any . 
after expanding the matrix operations in ( [ equ_ch_after_detec1 ] ) , we have the transmitted streams can be detected in a decreasing order in the antenna index , i.e., the stream from the -th transmit antenna is first detected , then the -th , , finally the .under such a successive cancellation detection , when dealing with , the term in ( [ equ_ch_after_detec ] ) can be dropped .therefore , the are equivalent to be transmitted over a fading channel with gains .thus the resulting polarized channels \{ } in ( [ equ_mimo_polarization ] ) ( redefined in a reversed order of ) are written as at the transmitter , the detecting order of the transmit streams and the noise variance are assumed to be notified through a feedback link .however , the , or equivalently , is time - varying , and the instantaneous values of the channel coefficients are unavailable at the transmitter when the channel is assumed to be fast fading .this is quite different from the conventional polar coding schemes where the precise knowledge of the channel state is known at both the transmitter and receiver .the reliabilities of can not be precisely evaluated by the existing solutions , so the pcm scheme in also can not be applied directly on each . in the following part of this subsection, we propose to construct pcm schemes by approximating the fading channels using a set of awgn channels which have the same capacities with the originals , and it will be used in the construction of the proposed stpcm scheme in section [ subsec_multilevel_stpcm ] . the -ary qam with constellation is equivalent to two independent -ary pulse amplitude modulations ( pam ) with constellations and , respectively . without loss of generality , we assume the real and image parts of the qam constellation are identical , i.e. , . the symmetric capacity of an awgn channel under -ary qam with noise variance is where under the channel model described in , the channel coefficients are i.i.d normalized circular gaussian distributed .according to ( * ? ? ?* theorem 3.3 ) , twice the square of the elements in are scaled -distributed with degrees of freedom , i.e. , for , where the probability density function ( pdf ) of for a given value and degrees of freedom is where is the gamma function thus , the ergodic capacity of a mimo channel under -ary qam with noise variance can be calculated as where is the ergodic capacity of which is calculated as with . after approximating the fading channel using an awgn channel with , where the code construction and performance evaluationis then performed over each of the equivalent awgn channels in the same way as that in the conventional awgn case . as that will be shown in, the bounds obtained by ga under this equivalence well match the simulated block error rate ( bler ) curves . in this subsection ,a stpcm scheme over qam modulated channels base on qr - decomposition is proposed . as for the scheme construction , independent uses of the mimo channel in time slots are transformed into a series of binary - input channels in a three - stage channel transform .when the code length is finite , pcm under set - partitioning ( sp ) labeling is find to achieve the best performance over pam modulated channel . in this paper ,an identical sp labeling rule is applied on the real and image parts of the qam constellation , respectively . 
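Returning to the capacity-matching construction described above, the following sketch estimates the symmetric capacity of unit-power M-QAM over complex AWGN by Monte Carlo, averages it over chi-square-distributed power gains of one QR layer, and then bisects for the AWGN noise variance that yields the same capacity. The constellation size, degrees of freedom and sample counts are illustrative assumptions, and the estimates are deliberately coarse.

```python
import numpy as np

def qam_constellation(m):
    """Unit-average-energy square M-QAM constellation (M a perfect square)."""
    k = int(np.sqrt(m))
    pam = np.arange(-(k - 1), k, 2, dtype=float)
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def qam_capacity(noise_var, const, n_mc=2000, rng=None):
    """Monte Carlo estimate of the symmetric capacity (bits per use) of the constellation
    over a complex AWGN channel with the given noise variance and unit signal power."""
    rng = rng if rng is not None else np.random.default_rng(0)
    z = np.sqrt(noise_var / 2) * (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc))
    d = const[:, None, None] - const[None, :, None] + z[None, None, :]
    metric = np.exp(-(np.abs(d) ** 2 - np.abs(z[None, None, :]) ** 2) / noise_var)
    return np.log2(len(const)) - np.mean(np.log2(np.sum(metric, axis=1)))

def equivalent_awgn(noise_var, df, const, n_gain=200):
    """Ergodic capacity of one QR 'layer' whose power gain is chi-square with df degrees
    of freedom (scaled by 1/2), and the AWGN noise variance giving the same capacity."""
    rng = np.random.default_rng(1)
    gains = rng.chisquare(df, size=n_gain) / 2.0
    c_erg = np.mean([qam_capacity(noise_var / g, const, rng=rng) for g in gains])
    lo, hi = 1e-4, 1e4                       # bisect (in log scale) on the noise variance
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        if qam_capacity(mid, const, rng=np.random.default_rng(2)) > c_erg:
            lo = mid
        else:
            hi = mid
    return c_erg, np.sqrt(lo * hi)

const16 = qam_constellation(16)
print(equivalent_awgn(noise_var=0.5, df=4, const=const16))
```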
moreover , to achieve the optimal utilization of the channel independencies , an additional transform is applied between the real - valued channel pairs corresponding to real and image parts of qam symbols . the detailed channel transform adopted by the proposed stpcm scheme is described below . 1 . [ s1 ] after a qr - decomposition of the mimo channel , a set of single - input multiple - output ( simo ) channels are obtained , with .2 . [ s2 ] under a -ary qam with , the simo channel at the -th transmit antenna is further transformed to binary - input channels . without loss of generally , the first half of the channels with correspond to the real parts of the modulation symbols , while the other half correspond to the image parts .[ s3 ] for each and , uses of are further transformed by an -scaled binary channel polarization .since at the -th transmit antenna , the channels corresponding to the real and image parts of the symbols , i.e. and with , are noised by independent real - valued awgns , and the labeling rules of the real and image parts of the qam constellation are set to be identical , one additional step of binary channel polarization transform can be performed between the channel pairs of and .thus , for each and , the channel uses , which consist of uses of and respectively , are then transformed by an matrix into , where .note that when the system is working over fading channels , the shared channel gain introduces correlationship between the channel pairs and for any specific time slot .therefore , an additional interleaving process is required on the inputs of and to make the channel uses participated in the binary channel transform independent . following section [ sub_ch_trans_with_qr ] ,when constructing the stpcm scheme , each of the channels obtained in s1 ) are approximated using an awgn channel , the reliabilities of the corresponding can then be evaluated by de or ga in the same way as that in the conventional pcm scheme . finally , the most reliable channels among are selected for carry the information bits , and the others are fixed to frozen bits , while the universal set , where , , } .gives an illustration of the mapping from the information bits to the transmitted signal matrix . at the receiver , since both the mimo channel and the modulation procedures are combined into the channel polarization transform , the mimo detection , demodulation and binary polar decoding can be jointly processed . a successive cancellation ( sc )algorithm can be used to decode this generalized polar code . given the received signal , the information bit is decoded with indices taking values from to under an sc manner where where the indices of the channel are calculated as , , , and is the floor function .equation ( [ equ_sc ] ) is essentially the same with the conventional sc decoding rules in .therefore , the improved sc decoding algorithms , scl and crc - aided scl ( cascl ) can also be used to decode the proposed stpcm scheme which can yield much better performance than sc .the proposed stpcm scheme over mimo channel with -ary qam is equivalent to a set of binary polar codes with code length . for binary polar coding ,the encoding and decoding complexities are both .compared to the component polar encodings , the complexities brought by the modulation and interleaving operations are negligible , so the encoding complexity of stpcm is . 
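To make the successive cancellation recursion concrete, here is a minimal LLR-domain SC decoder for a plain binary polar code in the natural-order convention, using the usual min-sum f and g combinations. It is a generic sketch: it does not implement the joint MIMO detection and demodulation metric of ([equ_sc]), and the frozen-bit pattern is just a typical choice for a toy (8,4) code.

```python
import numpy as np

def f(a, b):                        # check-node LLR combination (min-sum approximation)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u):                     # variable-node combination given the partial sums u
    return b + (1 - 2 * u) * a

def sc_decode(llr, frozen):
    """Return (decoded u, re-encoded x) for the channel LLRs of a length-2^n polar code."""
    n = len(llr)
    if n == 1:
        u = np.array([0 if (frozen[0] or llr[0] >= 0) else 1])
        return u, u.copy()
    l1, l2 = llr[: n // 2], llr[n // 2:]
    u_a, x_a = sc_decode(f(l1, l2), frozen[: n // 2])
    u_b, x_b = sc_decode(g(l1, l2, x_a), frozen[n // 2:])
    return np.concatenate([u_a, u_b]), np.concatenate([(x_a + x_b) % 2, x_b])

def encode(u):                      # matching natural-order polar encoder
    n = len(u)
    if n == 1:
        return u.copy()
    a, b = encode(u[: n // 2]), encode(u[n // 2:])
    return np.concatenate([(a + b) % 2, b])

frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)    # toy (8, 4) code, a typical choice
u = np.zeros(8, dtype=int)
u[~frozen] = [1, 0, 1, 1]                                   # information bits
x = encode(u)
s = 1.0 - 2.0 * x                                           # BPSK
y = s + np.random.default_rng(0).normal(0.0, 0.5, size=8)   # AWGN with sigma^2 = 0.25
u_hat, _ = sc_decode(2.0 * y / 0.25, frozen)                # channel LLRs = 2y / sigma^2
print(np.array_equal(u_hat, u))
```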
To decode the STPCM, a QR decomposition is applied to the channel matrix of each time slot. Since the number of receive antennas is assumed to be no less than that of the transmit antennas, and the QR decomposition has the complexity stated above, the overall decoding complexity of STPCM follows directly. In this section, the BLER performance of the STPCM scheme is analyzed via simulations. The number of available MIMO channel uses and the code rate are given in the simulation configuration. For comparison, the performance of a bit-interleaved turbo coded modulation (BITCM) scheme over the same channel is also provided. The turbo encoder and rate-matching algorithm used in the 3GPP WCDMA system are adopted. The punctured codeword is fed into a randomized interleaver, and then modulated and distributed to the transmit antennas. The constellation labeling of the BITCM scheme is Gray mapping. At the receiver, MMSE detection, demodulation, deinterleaving and Log-MAP decoding (with a maximum number of iterations) are executed sequentially. This transmission model is essentially the one applied in many practical wireless communication systems. Different from the separate signal processing of the BITCM scheme, the proposed STPCM can be regarded as a joint processing of channel coding, modulation and MIMO transmission; a block diagram of STPCM transmission is given in the corresponding figure. The SC decoding algorithm in ([equ_sc]) is used to decode the STPCM. When decoding the BITCM, metric updating operations in the trellis representations of the component convolutional codes are required: several iterations over the constituent decoders, with a number of metric updates per trellis node determined by the constituent code states and the interleaver size. When decoding the STPCM under SC, the number of required metric updates in the trellises of the component polar codes is far lower; therefore, under the simulated configurations, the BITCM consumes a multiple of the computational complexity taken by STPCM under SC decoding. The BLER performance of STPCM under SC decoding is shown next. The simulated STPCM schemes are constructed after evaluating the reliabilities of the polarized channels with the GA algorithm. As with conventional binary polar codes, the BLER performance of the proposed STPCM under SC decoding and the corresponding estimates obtained by GA are well matched. To improve the performance of STPCM, CASCL decoding is applied. The search width of the CASCL decoder is set such that its complexity is upper bounded by a fixed multiple of that of SC. Taking complexity-reducing implementation methods into consideration, the CASCL decoding of STPCM under this configuration has a complexity comparable to the Log-MAP decoding of BITCM. As the results show, the improvement in BLER performance of STPCM under CASCL over SC decoding is substantial; furthermore, the STPCM scheme can even outperform the BITCM scheme. The performance gain of the STPCM scheme under CASCL decoding over the BITCM scheme remains when a higher modulation order and more antennas are assigned. The performance over Rayleigh fast, uncorrelated MIMO fading channels with larger antenna configurations is also shown; as the figure indicates, the gains persist. A comprehensive comparison of the STPCM and BITCM schemes under different configurations is provided, with CASCL decoding used for the STPCM scheme. In the subfigures, the minimum SNRs required to achieve the target BLER are plotted, and the ergodic capacities ([equ_fadingcap]) of the corresponding transmission schemes are also provided.
Among all the simulated cases, the STPCM scheme achieves a clear performance gain over the BITCM scheme. In particular, a significant gain is observed in the MIMO configuration where the BITCM scheme suffers from a severe error floor at the simulated BLERs. A space-time coded modulation scheme based on polar codes has been proposed following the multilevel principle, which can be seen as a joint optimization of binary polar coding, modulation and multiple-input multiple-output (MIMO) transmission. Similar to the multilevel approach of polar coded modulation, the MIMO transmission process is incorporated into the channel transform. Based on the generalized channel polarization, a space-time polar coded modulation (STPCM) scheme with QR decomposition is proposed for the QAM-modulated MIMO channel. In addition, a practical solution for polar code construction over fading channels is also provided, in which the fading channel is approximated by an AWGN channel that shares the same capacity as the original. The proposed STPCM scheme is simulated over uncorrelated MIMO Rayleigh fast fading channels; compared with the widely used bit-interleaved turbo coded modulation (BITCM) approach, it achieves a performance gain in all the simulated cases.

This work was supported in part by the National Natural Science Foundation of China (No. 61171099), the National Science and Technology Major Project of China (No. 2012ZX03003-007) and Qualcomm Corporation.

E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051-3073, Jul. 2009.
Polar codes are proven to be capacity-achieving and have been shown to offer finite-length performance equivalent to, or even better than, that of turbo/LDPC codes under improved decoding algorithms over additive white Gaussian noise (AWGN) channels. Polar coding is based on the so-called channel polarization phenomenon induced by a transform over the underlying binary-input channel. Channel polarization has been found to be universal in many signal processing problems and has been applied to coded modulation schemes. In this paper, channel polarization is further extended to multiple-antenna transmission following a multilevel coding principle. The multiple-input multiple-output (MIMO) channel under quadrature amplitude modulation (QAM) is transformed into a series of synthesized binary-input channels through a three-stage channel transform. Based on this generalized channel polarization, the proposed space-time polar coded modulation (STPCM) scheme allows a joint optimization of the binary polar coding, modulation and MIMO transmission. In addition, a practical solution for polar code construction over fading channels is provided, in which the fading channel is approximated by an AWGN channel that shares the same capacity as the original. Simulations over MIMO channels with uncorrelated Rayleigh fast fading show that the proposed STPCM scheme outperforms the bit-interleaved turbo coded scheme, which is adopted in many existing communication systems, in all simulated cases.

Polar codes, space-time coding, coded modulation, multilevel coding, joint optimization.
recently a novel paradigm was suggested for the design of quantum algorithms for solving combinatorial search and optimization problems based on quantum adiabatic evolution . in the quantum adiabatic evolution algorithm ( qaa )a quantum state is closely following a ground state of a specially designed slowly time - varying control hamiltonian . at the beginning of the algorithmthe control hamiltonian has a simple form with a known ground state that is easy to prepare , and at the final moment of time it coincides with the `` problem '' hamiltonian which ground state encodes the solution of the classical optimization problem in question here is a cost function defined on a set of binary strings , each containing bits .the summation in ( [ hp ] ) is over the states forming the computational basis of a quantum computer with qubits .state of the -th qubit is an eigenstate of the pauli matrix with eigenvalue .if at the end of the qaa the quantum state is sufficiently close to the ground state of then the solution to the optimization problem can be retrieved by the measurement .it has been shown recently that the query complexity argument that lead to the exponential lower bound for the unstructured search can not be used to rule out the polynomial time solution of np - complete satisfiability problem by the quantum adiabatic evolution algorithm ( qaa ) .a set of examples of the 3-satisfiability problem has been recently constructed to test analytically the power of qaa . in these examplesthe cost function depends on a bit - string with bits , , only via a hamming weight of the string , , so that where the function is in general non - monotonic and defines a particular instance of this `` hamming weight problem '' ( hwp ) . in the original version of qaa applied to the hwp where the control hamiltonian is a linear interpolation in time between the initial and final hamiltonians . in this case , it was shown that the system can be trapped during the qaa in a local minimum of the cost function for a time that grows exponentially in the problem size . it was also shown that an exponential delay time in the quantum adiabatic algorithm can be interpreted in terms of the quantum - mechanical tunnelling of an auxiliary large spin between the two intermediate states . the above example has a significance greater than just being a particular simplified case of a binary optimization problem with symmetrized cost .indeed , one can argue that it shows a generic mechanism for setting `` locality traps '' in the 3-satisfiability problem .but most importantly , this example demonstrates that exponential complexity of qaa can result from a _ collective phenomenon _ in which transitions between the configurations with low - lying energies can only occur by simultaneous flipping of large clusters containing order - n bits . in spin glasses, there is typically an exponential number of such configurations , the so - called local ground states. a similar picture may be applicable to random satisfiability problems . 
in some cases, these transitions can be understood and described in terms of macroscopic quantum tunnelling .a tunnelling of magnetization was observed in large - spin molecular nanomagnets and in disordered ferromagnets .the paper suggests that large tunnelling barriers can be avoided in qaa by using multiple runs of qaa with realizations of the control hamiltonians sampled from a random ensemble .this ensemble is chosen in a sufficiently simple and general form that does not depend on the specific instance of the optimization problem .different hamiltonians correspond to different paths of the unitary evolution that begin and end in the same initial and final states ( modulus phase factors ) .the complexity of qaa with different paths for the hwp was tested numerically in using an ensemble of random 8 matrices .the results indicate that the hwp may be solved in polynomial time with finite probability . in case when the random paths preserve the bit - permutation symmetry of the problem it is natural to describe the random ensemble of in terms of the dynamics of a spin- system .this approach allows for a general theoretical analysis of the algorithm . in the present paper ,we perform this analysis for the random version of hwp ( over - constrained 3-satisfiability problem ) by mapping the dynamics of qaa onto the motion of a quantum particle in a 1d effective potential .this allows us to compute the statistical weight of the successful evolution paths in the ensemble and provide a complete characterization of such paths .in a qaa with different paths , one specifies the time - dependent control hamiltonian where the control parameter plays the role of dimensionless time .this hamiltonian guides the quantum evolution of the state vector according to the schrdinger equation from to , the _ run time _ of the algorithm . is the `` problem '' hamiltonian given in ( [ hp ] ) . and are ` driver' hamiltonians designed to cause the transitions between the eigenstates of . an initial state of the system is prepared as a ground state of the initial hamiltonian .it is typically constructed assuming knowledge of the solution of the classical optimization problem and related ground state of . in the simplest case where is a pauli matrix for -th qubit and some scaling constant .the ground state of has equal projections on any of the basis states ( [ basis ] ) .consider instantaneous eigenstates of with corresponding eigenvalues arranged in non - decreasing order at any value of provided the value of is large enough and there is a finite gap for all between the ground and exited state energies , , quantum evolution is adiabatic and the state of the system stays close to an instantaneous ground state , ( up to a phase factor ) . because the final state is close to the ground state of the problem hamiltonian .therefore a measurement performed on the quantum computer at will find one of the solutions of combinatorial optimization problem with large probability .quantum transition away from the adiabatic ground state occurs most likely in the vicinity of the point where the energy gap reaches its minimum ( avoided - crossing region ) .the probability of the transition is small provided that where , \label{mingap1}\end{aligned}\ ] ] the r.h.s . 
in eq .( [ mingap ] ) gives an upper bound estimate for the required runtime of the algorithm and the task is to find its asymptotic behavior in the limit of large .the numerator in ( [ mingap ] ) is of the order of the largest eigenvalue of , which typically scales polynomially with .however , can scale down exponentially with and in such cases the required runtime of the quantum adiabatic algorithm to find a solution grows exponentially fast with the size of the input .one should note that the second term in the r.h.s . of ( [ h_tot ] ) is zero at and .therefore , by using different driver hamiltonians one can design a family of ( possibly random ) adiabatic evolution paths that start at in the same generically chosen initial state and arrive at the ground state of at . in general , different paths will correspond to different minimum gaps and one can introduce the distribution of minimum gaps .this distribution can be used to compute the fraction of the adiabatic evolution paths that arrive at the ground state of within polynomial time , for a successfully designed family of paths the fraction is bounded from below by a polynomial in which leads to the average polynomial complexity of qaa .consider a binary optimization problem defined on a set of -bit strings with the cost function in the following form : this cost is symmetric with respect to the permutation of bits , it depends on a string only through the number of unit bits in the string ( the hamming weight ) . in this paperwe consider the cost function ( [ symcost ] ) in the following form which is generalization of the cost introduced in here the sum is over all possible 3-bit subsets of the -bit string .a subset contributes to the total cost a weight factor where is a number of units bits in the subset .a set of weights defines an instance of this generalized hamming weight problem ( hwp ) .one can formulate a random version of hwp , e.g. , by drawing numbers independently from a uniform distribution defined over a certain range . in the limit of large cost function ( [ 3sat_gen ] ) takes the following form : here and we only keep the terms of the leading order in . the coefficients in ( [ gp ] )are linear combinations of +\frac{1}{6 } \binom{3}{% k}\left[p_0+(-1)^k p_3\right ] .\label{beta_1}\ ] ] here for and for .the function in ( [ gp ] ) is a third degree polynomial in , and the form of the function depends on the coefficients ( ) . it is easy to show that there is a finite size region in the parameter space where is a non - monotonic function of that has global and local minima on the interval .those minima are separated by a finite barrier with width .the barrier separates strings that have close values of the cost but are at large hamming distance from each other : they have distinct bits .this property can lead to exponentially small minimum gaps in qaa due to the onset of low - amplitude quantum tunnelling . 
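The symmetric cost can be evaluated without reconstructing the (elided) coefficient formulas by using clause statistics: for a string whose fraction of ones is x, the fraction of 3-bit subsets containing exactly k ones tends to C(3,k) x^k (1-x)^(3-k) for large n, so the per-clause cost is the corresponding weighted sum, a cubic in x. The normalization below therefore differs from the text's g(x) by an n-dependent factor that does not move the minima, and the random instance is only an example.

```python
import numpy as np
from math import comb

def g_hat(x, p):
    """Per-clause cost: sum_k p_k C(3,k) x^k (1-x)^(3-k), the large-n limit of the
    clause statistics; a cubic polynomial in the weight fraction x."""
    return sum(p[k] * comb(3, k) * x**k * (1 - x)**(3 - k) for k in range(4))

def local_minima(p, grid=np.linspace(0.0, 1.0, 2001)):
    """Locations of the local minima of the symmetric cost on [0, 1] (endpoints included)."""
    v = g_hat(grid, p)
    idx = [i for i in range(1, len(v) - 1) if v[i] < v[i - 1] and v[i] < v[i + 1]]
    if v[0] < v[1]:
        idx.insert(0, 0)
    if v[-1] < v[-2]:
        idx.append(len(v) - 1)
    return grid[idx]

p = np.random.default_rng(0).uniform(0.0, 1.0, size=4)     # one random instance p_0..p_3
mins = local_minima(p)
print(p, mins, len(mins) >= 2)    # two separated minima signal a potential tunnelling trap
```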
) _ vs _ for different choices of the weights .curve corresponds to , and the cost function has a global minimum at , corresponding to the string with the hamming weight zero , .it also has a local minimum at corresponding to the bit string with hamming weight , .the curve yields the particular form of the cost function considered in , .curve corresponds to , it has a global minimum at inside of the interval .this minimum corresponds to approximately bit strings that all have the same hamming weight .,width=316 ]it is natural to consider the control hamiltonians ( [ h_tot ] ) for solving the hwps that are symmetric with respect to permutation of individual bits ( [ basis ] ) . inwhat follows , we use the normalized components of the total spin operator for the system of individual spins- here are the projections of the total spin operator on the -th axis ( ) and are pauli matrices for the -th spin . for the sake of bookkeeping , in ( [ s ] ) and also throughout the paper we use `` hats '' for the spin operators , such as , , and some others , in order to distinguish them from their corresponding eigenvalues ( and , respectively , in the above example ) . to obtain the problem hamiltonian ( [ hp ] )we make use of the obvious connection between the values of the hamming weight function of an -bit string and corresponding eigenvalues of the spin projection operator then from eqs .( [ basis]),([gp ] ) and ( [ sz ] ) we obtain we chose the driver in a bit - symmetric form that coincides with ( [ hd ] ) ( up to a constant term) it was proposed in that can be constructed using some generic ensemble of random matrices .the bit - symmetric random drivers for the cost functions of the type ( [ 3sat_gen ] ) can be constructed as follows .one generates an random hermitian matrix with zero diagonal elements and non - diagonal elements that are independent random numbers identically distributed in a certain interval .matrix elements of can be enumerated by all possible configurations of a 3-bit string .then takes the form here are computational basis states eq .( [ hp ] ) corresponding to bit - strings , and string has three of its bits flipped at the positions , and as compared to the string ( i.e. , , etc ) .each randomly selected generates and therefore a random path modification of the qaa . from the above discussion, it follows that the matrix of the operator ( [ hea ] ) is symmetric with respect to the bit permutations and therefore it commutes with the operator of a total spin of a system of spins .this means that acts independently in each of the sub - spaces corresponding to certain values of the total spin .it follows from ( [ h_p_z ] ) and ( [ hdfarhi ] ) that the same is true for the total control hamiltonian ( [ h_tot ] ) = 0,\quad \tau \in ( 0,1 ) .\label{commut}\ ] ] since in our case the initial state ( [ hd ] ) is a totally symmetric combination of all states and therefore corresponds to the maximal spin , our system always stays in this sub - space during the algorithm .therefore in the analysis of the complexity of qaa one can reduce the matrix of to the matrix that only involves the states with different spin projections of the _ maximum _ total spin .binary strings corresponding to the quantum states from this subspace are distinguished from each other by their hamming weight only . 
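Because the control Hamiltonian acts within the (n+1)-dimensional maximal-spin subspace, the gap behaviour can be checked numerically by direct diagonalization. The sketch below assumes a transverse-field driver of the form n/2 - S_x, the sign convention w = n/2 - S_z for the Hamming weight, and a toy double-well cost; none of these specific choices are taken from the text's elided formulas, but the shrinking minimum gap illustrates the tunnelling bottleneck discussed here.

```python
import numpy as np

def spin_matrices(n):
    """S_z and S_x restricted to the maximal-spin subspace (S = n/2) of n qubits."""
    s = n / 2.0
    m = np.arange(s, -s - 1, -1)                     # S_z eigenvalues s, s-1, ..., -s
    sz = np.diag(m)
    low = np.sqrt(s * (s + 1) - m[:-1] * (m[:-1] - 1))
    sminus = np.diag(low, -1)                        # lowering operator S-
    sx = 0.5 * (sminus + sminus.T)
    return sz, sx

def min_gap(n, g, taus=np.linspace(0.0, 1.0, 201)):
    """Minimum spectral gap of H(tau) = (1-tau)(n/2 - S_x) + tau * n * g(w/n), where the
    Hamming weight is taken as w = n/2 - S_z (an assumed sign convention)."""
    sz, sx = spin_matrices(n)
    hd = 0.5 * n * np.eye(n + 1) - sx                # driver; ground state = all spins along +x
    w = n / 2.0 - np.diag(sz)                        # Hamming weights 0, 1, ..., n
    hp = np.diag(n * g(w / n))                       # bit-symmetric cost Hamiltonian
    gaps = [np.diff(np.linalg.eigvalsh((1 - t) * hd + t * hp)[:2])[0] for t in taus]
    return min(gaps)

# toy trap instance: global minimum at x = 0 competing with a local minimum near x = 0.75
g = lambda x: x**3 - 1.575 * x**2 + 0.675 * x
print([min_gap(n, g) for n in (16, 32, 64)])         # rapidly shrinking gap signals tunnelling
```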
in appendix[ matrix ] , we show that in the case of real - valued symmetric matrices and in the large - spin limit , the bit - symmetric driver ( [ hea ] ) can be presented as a linear combination of 6 operators expressed in terms of the large spin operator components acting in the subspace with . using this fact , and also eqs .( [ h_p_z ] ) and ( [ hdfarhi ] ) one can write a bit - symmetric control hamiltonian ( [ h_tot ] ) in the following form where ( ) are independent real coefficients given in eq .( [ gammas_1 ] ) . as we show in appendix [ matrix ] , any random realization of the real matrix can be mapped onto combinations of drivers ( [ h_e_3 ] ) by the appropriate choice of the real coefficients .we note that in eq .( [ h_e_3 ] ) does not have any terms involving operator .the reason for that is that we chose matrix elements of ( [ hea ] ) to be real numbers .then matrix elements of in basis are real as well . in this case can only involve terms with * even * powers of . in ( [ h_e_3 ] ) we have used a conservation of the total spin ( see discussion above ) and substituted . the form of the total hamiltonian in ( [ h_tot_4 ] ) allows us to analyze the minimum gap in qaa with different paths ( [ h_tot ] ) using the wkb analysis of the dynamics of a spin- in the large spin limit ( ) .our analysis in this section is a particular application of the wkb - type approach commonly used for the description of quantum spin tunnelling in magnetics , , , .this approach is applicable for the large spins , which is the case of interest for us .we choose as a quantization axis and following the standard procedure to obtain the effective quasi - classical hamiltonian in polar coordinates with ] .we make use of the villain transformation where azimuthal angle operator satisfies the commutation relation = i\epsilon .\label{comm1}\ ] ] in a change of notation we introduce a coordinate and canonically - conjugate momentum ( cf . ) ( ) . expanding ( [ villain_1 ] ) in the large spin limit , we obtain finally , we write the scaled hamiltonian of the system ( [ h_tot_4 ] ) in terms of the new variables where = i\,\epsilon , \label{[qp]}\ ] ] and is a small correction ( here has the same arguments as in ( [ hqp ] ) ) . the stationary schrdinger equation ( [ adiab ] ) in the new basis can be solved in the wkb approximation with the small parameter playing the role of a plank constant. then the wave function takes the form , \label{wkb}\ ] ] where in the leading order in the action function satisfies the following hamilton - jacobi equation this equation describes a 1d auxiliary mechanical system with coordinate , momentum , energy , and hamiltonian function .classical orbits satisfy the hamiltonian equations where and stand for the partial derivatives of with respect to and , respectively .stationary points of the dynamics correspond to the elliptic and saddle points of the hamiltonian function elliptic points are minima ( or maxima ) of on the ( ) plane .they satisfy the condition where is understood as a second derivative of with respect to , etc .saddle points correspond to in ( [ d ] ) . in the limit the adiabatic ground state ( [ schh ] )is localized in the small vicinity of the fixed points corresponding to the global minimum of at a given value of . 
to logarithmic accuracythe wkb - asymptotic ( [ wkb ] ) of the ground - state wave function is determined by the mechanical action for the imaginary - time instanton trajectory emanating from the fixed point , \notag \\ & & q(-i\infty ) = q_{\ast } , \quad p(-i\infty ) = p_{\ast } , \quad q(0)=q . \label{inst}\end{aligned}\ ] ] integration in ( [ inst ] ) is along the imaginary axis .the instanton trajectory obeys eq .( [ ham_eq ] ) with the boundary conditions given above and corresponding to the line of integration in ( [ inst ] ) .the choice of the final instant , , is arbitrary since the instanton trajectory is degenerate with respect to a shift of the time axis .we note that the wkb asymptotic ( [ inst ] ) decays exponentially fast as the coordinate in ( [ inst ] ) moves away from its value at the global minimum into the classically inaccessible region .this corresponds to the growth of the imaginary part of the action in ( [ inst ] ) , similar to the conventional quantum tunnelling in the potential . in the vicinity of ground - state wave function takes the form similar to that of harmonic oscillator : , \notag \\ & & m^{\ast } = \frac{1}{|h_{pp}(q_{\ast } , p_{\ast } , \tau ) | } , \label{osc}\end{aligned}\ ] ] here is defined in ( [ d ] ) .similarly , the energy spectrum in that region corresponds to the classical elliptic orbits with oscillation frequency we note that the frequency depends on and determines the time - varying instantaneous gap between the ground and first exited states , .it can be seen from eq .( [ hqp ] ) that the global minimum of will correspond to ( ) provided that the following condition holds for all : where the positive and negative signs of correspond to even and odd values of , respectively .the value of in ( [ dgdnx ] ) corresponds to the global minimum of the effective potential under the above condition the hamiltonian function of the system near the global minimum exactly corresponds to that of the harmonic oscillator with effective frequency ( [ d ] ) and mass ( [ osc ] ) in the wkb picture the ground state of the system correspond to the particle performing zero - level oscillations near the bottom of the slowly varying potential .there are two types of the bifurcations that can destroy the above adiabatic picture : assume that at some instant of time the effective mass goes to infinity . in the vicinity of this point the hamiltonian function ( [ hqp ] )can be approximated as follows : where in the above equations all functions are evaluated at the point .equation ( [ locb ] ) corresponds to bifurcation point .it can be seen from ( [ locb ] ) that for the single global minimum of splits into the two minima with nonzero momenta due to the symmetry the two global minima with nonzero will stay symmetric with respect to the -axis at later times .it follows from ( [ d ] ) , ( [ osc1 ] ) ) that the linear oscillation frequency vanishes at the bifurcation point , , however the energy gap . by solving the schrdinger equation ( [ adiab ] ) at this point in the representation of the momentum one can find the eigenfunctions and eigenvalues corresponding to a 1d quantum system moving in a quartic potential ( cf .( [ locb ] ) ) .this analysis yields an estimate for the value of the gap , and the characteristic localization range for the size of the energy barrier in momentum separating the two global minima in ( [ locb ] ) grows with time for and this leads to a rapid decrease of the energy gap . 
sufficiently far from the bifurcation point , , each of the global minima gives rise to its own wkb asymptotic ( [ inst ] ) localized at the minimum . the ground state and the first exited state correspond to their symmetric and anti - symmetric combinations , respectively for tunnelling splitting of energy levels for the symmetric and antisymmetric states determines the value of the gap and decreases exponentially fast with .away from the bifurcation region , , the gap scales down exponentially with ( note that ) . as a result of the local bifurcation , the purely adiabatic evolution in qaa collapses .the amplitude of staying in the adiabatic ground state for is nearly equally split between the states ( [ split ] ) with the two lowest eigenvalues . in general, this may reduce the probability of finding a system in a ground state at by a factor of 2 .we note that the control hamiltonian ( [ h_tot_4 ] ) is at most a cubic polynomial in , , and therefore the number of local bifurcation events during qaa is of the order of one . in the worst casethey will cause the reduction of the success probability in qaa by a constant factor . for a given instance of the cost function ( [ gp ] ) defined by the coefficients ( or ) the onset of local bifurcations ( [ locb ] ) depends on the choice of the driver hamiltonian ( [ h_e_3 ] ) .there are a number of ways to select coefficients s in the driver hamiltonian ( [ h_e_3 ] ) to avoid local bifurcations during qaa in a broad range of values of the coefficients .for example , to completely suppress local bifurcations ( [ locb ] ) one can keep in ( [ h_e_3 ] ) only terms linear in and set the hamiltonian function defines a 3d surface over a 2d plane and the shape of this surface varies with time .we consider global bifurcations of this surface where the energies of its two minima cross each other at some instant of time while the distance between the minima on the plane remains finite at the crossing point .for the minima exchange their roles : global minimum becomes local and vise versa .before and after the intersection in the energy space the two minima are uniquely identified with the ground and first exited states of the system s hamiltonian ( [ hqp ] ) .the corresponding wave functions are well approximated by their asymptotic expressions([inst]),([osc ] ) .the small vicinity of the global bifurcation point can be described within the standard 2-level avoiding - crossing picture .there are given by symmetric and antisymmetric superpositions of the wkb - asymptotic corresponding to intersecting minima .the value of the gap changes with time as where is some constant and the minimum gap is determined by the overlap of the wkb asymptotic . to logarithmic accuracyit is given by the imaginary part of the mechanical action ( [ inst ] ) along the instanton trajectory connecting the two minima here are coordinates of the two minima ; , and the instanton trajectory obeys the eqs .( [ ham_eq ] ) . the analytical expression for the minimum gapwas studied in , for the case , using a simplified version of the hamming weight problem ( [ 3sat_gen ] ) .below we identify certain geometrical properties of the global bifurcations in the case that will be used later in the selection of the drivers for the successful qaa . 
for , and are represented by the curves 1,2 , and 3 , respectively.,width=316 ] in the case , the hamiltonian has a minimum at and the value of corresponds to the global minimum of the effective potential ( [ u ] ) .we use eq .( [ h_tot_4 ] ) and also the condition to obtain the following equation for this equation holds until the global bifurcation point at where changes discontinuously in time ( see fig . 2 ) . at the minimum of the potential and therefore the direction of the motion of entirely depends on the direction of the `` force '' , . at potential has a unique minimum at the point .it is clear that with this initial condition equation ( [ dq*dt ] ) can lead to a `` wrong '' minimum of that lies above the global minimum , and such cases will give rise to a global bifurcation .this effect is illustrated in fig.[fig : cost ] where the two different cost functions correspond to the same direction of motion for .the value of may either smoothly approach the global minimum of the cost ( curve 2 ) , or move toward a wrong local minimum ( curve 1 ) , leading to the global bifurcation and exponentially small gap in qaa .adding to the control hamiltonian can invert the direction of motion of toward the global minimum of .this can be seen from the fact the eq.([dq*dt ] ) in presence of possesses the additional term ( here we drop for sake of brevity the argument in ) .clearly , the successful should _ not _ possess reflection symmetry with respect to .therefore we should only select the terms in ( [ h_e_3 ] ) that contain odd powers of .taking into account ( [ avoid ] ) we arrive at the following form of the driver hamiltonian this driver can remove the potential barrier between the two competing global minima of by shifting the original minimum at towards the true global minimum of the cost function ( cf .[ fig : cost ] ) . in the classical picture ( [ hqp ] )the driver ( [ gamma4 ] ) corresponds to an external field parallel to -axis which can destroy the tunnelling barrier along this direction .the mechanism of such tunnelling avoidance is similar to the one considered in , where the external field generated by the driver ( [ gamma4 ] ) compensates the effective field due to the linear term proportional to the coefficient in the problem hamiltonian ( [ gp ] ) . in general, one can expect that a complete suppression of the tunnelling barrier at all values of requires a certain magnitude ( and sign ) of the coefficient depending on the choice of the coefficients in the cost function .the transition to the tunnelling regime can be described as an bifurcation point , illustrated in fig.2 .the effective potential changes parametrically with and . 
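The trapping scenario sketched in fig. 2 can be reproduced numerically by tracking the global minimum of an effective potential as a function of the interpolation parameter. The potential used below, U(q, tau) = (1 - tau)(1 - sqrt(1 - q^2)) + tau g((1 - q)/2), is the standard zero-momentum spin-coherent-state energy for a transverse-field driver; the text's actual driver normalization and the additional gamma-driver term are not included, so this is only a qualitative sketch with a toy cost whose global minimum at x = 0 competes with a local minimum near x = 0.75.

```python
import numpy as np

def u_eff(q, tau, g):
    """Assumed zero-momentum effective potential: driver part (1-tau)(1 - sqrt(1-q^2))
    plus problem part tau*g(x), with x = (1-q)/2 mapping q in [-1,1] to the weight fraction."""
    x = (1.0 - q) / 2.0
    return (1.0 - tau) * (1.0 - np.sqrt(np.clip(1.0 - q**2, 0.0, None))) + tau * g(x)

def global_bifurcation_time(g, taus=np.linspace(0.0, 1.0, 1001),
                            qs=np.linspace(-0.999, 0.999, 4001)):
    """Track the location of the global minimum of U(q, tau); a discontinuous jump in it
    signals a global bifurcation, i.e. the onset of tunnelling between distant minima."""
    q_prev = None
    for tau in taus:
        q_star = qs[np.argmin(u_eff(qs, tau, g))]
        if q_prev is not None and abs(q_star - q_prev) > 0.2:
            return tau
        q_prev = q_star
    return None                                       # no jump found: no tunnelling trap

g = lambda x: x**3 - 1.575 * x**2 + 0.675 * x          # toy cost: global min at x=0, local near 0.75
print(global_bifurcation_time(g))                      # prints the critical tau of the jump
```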
near the bifurcation point , the potential has the form where are deviations from the bifurcation point in and , respectively .the corresponding conditions for the bifurcation point are : taking into account ( [ u ] ) and ( [ h_tot_4]),([h_e_3]),([gamma4 ] ) , the above equation yields ^{2 } } , \label{bifurcation_1 } \\\gamma _ { 4c}\left ( 1-\tau _ { c}\right ) & = & \frac{\left ( 1-\tau _ { c}\right ) \left ( 3\beta _ { 1}+\beta _ { 3}\right ) + \tau _ { c}\beta _ { 2}\beta _ { 3}}{\tau _ { c}\beta _ { 2}-2\left ( 1-\tau _ { c}\right ) } , \notag\end{aligned}\ ] ] these equations should be solved for and for the given set of the coefficients .the bifurcation is avoided when for example , in the particular case of the hwp ( [ 3sat_gen ] ) considered in , , we have in this case the example of the driver hamiltonian that allows to avoidance of tunnelling in qaa was given in where the value of was used . according to ( [ avoid_farhi ] )this value is way below the critical value .we performed numerical simulations with the effective potential ( [ u ] ) checking for the onset of tunnelling for all ] ( =0,1,2,3 ) . for each size of the cube we select the point with largest value of denoted below as .the results are presented in fig.3 .the critical value is monotonically increasing in , and the dependence is close to linear for sufficiently large , but it is non - linear in the range . it can be inferred from eq .( [ bifurcation_1 ] ) that a nonlinear dependence of on the scale is due to the fact that the critical time also depends on .the linear dependence of for large has a simple intuition . according to ( [ gp ] ) and ( [ beta_1 ] ) ,the magnitude of is proportional to . according to eq.([dq*dt ] ), the maximal magnitude of the coefficient presents a force that can possibly move a system into the local minimum at small . from ( [ beta_1 ] ) , we conclude that }\beta _ { 1}=2l ] .making use of ( [ gammas_1 ] ) , we obtain therefore , the probability that inequality ( [ range ] ) is satisfied is estimated as 1 - 15 0.71875 .on the other hand , the values of in ( [ gammas_1 ] ) belong to the range , . using the value of -0.95 given in ( [ avoid_farhi ] ) we estimate the probability of to be approximately equal to 0.46 . making an approximation that the cases when the effective mass is non - zero are statistically independent from the cases when , we obtain the total probability of success as .46 0.71875 1/3 , which is in qualitative agreement with the numerical results of .this estimate can be generalized to the case when the matrix elements are distributed in the interval $ ] for sufficiently large . in this case, the probability that ( [ range ] ) is satisfied remains the same , 0.71875 , while the probability that is estimated as . with the assumption of statistical independence , the total probability of successis , and in the limit of large we have which exceeds slightly the value for .in absence of tunnelling , the dynamics of the large spin can be characterized by classical equations of motion for the spin projections treated as c - numbers in the form , \label{heis_dyn_1}\ ] ] with in coordinate form and in terms of the dimensionless spin projections , this yields where we took into account that since does not contain the component , . 
in the casewhen the effective magnetic field does not explicitly depend on time , the system ( [ heis_dyn_1 ] ) , ( [ heis_dyn_2 ] ) has two independent integrals of motion the first ( [ int2 ] ) reflects the conservation of total spin and also holds for an arbitrary time - dependent field , whereas the second integral corresponds to the adiabatic invariant of the system ( [ heis_dyn_1 ] ) , ( [ heis_dyn_2 ] ) . since in our case is parametrically time - dependent , the adiabatic invariant is conserved approximately for sufficiently slow parametric evolution .note that the adiabatic solutions always play the role of envelope solutions .this means that on average , the spin closely follows the adiabatic solution , but there are fast oscillatory - type motions superimposed on the slow adiabatic evolution .basically , the adiabatic approximation in the classical case is applicable when the slow motion is much slower than the fast oscillatory motion .this exactly corresponds to the adiabatic evolution of the spin system in the quantum case .making use of ( [ int2 ] ) and taking into account that at the instant , the total spin was parallel to the -axis , we obtain , or implying that the total spin is always parallel the effective magnetic field .therefore , the adiabatic evolution of the large spin can be simply described as the situation when the spin follows the effective field ( on average ) .we note that at this level , there is a direct correspondence between the adiabatic classical solution and the quasiclassical wave functions of the large spin parallel to . from ( [ adiabatic_1 ] ), it follows that this direction can be identified with the effective magnetic field .this justifies the variational approach introduced in , identifying the variational wave functions with the adiabatic ground states along the evolution paths when the total spin is parallel to .therefore , one can observe that in the absence of tunnelling , the general hwp is solved essentially by the classical paths of the qaa .we apply the quantum adiabatic evolution algorithms with different paths to the generalized hamming weight problem that corresponds to the specific case of the random satisfiability problem defined in ( [ 3sat_gen ] ) .we show that any random evolution path produced by this algorithm for the hwp can be obtained by using 6 specific deterministic basis operators with random weights and therefore is parameterized by 6 independent random numbers .therefore , the approach to qaa with different paths can still be reduced to the large spin dynamics for the hwp .we show that only one of these generators can be a universal driver fundamentally responsible for tunnelling suppression for arbitrary hwp and therefore the problem of constructing such a universal driver reduces to the definition of its weight . due to the possible reflection symmetry of the cost function , any particular case of the general hwpcan be solved with one of the two values of the weight with that lead to the collapse of the adiabatic evolution .the global bifurcations correspond to the onset of tunnelling in qaa and lead to the failure of the algorithm .in contrast , the local bifurcations while still corresponding to exponentially small minimum gap only lead to the decrease of the probability of success by a factor of 2 . 
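The adiabatic-following picture can be illustrated by integrating the precession equation for a unit classical spin in the slowly varying effective field derived from an assumed per-spin energy E(m, tau) = (1 - tau)(1 - m_x) + tau g((1 - m_z)/2); the cost function, sweep time and integrator below are illustrative choices rather than the text's. For a trap instance such as this toy cost, the classical trajectory ends near the local (wrong) minimum, which is exactly the situation in which tunnelling would be required.

```python
import numpy as np

def effective_field(m, tau, dg):
    """b = -dE/dm for the assumed classical energy E(m) = (1-tau)(1 - m_x) + tau*g((1-m_z)/2)."""
    x = (1.0 - m[2]) / 2.0
    return np.array([1.0 - tau, 0.0, 0.5 * tau * dg(x)])

def adiabatic_spin_run(dg, t_final=2000.0, dt=0.01):
    """Integrate dm/dt = b(m, tau) x m with a slow sweep tau = t / t_final (midpoint rule),
    starting from the driver ground state m = +x."""
    m = np.array([1.0, 0.0, 0.0])
    steps = int(t_final / dt)
    for k in range(steps):
        tau = k * dt / t_final
        b1 = effective_field(m, tau, dg)
        m_half = m + 0.5 * dt * np.cross(b1, m)
        b2 = effective_field(m_half, (k + 0.5) * dt / t_final, dg)
        m = m + dt * np.cross(b2, m)
        m /= np.linalg.norm(m)                    # undo the small drift of the explicit step
    return m, (1.0 - m[2]) / 2.0                  # final spin and Hamming-weight fraction x

dg = lambda x: 3 * x**2 - 3.15 * x + 0.675        # derivative of g(x) = x^3 - 1.575 x^2 + 0.675 x
m_final, x_final = adiabatic_spin_run(dg)
print(x_final)            # typically ends near the local minimum x ~ 0.75, not the global one at 0
```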
since in a given problem function is a low degree polynomial in its arguments there are only a few local bifurcations possible .however , the phenomenon of local bifurcations may become important for more difficult random optimization problems . assuming the number of such bifurcations is large the probability of success is reduced by a factor of .for that scales up with that would lead to the failure of the algorithm .we want to thank edward farhi ( mit ) for useful discussion .this work was supported in part by the national security agency ( nsa ) and advanced research and development activity ( arda ) under army research office ( aro ) contract number xxxxxx - xx - x - xxxx , we also want to acknowledge the support of nasa is revolutionary computing algorithms program ( project no : 749 - 40 ) .the random real symmetric random matrix introduced in ( [ hea ] ) , describes the transitions between each of the states for each clause involving bits .this matrix has independent matrix elements and can be presented in the form where correspond to the transitions involving one , two and three bits , respectively . for each realization, we have where are the real coefficients . the indices label the bits , and are the pauli sigma - matrices of raising / lowering , -projection and -projection , respectively and is a spin projection variable .note that the operator is a projector onto the spin state for the bit .clearly , the number of independent parameters in ( [ a_c_2 ] ) is , where the three terms of the sum correspond to and , respectively .note that this number of parameters equals the number of independent matrix elements of estimated above . in the matrix form, the representation ( [ a_c_2 ] ) yields , \label{a_c_3}\ ] ] where the vector of basis states is ^t , \label{basis_1}\ ] ] and the lower left portion of the symmetric matrix is obtained by reflection with respect to the diagonal .the driver is obtained by summation over all clauses . in doing this summation , we take into account that now the bit indices run through all bits , whereas the indices characterizing the realization of still run through the bits ( since the same realization of is applied to all triples of bits ) . the driver is given by where correspond to the transitions involving one , two and three bits , analogous to ( [ a_c_1 ] ) . one should note that the second term on the r.h.s . 
for gives a contribution , which is diagonal in representation and therefore leads to the effective re - definition of the cost function .following the logic of , we disregard such terms .also , the commutation relations between the total spin components give contributions to the effective potential and can be neglected in the large - spin limit .taking this into account , we obtain from ( [ h_e_2 ] ) in the large - spin limit where is a dimensionless spin projection on -axis and the coefficients are given by in particular , the deterministic driver considered in corresponds to , and it follows from ( [ gammas_1 ] ) that in this case , the only non - zero coefficient in ( [ gammas_1 ] ) is .this corresponds to , which is equivalent to in the large - spin limit according to the above discussion .taking into account only the term in and expanding up to the 4th order , we obtain the conditions for the bifurcation point in the form ( cf .( [ conda3 ] ) ) , \notag \\ \tau _ { c}\left ( 2\beta _ { 2}+6\beta _ { 3}x\right ) & = & -\left ( 1-\tau _ { c}\right ) \left [ 2 - 3x^{2}+\gamma _ { c}\tau _ { c}\left ( -3x\right ) \right ] , \label{bif_2 } \\ 6\tau _ { c}\beta _ { 3 } & = & \left ( 1-\tau _ { c}\right ) \left ( 6x+3\gamma _ { c}\tau _ { c}\right ) , \notag\end{aligned}\ ] ] which have to be solved for , and for the given . solving for , we obtain condition ( [ bifurcation_1 ] ) in the text . r. monasson and r. zecchina , `` entropy of the k - satisfiability problem '' , phys .lett . , * 76 * , p.3881 ( 1996 ) ; _ ibid _ , `` statistical mechanics of the random k - satisfiability problem '' , phys .e * 56 * , p.1357 ( 1997 ) .
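the bifurcation conditions quoted above have the structure of a degenerate ( cusp ) critical point : the first three derivatives of the large - spin effective potential vanish simultaneously at ( x , tau_c , gamma_c ) . the fourth - order coefficients themselves were lost in extraction , so the sketch below uses a simple stand - in potential ( a symmetric quartic cost mixed with a confining driver carrying the weight gamma ) purely to show how such a system is solved numerically ; it is not the potential of the paper .

```python
import numpy as np
from scipy.optimize import fsolve

def derivatives(x, tau, gamma):
    """First three x-derivatives of the stand-in potential
    V = (1 - tau) * (x**4 / 4 - x**2 / 2) + tau * (x**2 / 2 + gamma * x)."""
    v1 = (1 - tau) * (x**3 - x) + tau * (x + gamma)
    v2 = (1 - tau) * (3 * x**2 - 1) + tau
    v3 = (1 - tau) * 6 * x
    return v1, v2, v3

def bifurcation_conditions(p):
    # V' = V'' = V''' = 0, mirroring the structure of the three conditions in the text
    return list(derivatives(*p))

if __name__ == "__main__":
    x_c, tau_c, gamma_c = fsolve(bifurcation_conditions, x0=[0.3, 0.4, 0.2])
    print(f"x_c = {x_c:.4f}, tau_c = {tau_c:.4f}, gamma_c = {gamma_c:.4f}")
    print("residuals:", np.round(bifurcation_conditions((x_c, tau_c, gamma_c)), 10))
```

for this symmetric stand - in the critical weight comes out as gamma_c = 0 ; an asymmetric cost term shifts it away from zero .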
we provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in . the algorithm is applied to a random binary optimization problem ( a version of the 3-satisfiability problem ) where the -bit cost function is symmetric with respect to the permutation of individual bits . the evolution paths are produced using generic control hamiltonians that preserve the bit symmetry of the underlying optimization problem . in the case where the ground state of coincides with the totally - symmetric state of an -qubit system , the algorithm dynamics is completely described in terms of the motion of a spin- . we show that different control hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of in a certain universal set of operators . only one of these operators can be responsible for avoiding tunnelling in the spin- system during the quantum adiabatic algorithm . we show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances . we show that a successful evolution path of the algorithm always corresponds to the trajectory of a _ classical _ spin- and provide a complete characterization of such paths .
cellular systems are evolving into heterogeneous networks consisting of distributed base stations ( bss ) covering overlapping areas of different sizes , and thus the problems of interference management and cell association are becoming complicated and challenging .one of the most promising solutions to these problems is given by so called _ cloud radio access networks _, in which the encoding / decoding functionalities of the bss are migrated to a central unit .this is done by operating the bss as `` soft '' relays that interface with the central unit via backhaul links used to carry only baseband signals ( and not `` hard '' data information ) - .cloud radio access networks are expected not only to effectively handle the inter - cell interference but also to lower system cost related to the deployment and management of the bss .however , one of the main impairments to the implementation of cloud radio access networks is given by the capacity limitations of the digital backhaul links connecting the bss and the central unit .these limitations are especially pronounced for pico / femto - bss , whose connectivity is often afforded by last - mile cables , and for bss using wireless backhaul links . in the _ uplink _ of cloud radio access networks , each bs compresses its received signal to the central unit via its finite - capacity backhaul link .the central unit then performs joint decoding of all the mobile stations ( mss ) based on all received compressed signals .recent theoretical results have shown that _ distributed compression _schemes can provide significant advantages over the conventional approach based on independent compression at the bss .this is because the signals received by different bss are statistically correlated - , and hence distributed source coding enables the quality of the compressed signal received from one bs to be improved by leveraging the signals received from the other bss as side information .note that the correlation among the signals received by the bss is particularly pronounced for systems with many small cells concentrated in given areas .while current implementations employ conventional independent compression across the bss , the advantages of distributed source coding were first demonstrated in , and then studied in more general settings in - .related works based on the idea of computing a function of the transmitted codewords at the bss , also known as compute - and - forward , can be found in . in the _ downlink _ of cloud radio access networks ,the central encoder performs joint encoding of the messages intended for the mss .then , it independently compresses the produced baseband signals to be transmitted by each bs .these baseband signals are delivered via the backhaul links to the corresponding bss , which simply upconvert and transmit them through their antennas .this system was studied in . in particular , in ,the central encoder performs dirty - paper coding ( dpc ) of all mss signals before compression .a similar approach was studied in by accounting for the effect of imperfect channel state information ( csi ) .reference instead proposes strategies based on compute - and - forward , showing advantages in the low - backhaul capacity regime and high sensitivity of the performance to the channel parameters . 
for a review of more conventional strategies in which the backhaul links are used to convey message information , rather than the compressed baseband signals ,we refer to - .in this work , we propose a novel approach for the compression on the backhaul links of cloud radio access networks in the downlink that can be seen as the counterpart of the distributed source coding strategy studied in - for the uplink .moreover , we propose the joint design of precoding and compression .a key idea is that of allowing the quantization noise signals corresponding to different bss to be correlated with each other . the motivation behind this choice is the fact that a proper design of the correlation of the quantization noises across the bss can be beneficial in limiting the effect of the resulting quantization noise seen at the mss .in order to create such correlation , we propose to jointly compress the baseband signals to be delivered over the backhaul links using so called _ multivariate compression _* ch . 9 ) .we also show that , in practice , multivariate compression can be implemented without resorting to joint compression across all bss , but using instead a successive compression strategy based on a sequence of minimum mean squared error ( mmse ) estimation and per - bs compression steps . after reviewing some preliminaries on multivariate compression in sec .[ sec : preliminaries ] , we formulate the problem of jointly optimizing the precoding matrix and the correlation matrix of the quantization noises with the aim of maximizing the weighted sum - rate subject to power and the backhaul constraints resulting from multivariate compression in sec .[ sec : problem formulation ] .there , we also introduce the proposed architecture based on successive per - bs steps .we then provide an iterative algorithm that achieves a stationary point of the problem in sec .[ sec : joint ] .moreover , we compare the proposed joint design with the more conventional approaches based on independent backhaul compression - or on the separate design of precoding and ( multivariate ) quantization in sec .[ sec : separate ] .the robust design with respect to imperfect csi is also discussed in detail . in sec .[ sec : numerical - results ] , extensive numerical results are provided to illustrate the advantages offered by the proposed approach .the paper is terminated with the conclusion in sec .[ sec : conclusions ]. _ notation _ : we adopt standard information - theoretic definitions for the mutual information between the random variables and , conditional mutual information between and conditioned on random variable , differential entropy of and conditional differential entropy of conditioned on .the distribution of a random variable is denoted by and the conditional distribution of conditioned on is represented by .all logarithms are in base two unless specified . the circularly symmetric complex gaussian distribution with mean and covariance matrix is denoted by .the set of all complex matrices is denoted by , and represents the expectation operator .we use the notations and to indicate that the matrix is positive semidefinite and positive definite , respectively . given a sequence , we define a set for a subset .the operation denotes hermitian transpose of a matrix or vector , and notation is used for the correlation matrix of random vector , i.e. , ] ; is used for the conditional correlation matrix , i.e. , ] for , where the matrix is a non - negative definite matrix ( see , e.g. , ( * ? ? ?ii - c ) ) . 
]\leq p_{i},\,\,\mathrm{for}\,\ , i\in\mathcal{n_{b}}.\label{eq : perbs power constraint}\ ] ] assuming flat - fading channels , the signal received by ms is written as where we have defined the aggregate transmit signal vector ^{\dagger} ] undergo precoding and compression , as detailed next .* 1 . precoding : * in order to allow for interference management both across the mss and among the data streams for the same ms ,the signals in vector are linearly precoded via multiplication of a complex matrix .the precoded data can be written as where the matrix can be factorized as ,\label{eq : whole beamformer}\ ] ] with denoting the precoding matrix corresponding to ms .the precoded data can be written as ^{\dagger} ] of compressed signals for all the bss is given by where the compression noise ^{\dagger} ] and defines the correlation between the quantization noises of bs and bs .rate - distortion theory guarantees that compression codebooks can be found for any given covariance matrix under appropriate constraints imposed on the backhaul links capacities .this aspect will be further discussed in sec .[ sub : multivariate - compression theory ] . with the described precoding and compression operations ,the achievable rate ( [ eq : rate ms k ] ) for ms is computed as [ rem : compression]in the conventional approach studied in - , the signals corresponding to each bs are compressed independently .this corresponds to setting for all in ( [ eq : compression covariance ] ) .a key contribution of this work is the proposal to leverage correlated compression for the signals of different bss in order to better control the effect of the additive quantization noises at the mss .the design of the precoding matrix and of the quantization covariance can be either performed separately , e.g. , by using a conventional precoder such as zero - forcing ( zf ) or mmse precoding ( see , e.g. , - ) , or jointly .both approaches will be investigated in the following .[ rem : dpc]if non - linear precoding via dpc is deployed at the central encoder with a specific encoding permutation of the ms indices , the achievable rate for ms is given as in lieu of ( [ eq : rate ms k ] ) and can be calculated as with the function given as note that the dpc is designed based on the knowledge of the noise levels ( including the quantization noise ) in order to properly select the mmse scaling factor . as explained above , due to the fact that the bss are connected to the central encoder via finite - capacity backhaul links ,the precoded signals in ( [ eq : precoding bs - wise ] ) for are compressed before they are communicated to the bss using the gaussian test channels ( [ eq : gaussian test channel each bs ] ) . in the conventional casein which the compression noise signals related to the different bss are uncorrelated , i.e. , for all as in - , the signal to be emitted from bs can be reliably communicated from the central encoder to bs if the condition is satisfied for .this follows from standard rate - distortion theoretic arguments ( see , e.g. , and sec .[ sub : compression ] ) .we emphasize that ( [ eq : independent comp condition ] ) is valid under the assumption that each bs is informed about the quantization codebook used by the central encoder , as defined by the covariance matrix . in this paper , we instead propose to introduce correlation among the compression noise signals , i.e. , to set for , in order to control the effect of the quantization noise at the mss . 
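with the precoder and the quantization covariance fixed , both the per - ms rates and the per - bs backhaul loads reduce to gaussian log - det expressions . the sketch below evaluates the conventional baseline of independent per - bs quantization ( block - diagonal omega ) for a made - up toy channel ; dimensions , channel values and the noise level are assumptions , and the helper names are ours . the correlated ( multivariate ) alternative is sketched after the successive architecture below .

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup: 2 BSs with 2 antennas each, 3 single-antenna MSs (values made up)
NB, NT, NM = 2, 2, 3
H = (rng.normal(size=(NM, NB * NT)) + 1j * rng.normal(size=(NM, NB * NT))) / np.sqrt(2)
P = (rng.normal(size=(NB * NT, NM)) + 1j * rng.normal(size=(NB * NT, NM))) * 0.5
omega = 0.05 * np.eye(NB * NT)        # independent quantization noise across BSs

def logdet(a):
    return np.log2(np.linalg.det(a).real)

def per_bs_backhaul(P, omega):
    """Backhaul rate needed by BS i under independent compression:
    C_i >= logdet(Sigma_x_i + Omega_i) - logdet(Omega_i)."""
    sigma_x = P @ P.conj().T          # covariance of the precoded signals
    loads = []
    for i in range(NB):
        sl = slice(i * NT, (i + 1) * NT)
        loads.append(logdet(sigma_x[sl, sl] + omega[sl, sl]) - logdet(omega[sl, sl]))
    return loads

def ms_rates(H, P, omega):
    """Per-MS rates with linear precoding, treating inter-stream interference
    and quantization noise as extra Gaussian noise at the MS."""
    sigma_x = P @ P.conj().T
    rates = []
    for k in range(NM):
        h = H[k:k + 1, :]
        total = (h @ (sigma_x + omega) @ h.conj().T).real.item() + 1.0
        interf = total - (np.abs(h @ P[:, k:k + 1]) ** 2).item()
        rates.append(np.log2(total / interf))
    return rates

if __name__ == "__main__":
    print("per-BS backhaul load [bit/c.u.]:", np.round(per_bs_backhaul(P, omega), 2))
    print("per-MS rate         [bit/c.u.]:", np.round(ms_rates(H, P, omega), 2))
```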
as discussed in sec .[ sec : preliminaries ] , introducing correlated quantization noises calls for joint , and not independent , compression of the precoded signals of different bss .as seen , the family of compression strategies that produce descriptions with correlated compression noises is often referred to as _ multivariate compression ._ by choosing the test channel according to ( [ eq : whole encoding operation ] ) ( see sec . [ sub : multivariate - compression theory ] ) , we can leverage lemma [ lem : multivariate covering ] to obtain sufficient conditions for the signal to be reliably delivered to bs for all . in lemma[ lem : multivariate ] , we use to denote the matrix obtained by stacking the matrices for horizontally .[ lem : multivariate ] the signals obtained via the test channel ( [ eq : whole encoding operation ] ) can be reliably transferred to the bss on the backhaul links if the condition is satisfied for all subsets .the proof follows by applying lemma [ lem : multivariate covering ] by substituting for the signal to be compressed , and for the compressed versions . comparing ( [ eq : independent comp condition ] ) with( [ eq : multivariate computed ] ) shows that the introduction of correlation among the quantization noises for different bss leads to additional constraints on the backhaul link capacities .assuming the operation at the central encoder , bss and mss detailed above , we are interested in maximizing the weighted sum - rate subject to the backhaul constraints ( [ eq : multivariate computed ] ) over the precoding matrix and the compression noise covariance for given weights , . this problem is formulated as [ eq : original problem ] the condition ( [ eq : original problem backhaul ] ) corresponds to the backhaul constraints due to multivariate compression as introduced in lemma [ lem : multivariate ] , and the condition ( [ eq : original problem power ] ) imposes the transmit power constraints ( [ eq : perbs power constraint ] ) .it is noted that the problem ( [ eq : original problem ] ) is not easy to solve due to the non - convexity of the objective function in ( [ eq : original problem objective ] ) and the functions in ( [ eq : original problem backhaul ] ) with respect to . in sec . [sec : joint ] , we will propose an algorithm to tackle the solution of problem ( [ eq : original problem ] ) . proposed architecture for multivariate compression based on successive steps of mmse estimation and per - bs compression.,width=623,height=347 ] in order to obtain correlated quantization noises across bss using multivariate compression, it is in principle necessary to perform joint compression of all the precoded signals corresponding to all bss for ( see sec . 
[ sub : multivariate - compression theory ] ) .if the number of bss is large , this may easily prove to be impractical .here , we argue that , in practice , joint compression is not necessary and that the successive strategy based on mmse estimation and per - bs compression illustrated in fig .[ fig : successive comp ] is sufficient .the proposed approach works with a fixed permutation of the bss indices .the central encoder first compresses the signal using the test channel ( [ eq : gaussian test channel each bs ] ) , namely , with , and sends the bit stream describing the compressed signal over the backhaul link to bs .then , for any other with , the central encoder obtains the compressed signal for bs in a successive manner in the given order by performing the following steps : * ( _ a _ ) * * _ estimation _ : * the central encoder obtains the mmse estimate of given the signal and the previously obtained compressed signals .this estimate is given by \label{eq : mmse estimate x_hat_i}\\ & = \mathbf{\sigma}_{\mathbf{x}_{\pi(i)},\mathbf{u}_{\pi(i)}}\mathbf{\sigma}_{\mathbf{u}_{\pi(i)}}^{-1}\mathbf{u}_{\pi(i)},\nonumber\end{aligned}\ ] ] where we defined the vector ^{\dagger} ] for . in fig .[ fig : graph wyner ] , we compare the proposed scheme with joint design of precoding and compression with state - of - the - art techniques , namely the compressed dpc of , which corresponds to using dpc precoding with independent quantization , and reverse compute - and - forward ( rcof ) . we also show the performance with linear precoding for reference .it is observed that multivariate compression significantly outperforms the conventional independent compression strategy for both linear and dpc precoding .moreover , rcof in remains the most effective approach in the regime of moderate backhaul , although multivariate compression allows to compensate for most of the rate loss of standard dpc precoding in the low - backhaul regime is due to the integer constraints imposed on the function of the messages to be computed by the mss .] . in this subsection, we evaluate the average sum - rate performance as obtained by averaging the sum - rate over the the realization of the fading channel matrices .the elements of the channel matrix between the ms in the cell and the bs in the cell are assumed to be i.i.d .complex gaussian random variables with in which we call the inter - cell channel gain .moreover , each bs is assumed to use two transmit antennas while each ms is equipped with a single receive antenna . 
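the successive architecture of steps ( a ) and ( b ) can be sketched directly in the gaussian setting : compress the first bs signal as is , then for every later bs quantize the mmse estimate of its precoded signal given the signals already compressed . the sketch below uses an identity permutation , made - up covariances and a single per - step noise level ; it does not reproduce the per - step calibration that the text uses to hit a prescribed omega exactly , but it shows the two points that matter here : the end - to - end quantization noise becomes correlated across bss , and the resulting covariance can be plugged into the subset conditions of lemma [ lem : multivariate ] , evaluated in their gaussian log - det form .

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
NB, NT = 3, 2                        # 3 BSs, 2 antennas each (toy values)
D = NB * NT

A = rng.normal(size=(D, D))
sigma_x = A @ A.T / D + np.eye(D)    # made-up covariance of the precoded signals

def blk(i):
    return slice(i * NT, (i + 1) * NT)

def successive_compression(x, q_var=0.1):
    """BS 0 is compressed directly; every later BS compresses the MMSE estimate
    of its signal given the already compressed ones (identity permutation)."""
    n = x.shape[0]
    u = np.zeros_like(x)
    for i in range(NB):
        if i == 0:
            target = x[:, blk(0)]
        else:
            prev = u[:, : i * NT]                         # signals already sent
            c_xu = x[:, blk(i)].T @ prev / n              # sample cross-covariance
            c_uu = prev.T @ prev / n
            target = prev @ np.linalg.solve(c_uu, c_xu.T)  # MMSE estimate
        u[:, blk(i)] = target + np.sqrt(q_var) * rng.normal(size=(n, NT))
    return u

def multivariate_backhaul(sigma_x, omega, C):
    """Subset conditions: sum_{i in S} C_i >=
    sum_{i in S} logdet(Sigma_x_i + Omega_ii) - logdet(Omega_S)."""
    ok = True
    for r in range(1, NB + 1):
        for S in itertools.combinations(range(NB), r):
            idx = np.concatenate([np.arange(i * NT, (i + 1) * NT) for i in S])
            need = sum(np.log2(np.linalg.det(sigma_x[blk(i), blk(i)]
                                             + omega[blk(i), blk(i)])) for i in S)
            need -= np.log2(np.linalg.det(omega[np.ix_(idx, idx)]))
            have = sum(C[i] for i in S)
            print(f"S = {S}: need {need:6.2f} bit, have {have:6.2f} bit")
            ok &= bool(have >= need)
    return ok

if __name__ == "__main__":
    n = 200_000
    x = rng.multivariate_normal(np.zeros(D), sigma_x, size=n)
    u = successive_compression(x)
    omega = (u - x).T @ (u - x) / n    # end-to-end quantization-noise covariance
    print("cross-BS noise block (BS 0 vs BS 1), non-zero means correlated:")
    print(np.round(omega[blk(0), blk(1)], 3))
    print("all subset constraints satisfied:",
          multivariate_backhaul(sigma_x, omega, [12.0] * NB))
```

treating the end - to - end error as an additive quantization noise independent of the signal is an approximation in this sketch ; the calibrated choice of the per - step noise levels described in the text is what makes the realized test channel match the prescribed omega .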
in the separate design, we assume that the precoding matrix is obtained via the sum - rate maximization scheme in under the power constraint for each bs with selected as discussed in sec .[ sub : separate precoding ] .note that the algorithm of finds a stationary point for the sum - rate maximization problem using the mm approach , similar to table algorithm 1 without consideration of the quantization noises .average sum - rate versus the power offset factor for the separate design of linear precoding and compression in sec .[ sec : separate ] with db and db.,width=453,height=340 ] fig .[ fig : graph gamma ] demonstrates the impact of the power offset factor on the separate design of linear precoding and compression described in sec .[ sec : separate ] with db and db .increasing means that more power is available at each bs , which generally results in a better sum - rate performance .however , if exceeds some threshold value , the sum - rate is significantly degraded since the problem of optimizing the compression covariance given the precoder is more likely to be infeasible as argued in sec .[ sub : separate precoding ] .this threshold value grows with the backhaul capacity , since a larger backhaul capacity allows for a smaller power of the quantization noises . throughout the rest of this section ,the power offset factor is optimized via numerical search .average sum - rate versus the transmit power for linear precoding with bit / c.u . and db.,width=453,height=340 ] in fig .[ fig : graph snr ] , the average sum - rate performance of the linear precoding and compression schemes is plotted versus the transmit power with bit / c.u . and db .it is seen that the gain of multivariate compression is more pronounced when each bs uses a larger transmit power .this implies that , as the received snr increases , more efficient compression strategies are called for . in a similar vein ,the importance of the joint design of precoding and compression is more significant when the transmit power is larger .moreover , it is seen that multivariate compression is effective in partly compensating for the suboptimality of the separate design . for reference, we also plot the cutset bound which is obtained as where is the sum - capacity achievable when the bss can fully cooperate under per - bs power constraints .we have obtained the rate by using the inner - outer iteration algorithm proposed in ( * ? ? ?it is worth noting that only the proposed joint design with multivariate compression approaches the cutset bound as the transmit power increases .average sum - rate versus the transmit power for the joint design in sec .[ sec : joint ] with bit / c.u . and db.,width=453,height=340 ] in fig .[ fig : graph dpc ] , we compare two precoding methods , dpc and linear precoding , by plotting the average sum - rate versus the transmit power for the joint design in sec . [sec : joint ] with bit / c.u . and db . for dpc , we have applied algorithm 1 with a proper modification for all permutations of mss indices and took the largest sum - rate . unlike the conventional broadcast channels with perfect backhaul links where there exists constant sum - rate gap between dpc and linear precoding at high snr ( see ,e.g. 
, ) , fig .[ fig : graph dpc ] shows that dpc is advantageous only in the regime of intermediate due to the limited - capacity backhaul links .this implies that the overall performance is determined by the compression strategy rather than precoding method when the backhaul capacity is limited at high snr .average sum - rate versus the backhaul capacity for linear precoding with db and db.,width=453,height=340 ] fig .[ fig : graph ci ] plots the average sum - rate versus the backhaul capacity for linear precoding with db and db . it is observed that when the backhaul links have enough capacity , the benefits of multivariate compression or joint design of precoding and compression become negligible since the overall performance becomes limited by the sum - capacity achievable when the bss are able to fully cooperate with each other .it is also notable that the separate design with multivariate compression outperforms the joint design with independent quantization for backhaul capacities larger than bit / c.u .average sum - rate versus the inter - cell channel gain for linear precoding with bit / c.u . and db.,width=453,height=340 ]finally , we plot the sum - rate versus the inter - cell channel gain for linear precoding with bit / c.u . and db in fig .[ fig : graph alpha ] .we note that the multi - cell system under consideration approaches the system consisting of parallel single - cell networks as the inter - cell channel gain decreases .thus , the advantage of multivariate compression is not significant for small values of , since introducing correlation of the quantization noises across bss is helpful only when each ms suffers from a superposition of quantization noises emitted from multiple bss .in this work , we have studied the design of joint precoding and compression strategies for the downlink of cloud radio access networks where the bss are connected to the central encoder via finite - capacity backhaul links . unlike the conventional approaches wherethe signals corresponding to different bss are compressed independently , we have proposed to exploit multivariate compression of the signals of different bss in order to control the effect of the additive quantization noises at the mss .the problem of maximizing the weighted sum - rate subject to power and backhaul constraints was formulated , and an iterative mm algorithm was proposed that achieves a stationary point of the problem .moreover , we have proposed a novel way of implementing multivariate compression that does not require joint compression of all the bss signals but is based on successive per - bs estimation - compression steps .robust design with imperfect csi was also discussed . via numerical results , it was confirmed that the proposed approach based on multivariate compression and on joint precoding and compression strategy outperforms the conventional approaches based on independent compression and separate design of precoding and compression strategies .this is especially true when the transmit power or the inter - cell channel gain are large , and when the limitation imposed by the finite - capacity backhaul links is significant .15 j. andrews , `` the seven ways hetnets are a paradigm shift , '' to appear in _ ieee comm . mag ._ , mar . 2013 .intel cor ., `` intel heterogeneous network solution brief , '' solution brief , intel core processor , telecommunications industry .j. segel and m. weldon , `` lightradio portfolio - technical overview , '' technology white paper 1 , alcatel - lucent .s. liu , j. wu , c. 
h. koh and v. k. n. lau , `` a 25 gb / s(/km ) urban wireless network beyond imt - advanced , '' _ ieee comm . mag .122 - 129 , feb . 2011 .china mobile , `` c - ran : the road towards green ran , '' white paper , ver .2.5 , china mobile research institute , oct . 2011 .t. flanagan , `` creating cloud base stations with ti s keystone multicore architecture , '' white paper , texas inst ., oct . 2011 .ericsson , `` heterogeneous networks , '' ericsson white paper , feb . 2012 .p. marsch , b. raaf , a. szufarska , p. mogensen , h. guan , m. farber , s. redana , k. pedersen and t. kolding , `` future mobile communication networks : challenges in the design and operation , '' _ ieee veh .tech . mag ._ , vol . 7 , no16 - 23 , mar .v. chandrasekhar , j. g. andrews and a. gatherer , `` femtocell networks : a survey , '' _ ieee comm . mag .59 - 67 , sep . 2008 .i. maric , b. bostjancic and a. goldsmith , `` resource allocation for constrained backhaul in picocell networks , '' in _ proc .ita 11 _ , ucsd , feb .s. h. lim , y .- h .kim , a. e. gamal and s .- y .chung , `` noisy network coding , '' _ ieee trans .inf . theory _3132 - 3152 , may 2011 .a. e. gamal and y .- h .network information theory _ , cambridge university press , 2011 .a. sanderovich , o. somekh , h. v. poor and s. shamai ( shitz ) , `` uplink macro diversity of limited backhaul cellular network , '' _ ieee trans .inf . theory _ ,55 , no . 8 , pp . 3457 - 3478 , aug .a. sanderovich , s. shamai ( shitz ) and y. steinberg , `` distributed mimo receiver - achievable rates and upper bounds , '' _ ieee trans .inf . theory _ ,55 , no . 10 , pp .4419 - 4438 , oct .a. del coso and s. simoens , `` distributed compression for mimo coordinated networks with a backhaul constraint , '' _ ieee trans .wireless comm ._ , vol . 8 , no .9 , pp . 4698 - 4709 , sep .l. zhou and w. yu , `` uplink multicell processing with limited backhaul via successive interference cancellation , '' in _ proc .ieee glob .( globecom 2012 ) _ , anaheim , ca , dec . 2012 . s .- h . park , o. simeone , o. sahin and s. shamai ( shitz ) , `` robust and efficient distributed compression for cloud radio access networks , '' _ ieee trans692 - 703 , feb . 2013 .m. grieger , s. boob and g. fettweis , `` large scale field trial results on frequency domain compression for uplink joint detection , '' in _ proc .ieee glob .( globecom 2012 ) _ , anaheim , ca , dec .b. nazer , a. sanderovich , m. gastpar and s. shamai ( shitz ) , `` structured superposition for backhaul constrained cellular uplink , '' in _ proc . ieee intern .inf . theory ( isit 2009 ) _ , seoul , korea , jun .hong and g. caire , `` quantized compute and forward : a low - complexity architecture for distributed antenna systems , '' in _ proc .theory workshop ( itw 2011 ) _ , paraty , brazil , oct .o. simeone , o. somekh , h. v. poor and s. shamai ( shitz ) , `` downlink multicell processing with limited - backhaul capacity , '' _eurasip j. adv .proc . _ , 2009 .p. marsch and g. fettweis , `` on downlink network mimo under a constrained backhaul and imperfect channel knowledge , '' in _ proc .ieee glob .( globecom 2009 ) _ , honolulu , hawaii , nov . 2009 .m. h. m. costa , `` writing on dirty paper , '' _ ieee trans .inf . theory _ ,439 - 441 , may 1983 .hong and g. caire , `` reverse compute and forward : a low - complexity architecture for downlink distributed antenna systems , '' in _ proc .ieee intern .inf . theory ( isit 2012 ) _ , cambridge , ma , jul .b. l. ng , j. s. evans , s. v. hanly and d. 
aktas , `` distributed downlink beamforming with cooperative base stations , '' _ ieee trans .inf . theory _ , vol .5491 - 5499 , dec . 2008. i. sohn , s. h. lee and j. g. andrews , `` belief propagation for distributed downlink beamforming in cooperative mimo cellular networks , '' _ ieee trans .wireless comm .4140 - 4149 , dec . 2011 .r. zakhour and d. gesbert , `` optimized data sharing in multicell mimo with finite backhaul capacity , '' _ ieee transproc . _ , vol .6102 - 6111 , dec .o. simeone , n. levy , a. sanderovich , o. somekh , b. m. zaidel , h. v. poor and s. shamai ( shitz ) , `` cooperative wireless cellular systems : an information - theoretic view , '' _ foundations and trends in communications and information theory _ , vol. 8 , nos . 1 - 2 , pp . 1 - 177 , 2012 .l. zhang , r. zhang , y .- c .liang , y. xin and h. v. poor , `` on gaussian mimo bc - mac duality with multiple transmit covariance constraints , '' _ ieee trans .inf . theory _ ,2064 - 2078 , apr . 2012 .x. zhang , j. chen , s. b. wicker and t. berger , `` successive coding in multiuser information theory , '' _ ieee trans .inf . theory _ ,53 , no . 6 , pp . 2246 - 2254 , jun .r. zhang , `` cooperative multi - cell block diagonalization with per - base - station power constraints , '' _ieee j. sel .areas comm .1435 - 1445 , dec .2010 . m. hong , r .- y .sun , h. baligh and z .- q .luo , `` joint base station clustering and beamformer design for partial coordinated transmission in heterogeneous networks , '' _ ieee j. sel .areas comm .226 - 240 , feb . 2013 . c. t. k. ng and h. huang ,`` linear precoding in cooperative mimo cellular networks with limited coordination clusters , '' _ ieee j. sel .areas comm .9 , pp . 1446 - 1454 , dec .m. hong , q. li , y .- f .liu and z .- q .luo , `` decomposition by successive convex approximation : a unifying approach for linear transceiver design in interfering heterogeneous networks , '' arxiv:1210.1507 .u. erez and s. t. brink , `` a close - to - capacity dirty paper coding scheme , '' _ ieee trans .inf . theory _ ,3417 - 3432 , oct .g. d. forney , jr ., `` shannon meets wiener ii : on mmse estimation in successive decoding schemes , '' arxiv:0409011v2 .d. n. c. tse and s. v. hanly , `` multiaccess fading channels - part i : polymatroid structure , optimal resource allocation and throughput capacities , '' _ ieee trans .inf . theory _ ,44 , no . 7 , pp .2796 - 2815 , nov .a. beck and m. teboulle , `` gradient - based algorithms with applications to signal recovery problems , '' in _convex optimization in signal processing and communications _ ,y. eldar and d. palomar , eds .42 - 88 , cambridge university press . 2010 .g. scutari , f. facchinei , p. song , d. p. palomar and j .- s .pang , `` decomposition by partial linearization : parallel optimization of multi - agent systems , '' arxiv:1302.0756v1 .s. loyka and c. d. charalambous , `` on the compound capacity of a class of mimo channels subject to normed uncertainty , '' _ ieee trans .inf . theory _ , vol .2048 - 2063 , apr .2012 . c. shen , t .- h .chang , k .- y .wang , z. qiu and c .- y .chi , `` distributed robust multicell coordinated beamforming with imperfect csi : an admm approach , '' _ ieee trans .sig . proc .60 , no . 6 , pp .2988 - 3003 , jun . 2012 .e. bjornson and e. jorswieck , `` optimal resource allocation in coordinated multi - cell systems , '' _ foundations and trends in communications and information theory _ ,vol . 9 , nos . 2 - 3 , pp .113 - 381 , 2013 .s. boyd and l. 
vandenberghe , _ convex optimization _ , cambridge university press , 2004 .d. gesbert , s. hanly , h. huang , s. shamai ( shitz ) , o. simeone and w. yu , `` multi - cell mimo cooperative networks : a new look at interference , '' _ ieee j. sel .areas comm .1380 - 1408 , dec . 2010h. huh , h. c. papadopoulos and g. caire , `` multiuser miso transmitter optimization for intercell interference mitigation , '' _ ieee trans .4272 - 4285 , aug .j. lee and n. jindal , `` high snr analysis for mimo broadcast channels : dirty paper coding versus linear precoding , '' _ ieee trans .inf . theory _ ,4787 - 4792 , dec . 2007
this work studies the joint design of precoding and backhaul compression strategies for the downlink of cloud radio access networks . in these systems , a central encoder is connected to multiple multi - antenna base stations ( bss ) via finite - capacity backhaul links . at the central encoder , precoding is followed by compression in order to produce the rate - limited bit streams delivered to each bs over the corresponding backhaul link . in current state - of - the - art approaches , the signals intended for different bss are compressed independently . in contrast , this work proposes to leverage joint compression , also referred to as multivariate compression , of the signals of different bss in order to better control the effect of the additive quantization noises at the mobile stations ( mss ) . the problem of maximizing the weighted sum - rate with respect to both the precoding matrix and the joint correlation matrix of the quantization noises is formulated subject to power and backhaul capacity constraints . an iterative algorithm is proposed that achieves a stationary point of the problem . moreover , in order to enable the practical implementation of multivariate compression across bss , a novel architecture is proposed based on successive steps of minimum mean - squared error ( mmse ) estimation and per - bs compression . robust design with respect to imperfect channel state information is also discussed . from numerical results , it is confirmed that the proposed joint precoding and compression strategy outperforms conventional approaches based on the separate design of precoding and compression or independent compression across the bss . cloud radio access network , constrained backhaul , precoding , multivariate compression , network mimo .
to quantitatively describe an evolving ecosystem requires some general principle that applies equally to all species .a candidate for such a principle is to consider the relationship between an organism s genotype and its _ fitness _ , which is some measure of the expected number of genes passed back into the species gene pool ( dawkins 1983 ) .each point in the multidimensional space of all possible genotypes can be assigned its own fitness value , forming a _ fitness landscape _ , as schematised in fig .[ f : fit_land ] .the process of evolution can then be described as a walk over this landscape in the direction of increasing fitness . rather than try to calculate the fitness for each genotype from first principles , clearly a hopeless task , kauffman has assumed that the relationship is so complex that it can be well approximated by random variables ( kauffman 1993 ) .this leads to the concept of _ rugged fitness landscapes _ , where `` rugged '' refers to the large variations in fitness that can result from small changes in genotype .models based on the bak - sneppen approach assume that each species moves on its own rugged fitness landscape , eventually becoming trapped in the region of a local maximum .if the landscape is fixed , then the species can only evolve by moving to a different maximum .suppose that a species labelled is at a local maximum which is separated from nearby maxima by fitness barriers of heights , where and the are ordered such that . over time , fluctuations in the species fitness may bring it into the vicinity of one of its barriers , allowing it to crossover to a different maximum .this constitutes an evolutionary event in which the species changes from one typical genotype to another . for uncorrelated fluctuations ,the expected time to crossover a barrier of height will be given by an arrhenius equation of the form where the constant fixes the timescale and is analogous to temperature .since the landscape is rugged , a species that escapes from one maximum will soon become trapped by another , and will find itself surrounded by an entirely new set of barriers . in the limit , ( [ e : se_tevolve ] ) implies that it will always be the species with the smallest in the system that evolves first , and that the other species will have moved no appreciable distance towards their own barriers by the time this occurs .thus the ecosystem will consist of species that infrequently hop between maxima at a rate that depends upon the , but are otherwise essentially static .the evolution of species will alter the landscapes of all those species linked to it in the ecosystem , for instance via predator - prey or host - parasite relationships . although each species will in general have to move on their new landscapes before finding a new maximum , it is a further approximation of the theory that this can be ignored and only the barriers are affected .the original bak - sneppen model is defined purely in terms of the smallest barriers , which are arranged on a lattice in such a way that interacting species occupy adjacent lattice sites . 
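the separation of time scales behind ` only the smallest barrier matters ' is easy to see numerically : with escape times given by the arrhenius form ( [ e : se_tevolve ] ) , the minimum - barrier species accounts for essentially all evolutionary activity once the temperature - like parameter is small . barrier values and the time - scale constant below are made up .

```python
import numpy as np

rng = np.random.default_rng(2)

def crossing_times(barriers, T, t0=1.0):
    """Arrhenius escape times t = t0 * exp(B / T) for a set of fitness barriers."""
    return t0 * np.exp(np.asarray(barriers) / T)

if __name__ == "__main__":
    barriers = np.sort(rng.uniform(size=8))       # one barrier per species (illustrative)
    for T in (0.5, 0.1, 0.02):
        rates = 1.0 / crossing_times(barriers, T)
        share = rates[0] / rates.sum()            # activity carried by the smallest barrier
        print(f"T = {T:4.2f}: smallest barrier accounts for {100 * share:5.1f}% of the events")
```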
the evolutionary process described in the previous paragraphs then reduces to the dynamical interaction between adjacent .the system is static until the site with the smallest evolves , when and all of the in adjacent sites are assigned new values .the system is again static until another site evolves , and so on .it has been shown that the essential system behaviour is insensitive to details such as the choice of probability distribution for the ( bak 1993 , paczuski 1996 ) .this _ robustness_ relates to the universality of the critical state , and is essential if such a simple model is to faithfully describe real ecosystems .the bak - sneppen model can be enhanced by a more detailed consideration of the fitness landscapes and their interaction .three features absent from the original model will be considered here , namely _ speciation _ , _ extinction _ and _ external noise_. each feature is described in general terms below before the new model is fully specified . _ speciation : _ speciation occurs when two sub - populations reach a state of reproductive isolation and should be considered as separate species ( maynard - smith 1993 , ridley 1993 ) .for instance , two groups that are reunited after prolonged geographical isolation may have evolved so much in different directions that they are unable to produce viable offspring . up until now, a species has been described as simply occupying a region of genotype space .more detailed analysis shows that a population forms a `` cloud '' of points of roughly equal fitness around the local maximum ( kauffman 1993 ) .normally the whole population crosses over the same barrier , but if then it is possible that part of the population will instead cross over the barrier . if this happens , the two subpopulations will move to different maxima and a speciation event will have occurred . a simple criterion for speciation is to say that species will branch into two subspecies if when it evolves to a new form , where is a constant parameter .further barriers could also be considered to incorporate the simultaneous splitting into three or more subspecies , but such events will be very rare and are ignored here . _ extinction : _ the system size would increase without limit if only speciation were allowed , so some mechanism is required by which a species can be made extinct and permanently removed from the system .the original bak - sneppen model does not distinguish between this form of extinction and _ pseudo - extinction _ , which is where a rapidly evolving species disappears from the fossil record if its intermediate forms are not recorded .what is required is some criterion for true extinction defined purely in terms of the individual species fitness landscapes , analogous to ( [ e : se_spec ] ) .it is not clear how this may be achieved . instead, a heuristic approach is adopted here , which is to say that a species is made extinct if it is linked to the species with the minimum barrier , and has greater than some threshold value .this proves to be the simplest choice for which the system size does not diverge . 
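a minimal sketch of the lattice dynamics just described : find the global minimum barrier , redraw it together with its two lattice neighbours , and repeat . the two - barrier speciation test is counted as well , with the stripped criterion assumed to be b2 - b1 < s ; here it only counts events and does not yet change the system size , which is what the full model below does .

```python
import numpy as np

rng = np.random.default_rng(3)

def bak_sneppen(n_species=200, n_steps=20_000, s=0.05):
    """1-D Bak-Sneppen dynamics, keeping the two smallest barriers per site.
    The speciation threshold s is an assumed illustrative value."""
    b = np.sort(rng.uniform(size=(n_species, 2)), axis=1)   # b1 <= b2 per site
    speciation_events = 0
    for _ in range(n_steps):
        i = int(np.argmin(b[:, 0]))                   # site with the smallest barrier
        if b[i, 1] - b[i, 0] < s:                     # both barriers crossed together
            speciation_events += 1
        for j in (i - 1, i, (i + 1) % n_species):     # evolving site and its neighbours
            b[j] = np.sort(rng.uniform(size=2))       # periodic boundaries via wrap-around
    return b[:, 0], speciation_events

if __name__ == "__main__":
    minima, n_spec = bak_sneppen()
    print(f"counted speciation events: {n_spec}")
    print(f"10th percentile of the smallest barriers: {np.percentile(minima, 10):.2f}")
```

the last number illustrates the self - organised threshold : after the transient , small barriers are almost entirely absent except near the site about to evolve .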
_external noise : _ a fitness landscape is ultimately a function of the species itself , the species with which it interacts , and any factors external to the ecosystem , so fluctuations in the inorganic environment can also cause the fitness barriers to change .examples include local disturbances such as volcanic eruptions or the formation of a new river , to global events including meteor impacts and changes in the sea level .the interactions between species have already been incorporated into the model , but no allowance has yet been made for these abiotic factors .continuing with the philosophy that changes in fitness can be approximated by random variables , external influences are assumed to alter the fitness landscapes by an amount per unit time , where is a new parameter .more precisely , every species in the system will have their barriers altered by an amount where the are uniformly distributed on $ ] and uncorrelated in time .external effects will occur on a separate timescale to the evolutionary processes in ( [ e : se_tevolve ] ) , but for simplicity both timescales are fixed at the same constant rate in this model . it remains to be decided how interacting species are linked together .the original bak - sneppen model placed the species on a regular crystalline lattice in which interacting species occupy adjacent sites , but this is not flexible enough to incorporate the addition of new species to the system and is of no use here .real food webs have a much more involved structure , and if the full range of interactions is allowed rather than just links in the food chain , then it appears that a great many species interact at least weakly ( hall 1993 , caldarelli 1998 ) . trying to model thiswould only serve to draw attention away from the main motivation for the new model , which is to allow a variable system size .instead , we adopt the mean field approach in which each species interacts with other species chosen at random from the system .the species are reselected at every time step . the extended model can now be fully specified .the ecosystem consists of species labelled by .each species occupies the region around a local maximum on a rugged fitness landscape , and is separated from nearby maxima by barriers of various heights .the larger barriers can be ignored since they will rarely contribute to the dynamics , but at least two must be retained if speciation is to be incorporated .hence each species is defined by its two smallest barriers and , which are uniformly distributed over the range [ 0,1 ] and then ordered so that .the following steps ( i)(vi ) are iterated for every time step .\(i ) _ evolution : _ the smallest in the system is found and marked for evolution .it will move to a new maximum in step ( vi ) .\(ii ) _ speciation : _ if the single species marked for evolution has , then a new species is introduced to the system with random barriers . .\(iii ) _ interaction : _ other species are chosen at random from the remaining in the system .they will be assigned new barriers in step ( vi ) .\(iv ) _ extinction : _ if any the of the interacting species has , it is removed from the system . .\(v ) _ external noise : _ every barrier in the system is transformed according to ( [ e : se_noise ] ) , and reordered if necessary .\(vi ) _ new barriers : _ the species marked for evolution in step ( i ) and the interacting species from step ( iii ) are assigned new random barriers , ordered such that . 
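steps ( i ) to ( vi ) translate almost line by line into code . several numerical details were stripped from the text , so the sketch below fills them with labelled assumptions : the speciation criterion is taken as b2 - b1 < s , extinction requires both barriers of an interacting species to exceed the threshold of 1 , the external noise is uniform on [ -sigma , sigma ] , and the parameter values are illustrative rather than those of table [ t : se_num ] .

```python
import numpy as np

rng = np.random.default_rng(4)

def new_pair():
    return np.sort(rng.uniform(size=2))

def step(b, s, k, sigma, threshold=1.0):
    """One iteration of steps (i)-(vi); b is an (N, 2) array with b1 <= b2 per species."""
    n = len(b)
    i = int(np.argmin(b[:, 0]))                                   # (i)   evolution
    speciate = b[i, 1] - b[i, 0] < s                              # (ii)  speciation (assumed test)
    others = rng.choice(np.delete(np.arange(n), i),
                        size=min(k, n - 1), replace=False)        # (iii) interaction
    extinct = [j for j in others if b[j, 0] > threshold]          # (iv)  extinction (assumed test)
    b += rng.uniform(-sigma, sigma, size=b.shape)                 # (v)   external noise
    b.sort(axis=1)
    for j in np.concatenate(([i], others)):                       # (vi)  new barriers
        b[j] = new_pair()
    keep = np.ones(n, dtype=bool)
    keep[extinct] = False
    b = b[keep]
    if speciate:
        b = np.vstack([b, new_pair()])
    return b

def simulate(n0=100, n_steps=30_000, s=0.05, k=3, sigma=0.1):
    b = np.sort(rng.uniform(size=(n0, 2)), axis=1)
    sizes = []
    for _ in range(n_steps):
        b = step(b, s, k, sigma)
        sizes.append(len(b))
        if sizes[-1] < k + 2:          # guard: the system has effectively died out
            break
    return np.array(sizes)

if __name__ == "__main__":
    sizes = simulate()
    print(f"final size {sizes[-1]}, "
          f"mean over the last half {sizes[len(sizes) // 2:].mean():.1f}")
```

whether the size settles , grows or collapses depends on ( s , k , sigma ) exactly as analysed in the following sections ; the sketch only fixes the bookkeeping of the six steps .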
with such specific definitions of general processes ,it is obviously important to check that the model is robust to any arbitrary choices . to test this ,the simulations have been repeated with various changes to the rules , and in no case was any qualitative deviation observed .for instance , different values for the extinction threshold in step ( ii ) give the same behaviour , even if the threshold value varies in time around a fixed mean .both and were chosen from uniform , gaussian and exponential distributions , again with no apparent change in behaviour .since the model appears to be robust , further discussion will be restricted to the simple set of rules given in . the threshold value for extinctionwas fixed at 1 to minimise the number of new parameters .before continuing , it should be pointed out that the algorithm presented in steps above is not exactly the same as that described in our previous exposition of this work ( head 1997 ) .this earlier model assumed that all species `` mutate '' ( evolve ) at every time step , whereas it is of course just the species with the minimum barrier that evolves .the corrected model studied here behaves in much the same way as its previous incarnation , except that the number of species is now only weakly dependent on the connectivity .this is in agreement with data for real multispecies communities , as discussed further in the next section .the quantity of interest is the total system size .this varies in a manner that depends upon the choice of values for the parameters , and , as described below . : steps ( ii ) , ( iv ) and ( v ) never feature in the time evolution of the system and the larger barriers are redundant .the interact in the same way as the original bak - sneppen model , the only difference being that each is the smaller of two uniform distributions on [ 0,1 ] and so is distributed according to , . since the model is robust to the choice of probability distribution , this difference is not important . remains fixed at its initial value . : there are no interactions , and the species that evolves will always have so extinction is impossible . if or remains fixed if . _ , and _ : there is speciation but no extinction , . _ , and _ : there is extinction but no speciation , . _ , and _ : fluctuates around some constant value which is independent of the initial system size . an example is given in fig .[ f : se_n_infty ] .note that if , is so small that statistical fluctuations will eventually send and the simulation is over .that should exist at all is by no means obvious , since does not explicitly appear in the rules for speciation and extinction .it exists because of the external noise of order , which is just as likely to push two barriers apart as to bring them together and so does not affect the rate of speciation . 
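the remark that each barrier entering the noiseless dynamics is the smaller of two uniform variates can be made concrete : the minimum of two independent u[0,1] draws has density 2 ( 1 - b ) , since p( min > b ) = ( 1 - b )^2 , and this is the distribution that replaces the uniform one of the original model . a quick check :

```python
import numpy as np

rng = np.random.default_rng(5)

# the smaller of two independent U[0,1] barriers has density p(b) = 2 (1 - b)
samples = np.min(rng.uniform(size=(1_000_000, 2)), axis=1)
hist, edges = np.histogram(samples, bins=10, density=True)
for c, h in zip(0.5 * (edges[:-1] + edges[1:]), hist):
    print(f"b = {c:.2f}: empirical {h:.3f}   vs   2(1-b) = {2 * (1 - c):.3f}")
```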
however , the noise acts asymmetrically on barriers near the threshold for extinction , tending to push species over this threshold into the small tail corresponding to those species that will be made extinct when next selected .since every species is subjected to external noise at every time step , the rate of extinction increases with whilst the rate of speciation remains roughly constant .a steady state will be found when these two rates balance .this qualitative reasoning is confirmed by the analysis in sec .[ s : se_anal ] .the parameters and are abstract quantities defined purely in terms of the model , so it is not possible to estimate their values for real ecosystems .nonetheless it is intuitively reasonable to assume that speciation and extinction events are rare , and therefore both and should be small .the number of links per species has been measured for real communities , and according to some studies is independent of the system size ( hall 1993 , kauffman 1993 ) .this is in agreement with numerical simulations of the model , which shows only a weak dependence on from the range to , as given in table [ t : se_num ] ( at end of document ) .there is a small peak around , which also corresponds to the most common value of observed in nature .however , the data for real ecosystems is based on food webs whereas the bak - sneppen approach considers all direct interactions between species , so it is not clear how far this comparison can be taken . turning to consider global ecosystems , the fossil record for all marine organisms highlights the possibility of a statistical steady state throughout much of the palaeozoic era. the total number of ( families of ) species fluctuates around a roughly constant value up until the mass extinction at the end of the permian period , after which the system size increased beyond its earlier levels and is still increasing today ( benton 1995 ) .it could be conjectured that the new species that emerged after the end - permian extinction were on average either more likely to speciate , or less susceptible to external noise , or both , which should result in an increased system size according to the model .the data for continental organisms is less clear and if anything shows a continuing increase in diversity at varying rates .the distribution of the magnitude of the change in per unit time is poisson to first order , implying that the speciation and extinction events are uncorrelated for .however , the distribution is not _ precisely _ poisson , which is presumably due to the tendency for to drift towards when is large .an example is presented in fig .[ f : se_jump_size ] . that the distribution was not power law is disappointing , but perhaps unsurprising given that the interactions between species are randomised at every time step , making it difficult for the system to self - organise .it is possible that a spatially extended model might allow for correlations to build up towards a critical state and a power law to be recovered , but this must remain as speculation at present .it is possible to derive the dependence of on the parameters , and by extending the mean field solution of the original bak - sneppen model ( flyvberg 1993 ) . 
in theorythis approach could give the exact solution , since the interacting species are selected at random in the new model and so it is mean field by definition .however , the increased dynamical complexity means that only the first order parameter dependence has be calculated .* standard model with two barriers per species * the original solution was based on a single barrier per species . before tackling speciation and extinction ,it must first be shown how the mean field approach can be modified to handle pairs of barriers .define to be the probability that a randomly selected species has one barrier in the range [ x , x+dx ) and the other in [ y , y+dy ) .note that this refers to the barriers _ before _ ordering , so can be less that or greater than .the probability for a species to have _ both _ barriers greater than is represented by , which is related to by since for values of or outside the range [ 0,1 ] , for and for. the species with the smallest barrier can be any of the in the system , as long as all of the remaining species have larger barriers .hence , the probability distribution for the species with the smallest barrier , is given by at each time step , will change by an amount which is given by the master equation the first term on the right hand side of ( [ e : se_evolution ] ) accounts for the evolution of the species with the lowest barrier , the second for the new barriers assigned to the species with which it interacts , and the third term handles the new pairs of barriers . in the statistical steady state , and , using ( [ e : se_pmin ] ) and ( [ e : se_evolution ] ) , the solution to ( [ e : se_steady ] ) depends upon the behaviour of as ( flyvberg 1993 ) .if , then the term proportional to vanishes and conversely , if either or is so small that , then the second term in ( [ e : se_steady ] ) will be and , so and hence from ( [ e : se_pmin ] ) , each solution applies in different regions of the plane , which , for large , will be separated by sharply defined boundaries .these boundaries can be found by remembering that both and are probability distributions and normalise to one .to first order in , the full solutions are hence the species with the smallest barrier will always be found in the region where , and its interacting species will always come from the region corresponding to . 
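the qualitative statement at the end of this paragraph , that the minimum barrier and the randomly chosen interacting species live in sharply separated regions , can be checked by monte carlo on the random - neighbour ( mean - field ) version of the model with only steps ( i ) , ( iii ) and ( vi ) active . system size , the number of partners and the run length below are illustrative .

```python
import numpy as np

rng = np.random.default_rng(6)

def random_neighbour_model(n=1000, k=3, n_steps=150_000, burn_in=50_000):
    """Mean-field two-barrier dynamics: redraw the minimum-barrier species and
    k randomly chosen partners at every step (no speciation, extinction or noise)."""
    b = np.sort(rng.uniform(size=(n, 2)), axis=1)
    selected_minima = []
    for t in range(n_steps):
        i = int(np.argmin(b[:, 0]))
        if t >= burn_in:
            selected_minima.append(b[i, 0])
        others = rng.choice(n - 1, size=k, replace=False)
        others[others >= i] += 1                    # exclude the evolving species itself
        for j in np.concatenate(([i], others)):
            b[j] = np.sort(rng.uniform(size=2))
    return np.asarray(selected_minima), b[:, 0]

if __name__ == "__main__":
    minima, smallest = random_neighbour_model()
    thr = np.quantile(minima, 0.999)
    print(f"selected minimum barriers are (almost) always below ~{thr:.3f}")
    print(f"fraction of species with their smaller barrier below that value: "
          f"{np.mean(smallest < thr):.4f}")
```

the first number estimates the boundary of the region supporting the minimum - barrier distribution ; the second shows that , at any instant , only a vanishing fraction of the population sits below it , which is the sharp separation used in the solution above .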
*analysis for and non - zero but small * when either or , the system size becomes a function of time and the algebra quickly becomes prohibitive .the simpler and more intuitive approach adopted here is to initially ignore speciation and extinction altogether and only incorporate the external noise of order .this leads to new solutions for and , from which the rates of speciation and extinction can be calculated even though they are no longer dynamically involved .the natural system size is then the value of for which the two rates balance .the analysis presented below assumes that is small ; large values of and are considered at the end of this section .the effect of the external noise will be to perturb the solutions for and given in ( [ e : se_solp ] ) and ( [ e : se_solpmin ] ) , as schematised in fig .[ f : se_mfsol ] .the master equation ( [ e : se_evolution ] ) must be modified in two ways .first , the external noise can cause barriers to move outside of the range [ 0,1 ] , so the range of possible and must be extended .however , the barriers are still assigned values in the range [ 0,1 ] and the term for new barriers must be altered accordingly .secondly , a term for the noise itself must be included .the new steady state equation is the theta functions in the first term ensure that new barriers lie in the range [ 0,1 ] .the last term on the right hand side of ( [ e : se_stdynoise ] ) accounts for the external noise , where is the laplacian operator .a full derivation of this term is given in the appendix ._ rate of extinction : _ each of the random neighbours will be made extinct if it has and .thus the rate of extinction is given by where the integral is over the region in fig .[ f : se_mfsol ] .strictly speaking , the distribution in this equation should be , but this distinction can be ignored for large . when both barriers are large , and ( [ e : se_stdynoise ] ) can be simplified by the transformation to give for either or negative , corresponding to or , the second term on the right - hand side of ( [ e : se_newvar ] ) vanishes and the equation can be solved by separation of variables .coupled with the boundary conditions for or , the solution is where is an arbitrary constant .whatever the value of , it must be independent of and since these parameters do not appear in ( [ e : se_newvar ] ) .transforming back into the original variables gives the explicit parameter dependence , substituting this into ( [ e : se_ke ] ) gives _ rate of speciation : _ it has not been possible to find a solution to ( [ e : se_newvar ] ) for and .the variable separable solution does not behave correctly , and other methods tried have been fruitless .instead , the solution will be used as a first approximation .the rate of speciation will be proportional to the density of species with .since only the species with the minimum barrier can speciate , substituting the explicit expression for from ( [ e : se_solpmin ] ) into ( [ e : se_k_s ] ) gives for small . 
with , broadens and so the real rate of speciation will decrease for larger .the value of can now be found up to parameter dependence .the rates of speciation and extinction balance when , and therefore this implies that should be roughly constant .this quantity has been calculated from the numerical simulations and is shown in table [ s : se_num ] .the agreement is good for variations in , but less so for and .this most probably reflects the first order approximation used in deriving in ( [ e : se_ks ] ) .* either or large * for the sake of completeness , the equivalent expression to ( [ e : se_ninfty ] ) will now be derived for large or , although such values bear no relevance to actual systems .if is large but small , the system size rapidly increases and with it the expected time a species will move under the influence of external noise before being assigned new barriers . similarly ,if both and are large , then the system behaviour is also dominated by the external noise .this is called the _ noise dominated regime_. if is small but large , becomes so low that fluctuations will eventually drive every species in the system to extinction . in the noise dominated regime , will no longer be just a small perturbation around the original solution but will extend to large positive and negative values in both the and directions .since the external noise is isotropic , will be symmetric about the and axes and at most 1 in 4 species have both barriers in the region .hence the rate of extinction will approach its upper bound value of when a barrier is assigned a new value in the range [ 0,1 ] , it undergoes an unbiased random walk until it is again assigned a new value and brought back to near the origin .the average number of steps in this walk will be and , since the average step size is , an analogy with a one - dimensional random walker implies that the total distance travelled will be ( papoulis 1991 ) .this gives the width of the barrier distribution in both the and directions .the number of species in the infinite strip is inversely proportional to the width of , so the rate of speciation is now given by as increases , the rate of extinction will remain roughly constant but now the rate of speciation will decrease until a balance is found at . from ( [ e : se_kemax ] ) and ( [ e : se_ksnew ] ) , the corresponding value of is a convenient way to display the crossover in behaviour from small to the noise dominated regime is to consider as a function of . according to ( [ e : se_ninfty ] ) and ( [ e : se_nnew ] ) , this should change from for small to for large . 
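the square - root - of - n spread invoked for the noise - dominated regime follows from each barrier performing an unbiased random walk between redraws . a quick check replaces the full dynamics by its caricature : every barrier takes uniform ( -sigma , sigma ) steps and is reset to u[0,1] with probability ( k+1)/n per time step , so that it survives of order n/(k+1 ) steps ; the parameter values are made up .

```python
import numpy as np

rng = np.random.default_rng(7)

def stationary_spread(n_species, k=3, sigma=0.5, n_steps=100_000):
    """Spread of barriers that random-walk with uniform(-sigma, sigma) steps and
    are reset to U[0,1] roughly every n_species/(k+1) steps."""
    b = rng.uniform(size=n_species)
    spreads = []
    for t in range(n_steps):
        b += rng.uniform(-sigma, sigma, size=n_species)
        reset = rng.random(n_species) < (k + 1) / n_species
        b[reset] = rng.uniform(size=reset.sum())
        if t > n_steps // 2:                       # measure only after equilibration
            spreads.append(b.std())
    return float(np.mean(spreads))

if __name__ == "__main__":
    for n in (50, 200, 800):
        w = stationary_spread(n)
        print(f"N = {n:4d}: spread = {w:7.2f}, spread / sqrt(N) = {w / np.sqrt(n):.3f}")
```

the last column comes out roughly constant , which is the sqrt(n) scaling used above to argue that the rate of speciation falls off while the rate of extinction saturates .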
numerical results in support of this predictionare presented in fig .[ f : se_crossover ] .in summary , we have postulated one possible way in which models of macroevolution based on the bak - sneppen approach can be extended to incorporate speciation , extinction and abiotic influences .the speciation and extinction mechanisms are defined purely in terms of each individual species fitness landscape , irrespective of the total number of species in the system .nonetheless , the total diversity fluctuates around a constant value , which was termed the _ natural system size _ to stress that it was not arbitrarily chosen .although the proposed mechanism for speciation , _ ie ._ a population simultaneously crossing two different fitness barriers , seems appealing , the extinction mechanism is far more heuristic and somewhat unsatisfactory .a better model might focus on trying to find a more plausible means of extinction , defined in terms of the fitness landscapes .for instance , the species chosen for evolution might be made extinct if its fitness barrier is below some threshold value .it may also be possible to place the model on a web structure , and allow the connections themselves to be subject to alteration whenever a species evolves to a new form .the value of was found to be only weakly dependent upon the average number of connections per species in the system , in agreement with known data .this leads us to hope that simple models such as ours may be able to reproduce the essential behaviour of real ecosystems .more realistic models consider the full fitness landscapes rather than just the barriers , but the increased complexity limits the system sizes that can be simulated ( kauffman 1993 ) . a practical step forward might be to reduce known biological principles to simple rules that may be applied to global ecosystems .we would like to thank prof .mark newman for useful discussions concerning our model .* bak * p. and sneppen k. 1993 _ `` punctuated equilibria and criticality in a simple model of evolution '' _ phys .rev . lett . * 71 * 4083 - 4086 * bak * p. 1997_ `` how nature works : the science of self - organized criticality '' _ oxford university press * benton * m. j. 1995 _ `` diversification and extinction in the history of life '' _ science * 268 * 52 - 58 * caldarelli * g. , higgs p. g. and mckane a. j. 1998 _ `` modelling coevolution in multispecies communities '' _ preprint adap - org/9801003 * dawkins * r. 1982 _ `` the extended phenotype '' _ oxford university press * flyvberg * h. , sneppen k. and bak p. 1993 _ `` mean - field theory for a simple model of evolution '' _ phys . rev . lett . * 71 * 4087 - 4090 * hall * s. j. and raffaelli d. g. 1993 _ `` food webs - theory and reality '' _ adv. res . * 24 * 187 - 239 * head * d. a. and rodgers g. j. 1997 _ `` speciation and extinction in a simple model of evolution '' _ phys . rev .e * 55 * 3312 - 3319 * kauffman * s. a. 1993 _ `` the origins of order '' _ oxford university press * kramer * m. vandewalle n. and ausloos m. 1996 _ `` speciations and extinctions in a self - organising critical model of tree - like evolution '' _ j. phys .i ( france ) * 6 * 599 - 606 * maynard - smith * j. 1993 _ `` the theory of evolution '' _ cambridge university press * newman * m. e. j. 1997 _ `` a model of mass extinction '' _ j. theor .biol . * 189 * 235 - 252 * paczuski * m. maslov s. and bak p. 1996_ `` avalanche dynamics in evolution , growth and depinning models '' _ phys .e * 53 * 414 - 443 * papoulis * a. 
1991 _ `` probability , random variables and stochastic processes '' _ mcgraw - hill * peliti * l. 1997 _ `` an introduction to the statistical theory of darwinian evolution '' _ preprint cond - mat/9712027 * ridley * m. 1993 _ `` evolution '' _ blackwell scientific publications * roberts * b. w. and newman m. e. j. 1996 _ `` a model for evolution and extinction '' _ j. theor . biol . * 180 * 39 - 54 * sol * r. v. and manrubia s. c. 1996 _ `` extinction and self - organised criticality in a model of large - scale evolution '' _ phys . rev . e * 54 * r42 - 45 * sol * r. v. , manrubia s. c. , benton m. and bak p. 1997a _ `` self - similarity of extinction statistics in the fossil record '' _ nature * 388 * 764 - 767 * sol * r. v. and manrubia s. c. 1997b _ `` criticality and unpredictability in macroevolution '' _ phys . rev . e * 55 * 4500 - 4507 * wilke * c. and martinetz t. 1997 _ `` simple model of evolution with variable system size '' _ phys . rev . e * 56 * 7128 - 7131

in this appendix , the term for external noise that appears in ( [ e : se_stdynoise ] ) is derived . assuming that the noise amplitude is small , noise effects alone will cause the distribution to take the mean value over the small surrounding square whose side is set by the noise amplitude . this new term is added to ( [ e : se_evolution ] ) , the expression for the change of the distribution per timestep , to give the total change at every timestep . setting this to zero then gives the new steady state equation ( [ e : se_stdynoise ] ) .

table : observed values of the natural system size for different choices of the model parameters . the standard deviation of the fluctuations is given in brackets , which also serves as a rough error bar . the data have been averaged over at least three separate runs of timesteps each . note that one and the same parameter combination appears three times to allow for easier comparison . [ table body not recoverable from this extraction ]
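The appendix's replacement of the noise by the mean over a small surrounding square can be connected to the Laplacian term by a Taylor expansion. The sketch below checks numerically that averaging a smooth function over a square of half-width a reproduces p + (a^2/6) * laplacian(p) to leading order; the a^2/6 coefficient comes from that expansion done here, not from the paper's (unreproduced) equation, and the test function is arbitrary.

```python
import numpy as np

def p(x, y):
    # arbitrary smooth test function
    return np.exp(-(x**2 + 2.0 * y**2))

def lap_p(x, y):
    # analytic laplacian of the test function above
    return (4.0 * x**2 - 2.0 + 16.0 * y**2 - 4.0) * p(x, y)

x0, y0, a = 0.3, -0.2, 0.02            # expansion point and square half-width
u = np.linspace(-a, a, 401)
U, V = np.meshgrid(u, u, indexing="ij")

square_mean = p(x0 + U, y0 + V).mean()                       # average over the square
taylor = p(x0, y0) + (a**2 / 6.0) * lap_p(x0, y0)            # leading-order expansion
print(square_mean, taylor)   # the two agree up to O(a^4) corrections
```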
we describe a simple model of evolution which incorporates the branching and extinction of species lines , and also includes abiotic influences . a first principles approach is taken in which the probability for speciation and extinction are defined purely in terms of the fitness landscapes of each species . numerical simulations show that the total diversity fluctuates around a natural system size which only weakly depends upon the number of connections per species . this is in agreement with known data for real multispecies communities . the numerical results are confirmed by approximate mean field analysis . 2 the bak - sneppen model was introduced to illustrate the possible role of self - organised criticality in evolving ecosystems ( bak 1993 , 1997 ) . it is a toy model that describes every species by a single scalar quantity , relating to the expected time before that species evolves to a new form . interactions in the ecosystem are incorporated by assuming that a species that evolves can alter the time taken for other species to evolve , such as those involved in predator - prey or host - parasite relationships . the model is said to be _ self - organised critical _ because it approaches a critical state without any apparent need for fine parameter tuning . consequently , it predicts that extinction events of magnitude should occur with a frequency proportional to , which is consistent with known paleobiological data ( sol 1996 ) . further evidence has come from analysis of the temporal distribution of extinctions , which exhibits `` 1/f noise '' , also predicted by the model ( sol 1997a ) . other simple models of macroevolution have now been devised which also claim agreement with the paleobiological data ( peliti 1997 ) . some of these are believed to be self - organised critical ( sol 1997b ) , although others exhibit power law behaviour via different mechanisms ( roberts 1996 , newman 1997 ) . all of these models have in common the assumption of a constant system size . this has been justified by assuming that each species occupies a single _ ecological niche _ , and that if a species is made extinct its niche is immediately filled by a similar , newly emerged species . the concept of an ecological niche refers to a set of conditions and interactions within the ecosystem that only a single species can satisfy . however , since the definition of a niche depends upon the other species , the total number of niches should be defined from within the system itself rather than being fixed to some arbitrary constant value for the benefit of computer simulation . models have been devised which are similar to the bak - sneppen model but allow the total number of species to vary in time . kramer _ et al . _ introduced a model in which the species are placed onto a branching phylogenetic tree structure , where each species only interacts with its closest relatives ( kramer 1996 ) . depending upon the choice of a parameter , the tree either expands indefinitely or stops growing after a finite time . in the model of wilke _ et al . _ the system refills after an extinction event at a rate according to a parameter , which is `` the maximal number of species that can be sustained with the available resources '' ( wilke 1997 ) . however , since the resources are themselves biotic , we believe that any such should be defined from within the system . 
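For reference, the original Bak-Sneppen rules that these variants build on can be sketched in a few lines: a fixed number of species on a ring each carry a single fitness barrier, and at every step the species with the smallest barrier, together with its two neighbours, receives new random barriers. The sketch below is the textbook formulation with illustrative parameter values; it does not include the variable system size, speciation or extinction mechanisms discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_species = 200                      # fixed system size in the original model
steps = 100_000
b = rng.random(n_species)            # one fitness barrier per species

for _ in range(steps):
    i = np.argmin(b)                 # least-fit species evolves first ...
    for j in (i - 1, i, i + 1):      # ... and drags its two neighbours along
        b[j % n_species] = rng.random()

# after a transient, barriers self-organise above a critical threshold (~2/3)
print("fraction of barriers above 0.6:", (b > 0.6).mean())
```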
in this paper , we derive and study a version of the bak - sneppen model in which the probabilities for speciation and extinction are defined purely in terms of the individual species . nonetheless , the total diversity fluctuates around a natural system size without the need for global control . in particular , there is no recourse to `` ecological niches '' . the rules of the model are based on careful considerations of the motion of species on their fitness landscapes . this is described in sec . [ s : se_desc ] , and the results of numerical simulations of the model are presented in sec . [ s : se_num ] . comparisons to real ecosystems are made in sec . [ s : se_real ] . approximate analysis of the model is presented in sec . [ s : se_anal ] which supports and enhances the numerical work . finally , we discuss our results in sec . [ s : se_disc ] .
the observational infrastructure enabling the study of solar physics has seen dramatic advances over the past five years . among these , we highlight a few : in 2009 , the two stereo spacecraft ( launched into earth - trailing and earth - leading orbits in 2006 ; ) passed the quadrature points relative to the sun - earth line .when combined with earth - perspective viewing , the following years enabled a view of the entire solar surface , for the first time in history showing us the evolution of an entire stellar atmosphere . the atmospheric imaging assembly ( aia ; ) on board the solar dynamics observatory ( sdo - launched in 2010 ; ) provides uninterrupted observing of the outer atmosphere of the entire earth - facing side of the sun at a cadence of 12s and a resolution of close to an arcsecond .combined with magnetography and helioseismology with the helioseismic and magnetic imager ( hmi ; ) , as well as sun - as - a - star spectroscopy in the euv with the euv variability experiment ( eve ; ) , this powerful sdo spacecraft sends down well over 1tb / day .the primary data and higher - level derivatives fill a data archive that now exceeds 7pb and holds over 96% of all data ever taken from space in the domain that focuses on the sun and the physics of the sun - earth connections .instrumentation with high spatial or spectral resolution is flown on the jaxa - led hinode / solar - b mission ( launched in 2006 ; ) and the nasa small explorer iris ( interface region imaging spectrograph , launched in 2013 ; ) that offer images with resolutions between 0.1 and 0.3arcsec , combined with spectroscopy in the visible and ultraviolet .rhessi , launched in 2002 , is continuing to provide unique spectroscopic images of the sun at high energy .these space - based instruments provide important access to the domain from photosphere to corona , critically complemented by ground - based telescopes and their instrumentation , as well as by rocket experiments such as hi - c . in the optical domain , the 1.6 m new solar telescope ( nst , ) at big bear in the u.s.a . and the 1.5-meter german gregor solar telescope at tenerife in spain have been in regular operation since 2012 .the 1-m new vacuum solar telescope ( nvst , ) at fuxian lake in southwest china is also in regular operation recently .crisp ( crisp imaging spectro - polarimeter ) at the swedish 1-m solar telescope ( sst ; ) reached 0.13arcsec spatial resolution and high polarimetric sensitivity aided by post - processing .all these telescopes have capacities close to their diffraction limit due to advanced designs and excellent seeing conditions .we have seen glimpses into very high - resolution ground - based coronal observing as well with , for the first time , synoptic observations of coronal stokes polarimetry in the near infrared by the coronal multichannel polarimeter ( comp , ) telescope . in the radio domain ,the recently commissioned chinese spectral radioheliograph ( csrh ; ) at mingantu in inner mongolia of china ( renamed as mingantu ultrawide spectral radioheliograph - muser ) is a radioheliograph operating with the widest frequency range ever reached from 400mhz to 15ghz , with a high temporal , spatial , and spectral resolution .recent non - solar dedicated radio arrays such as the murchison wide - field array ( mwa , ) and the low - frequency array for radio astronomy ( lofar , ) have obtained spectroscopic solar imaging at metric and lower frequencies .the recently upgraded karl g. 
jansky very large array ( evla ) has provided solar radio dynamic imaging spectroscopy of type iii bursts at decimeter wavelengths .even the millimeter domain and beyond is seeing major advances with the atacama millimeter / sub - millimeter array ( alma ; ) observatory coming on line for solar observing as it continues to complete its construction phase .the observations of tens of thousands of sun - like stars by the nasa kepler satellite is offering yet more insights into the physics of the sun , ranging from an improved understanding of internal structure and dynamics ( with asteroseismic techniques ) to a view of rare , extremely energetic flares .kepler data , with ground - based follow - up studies , suggest that solar flares may occur with energies up to several hundred times higher than observed directly in the past half century with space - based instrumentation .the volume of information that needs to be processed by solar researchers is increasing rapidly . in terms of data to be analyzed , we have definitely reached the petabyte era .this is true for observational data in the archives of sdo , but also in computer experiments in which single snapshot data dumps of the advanced codes can exceed a tb .a tendency towards open data policies means that we have ever more access to large volumes and a daunting diversity of data .that complicates finding , processing , and integrating data . infrastructure support for , for example ,the virtual solar observatory ( vso ) , solarsoft idl ( sswidl ) , and the heliophysics events knowledgebase and registry ( hek , her ) are critically important to enable efficient utilization of the growing data diversity and volume .the community lags in strategic thinking about these meta - infrastructural elements , both where current support and future expansion or replacement are concerned .there has also been a recent movement towards open - source data analysis software , with the development of the sunpy solar analysis environment in python .a similar flood of information is found with scientific publications , which exceed 2,000 refereed publications per year ( see above ) . here, the support infrastructure of ads is of critical value .the wide diversity of journals in which the works of colleagues are published requires subscription access to many publications , at costs that are increasingly hard to bear for relatively small research groups ; here , preprint services such as arxiv and maxmillennium play significant roles in making the community aware of what is going on .`` living reviews '' as offered by the free on - line journal living reviews in solar physics enable new researchers to understand the context of their work and established researchers in one sub - specialty a quick introduction into adjacent areas by their peers . 
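As an illustration of the open-source tooling mentioned above, a Virtual Solar Observatory query through SunPy might look roughly like the sketch below. The attribute names follow the commonly documented Fido interface, but details vary between SunPy versions, and the time window and wavelength are arbitrary choices rather than a recommended data set.

```python
import astropy.units as u
import sunpy.map
from sunpy.net import Fido, attrs as a

# search for SDO/AIA 171 A data in an illustrative one-hour window
result = Fido.search(a.Time("2014-03-29 17:00", "2014-03-29 18:00"),
                     a.Instrument("AIA"),
                     a.Wavelength(171 * u.angstrom))
print(result)

# download the matching files; local paths are returned
files = Fido.fetch(result)

# load the first file as a map and take a quick look
aia_map = sunpy.map.Map(files[0])
aia_map.peek()
```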
among the difficulties the solar activity community also facesis that many solar and inner - heliospheric events are studied by different groups and published in different journals .finding studies on a particular solar region of interest is hampered by inconsistent use of the characterizing spatio - temporal coordinates of events which may be found in abstract , main text , tables , appendices , and sometimes only marked in figures that are not machine - readable .the iau adopted a standard convention for this in 2009 ( ) in its sol standard ( short for solar object locator ) .its use is encouraged by journal editors including those of solar physics , the astrophysical journal , and the journal of geophysical research .broad usage of the sol standard would enable computer searches of related publications , enabling researchers to put new ( meta-)studies in broader contexts .for the highlights touched upon in this report , we opted for two criteria to identify topics of interest .one is to mention specific areas of note in the opinion of the organizing committee that may be new developments , are highly specialized yet significant , may concern new instrumentation or methods , or are otherwise deemed to be developments that may grow to see more activity in terms of publications in the near future . the other criterion we applied is to be guided towards topics of frequent activity by the community itself by looking at the most cited works . such a selection does introduce a bias toward the papers published early in the period reviewed , of course , but our purpose is not to identify the most - cited works per se , but rather to find the dominant themes within the set of these works that apparently resonate strongly within the community already within the 3-y window from which they are selected .we searched for the most - cited refereed publications from ads with the terms `` solar.activity '' , `` coronal.mass.ejection '' , `` solar flare '' , `` solar prominence '' , or synonym(s ) in their abstracts in the period 2012 - 2014 . within this set , we identify the following themes ( sorted alphabetically ) : active - region magnetic fields ; coronal thermal structure ; coronal seismology ; solar flares and eruptions ; and the sun - in - time related aspects of long - term solar variability including cosmic - ray modulation .we close with a collection of developments , discoveries , and surprises .the photospheric magnetic field of active regions forms the foundation of the overlying atmosphere .its evolution through emergence , displacement , and submergence of flux is key to driving eruptive and explosive events in the corona and into the heliosphere . until recently ,generally only line - of - sight magnetic field maps were available for this work .nowadays , regular vector - magnetic determinations from the observed polarization signals are available from sources that include hinode , solis , and sdo / hmi .the sensitivity of such vector field maps allows the detection of lasting changes in the photospheric field when comparing pre- to post - flare observations ( _ e.g. _ * ? ? 
?temporal resolution is so good that coronal events can be tightly bracketed to try to understand the causes of flares and eruptions , as well as the changes in energy and helicity involved ; for example , analyze a series of nonlinear force - free field models for the evolution in the energy of an active region around the time of a major eruption .the increasing availability of vector - magnetic data enables statistical studies on the properties of active regions and their activity heretofore possible only on line - of - sight magnetograms .for example , use a data base of vector field maps of over 2,000 active regions to train a machine - learning algorithm to attempt forecasting of large flares . review 3,000 vector field maps of some 60 regions to compare estimates of free energy with flare rates . analyze a sample of nearly 200 coronal mass ejections to reveal that flux , twist , and proxies for free energy tend to set upper limits to the speed of cmes emanating from the active regions studied . review hundreds of vector magnetograms of a sample of 80 active regions to study helicity and twist parameters to test for the influence of coriolis forcing .one long sought - after goal of flare and cme physics is to use models of the solar atmospheric field to understand why field configurations destabilize and under what conditions destabilization begins and proceeds . in a `` meta - analysis '' review , discuss recent developments , including the use of 3d field extrapolations that suggest that topological structures ( notably null points and hyperbolic flux tubes ) may be involved in the triggering and generally as tracers of reconnection processes early on .but even as the availability of vector - magnetic field maps becomes routine , the realization is growing that by themselves they appear insufficient to provide generally adequate lower - boundary conditions to understand the solar atmospheric activity .for example , a group of modelers using a variety of non - linear force - free field algorithms concludes in a series of studies ( see , _ e.g. _ * ? ? ?* ; * ? ? ?* ; * ? ? ?* and references therein ) that snapshot vector - magnetic maps do not suffice to obtain a reliable coronal field model with accurate energy or helicity measurements , with effects of instrumental resolution and field of view , as well as the model geometry ( cartesian vs. spherical ) all compounding the problems .new developments include the use of coronal loop observations to guide non - potential field models either from a single perspective as is possible currently ( _ e.g. _ * ? ? ? * ) or by using stereoscopic data from existing or future space- based instrumentation such as stereo , solar orbiter , and sdo ( _ e.g. _ * ? ? ?* ; * ? ? ?methods to follow the evolution of active - region fields based on data driving are also being developed , using the uninterrupted stream of ( vector ) magnetograms now available from space - based platforms ( _ e.g. _ * ? ? ?* ; * ? ? ?* based on the mhd - like magnetofrictional approximation ) , even reaching up to global mhd field models from near the solar surface into the heliosphere .others are developing mhd methods to study cme initiation based on observed surface field evolution ( _ e.g. _ * ? ? ? 
* ) .fundamental difficulties with these emerging methods include the difficulty in measuring the transverse field in areas of relatively low flux densities , the intrinsic 180-degree ambiguity in the field direction given a magnitude for the transverse component , the need to constrain the electric field to drive the model s evolution , and ultimately the quantitative comparison with solar observables to determine the verisimilitude of the model fields .even as our ability to observe and process rapidly growing data volumes on active region fields increases , we remain puzzled by the sun s atmospheric magnetic field : we have yet to understand how large amounts of energy can sometimes be stored to eventually be explosively converted into flares and cmes , while in other cases the energy is either not stored or is not explosively released . for a recent review of our understanding of the magnetic field in the solar atmosphere , and the variety of methods used to observe and study it , we refer to , and references therein .the x - ray solar corona is made of complex arrays of magnetic flux tubes anchored on both sides to the photosphere , confining a relatively dense and hot plasma .this optically thin plasma is almost fully ionized , with temperatures above 1mk , and emitting mostly in the extreme uv to x - rays with intensities proportional to the square of its density .significant progress was made in studying the physical and morphological features of coronal loops in a series of very successful euv and x - ray solar missions , since the first observational evidence of the presence of coronal loops provided by rocket missions in the mid-1960s . in recent years, several solar missions were launched and started producing spectacular images and data in different spectral ranges .the high spatial and temporal resolution of these instruments and the complementarity between these data sets , poses new challenges to understand the heating and dynamics of coronal loops .the launch in 2010 of the solar dynamics observatory ( sdo , * ? ? ?* ) allows continuous observation of the whole sun with high temporal and spatial resolution with aia , eve , and hmi .in particular , aia images span at least 1.3 solar diameters in multiple wavelengths , at about 1arcsec in spatial resolution and at a cadence of about 10seconds .coronal loops observed by aia have been found to be highly variable and highly structured in space , time , and temperature , challenging the traditional view of these loops as isothermal structures and favoring the case for multi - thermal cross - field temperature distributions . because the thermal conductivity is severely reduced in the directions perpendicular to the magnetic field, a spatially intermittent heating mechanism might give rise to a multi - threaded structure in the internal structure of loops .one possible example of such an intermittent heating process is mhd turbulence , which is expected to produce fine scale structuring within loops all the way beyond the resolution of current observations .a recent study by combining spectroscopic data from the euv imaging spectrometer ( eis , * ? ? ? * ) aboard hinode and sdo / aia images , shows that most of their loops must be composed of a number of spatially unresolved threads .more recently , the sounding rocket mission high - resolution coronal imager ( hi - c , * ? ? ?* ) , achieved an unprecedented spatial resolution 0.2arcsec in euv images . 
find that the finely structured corona , down to the 0.2 resolution , is concentrated in the moss and in areas of sheared field , where the heating is intense .this result suggests that heating is on smaller spatial scales than aia and that it could be sporadic .these results are consistent with differential emission measure ( dem ) analysis that study the distribution of temperature across loops . present a systematic study of the differential emission measure distribution in active region cores , using data from eis and the x - ray telescope ( xrt ) aboard hinode .their results suggest that while the hot active region emission might be close to equilibrium , warm active regions may be dominated by evolving million degree loops in the core .more recently , used xrt and eis data as well as images from sdo / aia , and found that cooler loops tend to have comparatively narrower dem widths . while the dem distribution of warm loops could be explained through bundles of threads with different temperatures ,cooler loops are consistent with narrow dems and perhaps even isothermal plasma .the authors then speculate that warm , multi - thermal , multi - threaded loops might correspond to plasma being heated , while cool loops are composed of threads which have had time to cool to temperatures of about a million degrees , thus resembling a single isothermal loop .the interface region imaging spectrograph ( iris , * ? ? ?* ) was launched in 2013 , and provides crucial information to understand coronal heating , by tracing the flow of energy and plasma from the chromosphere and transition region up to the corona .iris obtains high resolution uv spectra and images with high spatial ( 0.33arcsec ) and temporal ( 1s ) resolution .recent iris observations show rather fast variations ( 20 - 60s ) of intensity and velocity on spatial scales smaller than 500 km at the foot points of hot coronal loops .these observations were interpreted as the result of heating by electron beams generated in small and impulsive heating events ( the so - called coronal nanoflares ) .theoretical models of coronal heating have been traditionally classified into ac or dc , depending on the time scales involved in the driving at the loop foot points : ( a ) ac or wave models , for which the energy is provided by waves at the sun s photosphere , with timescales much faster than the time it takes an alfvn wave to cross the loop ; ( b ) dc or stress models , which assume that energy dissipation takes place by magnetic stresses driven by slow foot point motions ( compared to the alfvn wave crossing time ) at the sun s photosphere .although these scenarios seem mutually exclusive , two common factors prevail : ( i ) the ultimate energy source is the kinetic energy of the sub - photospheric velocity field , ( ii ) the existence of fine scale structure is essential to speed up the dissipation mechanisms invoked . for a coronal heating mechanism to be considered viable, the input energy must be compatible with observed energy losses in active regions , estimated by to be . 
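The logic of such DEM comparisons can be illustrated with a toy calculation: given a narrow (near-isothermal) and a broad (multi-thermal) DEM, channel intensities follow from integrating an instrument response over temperature. Everything below — the DEM shapes, widths and "response functions" — is invented for illustration and does not correspond to real AIA, EIS or XRT responses.

```python
import numpy as np

logT = np.linspace(5.5, 7.5, 500)        # log10 temperature grid
dlogT = logT[1] - logT[0]

def gaussian_dem(logT0, width, em_total=1.0):
    g = np.exp(-0.5 * ((logT - logT0) / width) ** 2)
    return em_total * g / (g.sum() * dlogT)

dem_narrow = gaussian_dem(6.0, 0.05)     # cool, near-isothermal loop
dem_broad = gaussian_dem(6.3, 0.30)      # warm, multi-thermal loop

def response(logT_peak, width=0.2):
    # made-up channel response peaking at a chosen temperature
    return np.exp(-0.5 * ((logT - logT_peak) / width) ** 2)

channels = {"ch_1MK": response(6.0), "ch_2MK": response(6.3), "ch_4MK": response(6.6)}

for name, resp in channels.items():
    i_narrow = (resp * dem_narrow).sum() * dlogT
    i_broad = (resp * dem_broad).sum() * dlogT
    print(f"{name}: narrow DEM -> {i_narrow:.3f}, broad DEM -> {i_broad:.3f}")
```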
used high - resolution observations of plage magnetic fields made with the solar optical telescope aboard hinode to estimate the vertical poynting flux being injected into the corona , obtaining values of about , which suffices to heat the plasma .coronal seismology , like terrestrial seismology and traditional helioseismology , provides a means of probing the background state of the medium through which the waves propagate .although the corona is not hidden from view like the interiors of earth and sun , it is very difficult to directly measure plasma and magnetic properties such as density , magnetic field , or transport coefficients .observations of oscillations in the coronal plasma can potentially provide powerful constraints on these quantities , for example through the alfvn velocity ( _ e.g. _ , see the _ living review _ of * ? ? ?but unlike the solar interior and terrestrial examples where waves are but perturbations on the background state , the coronal seismic waves are crucially important to the energy balance of their host medium .coronal heating and solar wind acceleration are widely thought to result at least in part from waves .an overview of coronal seismology as of 2012 is provided by .the last 58 years have seen an explosion in coronal wave studies due to the advent of new instrumentation , such as the ground - based coronal multichannel polarimeter ( comp , * ? ? ? * ) and the space - based atmospheric imaging assembly ( aia ) on the solar dynamics observatory ( sdo ) . both have revealed ubiquitous alfvn - like ( i.e. , transverse to the magnetic field ) coronal oscillations ( * ? ? ?* respectively ) , though the interpretation of their exact natures alfvn or kink is controversial .the term `` alfvnic '' is commonly used to encompass both wave types . in either case , the increased resolution of the aia observations has allowed the detection of sufficient oscillatory power to potentially power both the corona and solar wind , though precise mechanisms are not currently known with certainty .the concentration of oscillatory power in the few - minute period band , and in particular at around 5-minutes , strongly suggests a link with the sun s internal -mode global oscillations .consistency in the estimation of alfvn speed between seismic techniques and magnetic field extrapolation is confirmed by using aia data from a flare - induced coronal loop oscillation .time - distance techniques applied to comp observations indicate a preponderance of outward propagating waves over inward propagation , even in closed loop structures , suggesting _ in situ _ dissipation ( or mode conversion ) on a timescale comparable to the alfvn crossing time . disentangling alfvn and kink wavesis addressed in depth by . in the magnetically structured solar atmosphere , the only true alfvn wave is torsional , and there is considerable interest in identifying these in observations because of the amount of energy they could potentially contribute to the outer atmosphere . 
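A typical seismological estimate of this kind infers the field strength in an oscillating loop from its length, the fundamental kink period, and the plasma density, using the standard thin-tube kink-speed relation for equal internal and external field. The numbers below are illustrative rather than taken from a specific event, and the assumed helium correction and density contrast are rough conventions.

```python
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability [SI]
m_p = 1.67e-27              # proton mass [kg]

# illustrative loop parameters (not from a specific observation)
L = 2.0e8                   # loop length [m]
P = 300.0                   # fundamental kink period [s]
n_i = 1.0e15                # internal electron density [m^-3]
density_ratio = 0.1         # external/internal density ratio

rho_i = 1.2 * m_p * n_i     # mass density, assuming ~10% helium by number
c_k = 2.0 * L / P           # phase speed of the fundamental kink mode

# thin-tube kink-speed relation: c_k^2 = 2 B^2 / (mu0 (rho_i + rho_e))
B = c_k * np.sqrt(0.5 * mu0 * rho_i * (1.0 + density_ratio))
print(f"kink speed ~ {c_k/1e3:.0f} km/s, inferred field ~ {B*1e4:.1f} G")
```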
however , being incompressive , alfvn waves are not seen in intensity , and torsional alfvn waves are also difficult to detect in doppler .recently though , 0.33-arcsec high - resolution observations of the chromosphere and transition region ( tr ) with nasa s interface region imaging spectrograph ( iris ) , coordinated with the swedish solar telescope , have revealed widespread twisting motions across quiet sun , coronal holes , and active regions alike that often seem to be associated with heating .this must presumably extend to the corona as well .dissipation of alfvn waves in the solar atmosphere has long been thought to rely largely on the generation of alfvn turbulence through nonlinear interaction between counter - propagating waves .observations with comp showing enhanced high - frequency power near the apex of coronal loops possibly supports this view .the presumed acceleration of the solar wind by alfvn turbulent energy deposition poses the challenge of identifying and explaining counter - propagating alfvn waves in open magnetic field regions required to produce that turbulence . the presence of these counter - propagating waves using comp .these observations also provide evidence of a link to the -mode spectrum , which presumably relies on magnetoacoustic - to - alfvn mode conversion occurring in the lower atmosphere .the solar chromosphere is where most of the energy of a solar flare is dissipated and radiated in bright linear structures called flare ribbons , and recent work on this topic has been dominated by observations from iris .the high spectral resolution available with iris shows complex spectral line profiles in the si iv transition region line at 1394 and 1403 sometimes with two or three time - varying gaussian components within one iris spatial pixel .the hot `` coronal '' line of fe xxi ( 1354 ) on the other hand shows only a single strongly blue - shifted component originating in compact ribbon sources at the beginning of a flare , with no hot `` stationary component '' present .the presence of this line also demonstrates the high temperatures reached by the chromosphere in flares as deduced from hinode / eis observations . with eis, we see 1.5 - 3mk redshifts confirming earlier reports , as well as significant non - thermal broadening .this means that the momentum - conserving condensation front that is produced by flare heating and paired with the evaporation flow contains hot plasma . 
in turnthis implies that the condensation front originates relatively high in the chromosphere , otherwise such high temperatures would not be possible in the standard ( electron - beam - driven ) model of flares .this is somewhat at odds with recent measurements of element abundance which look more photospheric than coronal , suggesting that up - flowing evaporated material comes from low down in the chromosphere , below where the normal fractionation by first ionization potential sets in .flare optical ( or white - light wl ) emission continues to be difficult to observe and difficult to explain .optical foot points characterized using 3-filter observations with hinode / sot could be explained by modest temperature increases of the photospheric black body .the other main proposed radiation mechanism is recombination emission , and flare continuum in the near uv ( beyond the balmer edge ) observed with iris has an intensity consistent with this , but the tell - tale balmer jump has not been seen .co - spatial hard x - ray ( hxr ) and white - light flare sources have been observed using rhessi and sdo / hmi , and require that both emissions are produced a few hundred kilometres above the photosphere , at a height corresponding to the temperature minimum region , and beyond the range expected for hxr - emitting electrons arriving from the corona ( unless the chromosphere is under - dense compared to expectations ) .it may be possible to generate optical emission from the temperature minimum region , for example by modest heating due to ion - neutral damping , but the presence of non - thermal electrons in this plasma is harder to explain . also indicating the flare s impact on the dense lower atmosphere , there have been many more reports of `` sun quakes '' flare seismic emission ( e.g. * ? ? ?* ; * ? ? ?* ) but the mechanical driver remains uncertain ; there are correspondences with either hxr sources ( pressure pulses from electron - beam driven shocks ) or magnetic transients ( lorentz forces ) in some but not all cases .high - resolution ground - based flare observations using the new solar telescope at bbso show extraordinary fine structure , on a sub - arcsecond scale , in flare ribbons and footprints , setting the scene for future observations with dkist .examination of the changes in photospheric vector field occurring at the flare impulsive phase by using the hmi on sdo suggests strong , permanent and abrupt variations in the vertical component of the lorentz force at the photosphere consistent with a downward ` collapse ' of magnetic loops , and changes in the horizontal component mostly parallel to the neutral line in opposite directions on each side , indicating a decrease of shear near the neutral line .the brighter the flare ( as expressed in the goes class ) , the larger are both the total ( area - integrated ) change in the magnetic field and the change of lorentz force , although found that indicators of magnetic non - potentiality ( e.g. rotation , shear and helicity changes ) are more closely associated with flares see for example the study of flare - associated rotating sunspots by . in the corona , imaging and spectroscopic observations show compelling evidence for plasma flows suggestive of those expected around a coronal reconnection region ( e.g. * ? ? ?* ; * ? ? ?* ) . 
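The "modest temperature increase of the photospheric black body" interpretation of white-light footpoints can be made quantitative with the Planck function; the sketch below computes the optical continuum contrast produced by a small temperature enhancement. The wavelength and temperatures are illustrative values, not fits to any observed flare.

```python
import numpy as np

h = 6.626e-34    # Planck constant [J s]
c = 2.998e8      # speed of light [m/s]
kB = 1.381e-23   # Boltzmann constant [J/K]

def planck(wavelength_m, T):
    """Planck spectral radiance B_lambda(T)."""
    x = h * c / (wavelength_m * kB * T)
    return 2.0 * h * c**2 / wavelength_m**5 / np.expm1(x)

wavelength = 500e-9          # ~5000 A optical continuum
T_quiet = 5800.0             # quiet photosphere [K], illustrative
for dT in (100.0, 200.0, 300.0):
    contrast = planck(wavelength, T_quiet + dT) / planck(wavelength, T_quiet) - 1.0
    print(f"dT = {dT:5.0f} K -> continuum enhancement ~ {100 * contrast:.0f}%")
```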
the growing number of observed flares in archives and the increased coverage of flares in wavelength space and in domains from surface to heliosphere is enabling an improved assessment of energy budgets .for example , quantify energies of an ensemble of large , eruptive flares to find , among others that it appears that the energy in accelerated particles during the initial phases of the flare suffice to supply the energy eventually radiated in the flare across the spectrum , and that that total energy is statistically just under the bulk kinetic energy in associated coronal mass ejections .the central problem in solar flare theory remains the acceleration of the non - thermal electrons required to explain observed chromospheric hxr sources .observations with rhessi and sdo show that coronal electron acceleration can be very efficient ; find that essentially all electrons in a coronal source of density a few times are energized to above around 10kev .kappa distributions , which are found to be a better fit than a standard thermal plus non - thermal distribution in coronal hxr sources , are shown to arise naturally in an acceleration region when there is a balance between diffusive acceleration and collisions , in the absence of significant escape from the acceleration region . in the electron - beam model of a flare, electrons must of course escape the corona to produce the chromospheric hxr sources , and the number flux requirements have always been somewhat problematic .this may be alleviated if electrons are boosted by wave - particle interactions in the corona ; a quasi - linear simulation of coronal electron propagation shows that wave - particle interaction with the high phase - velocity langmuir waves generated by density inhomogeneities can accelerate beam electrons to higher energies , reducing the requirement on electron flux at energies of a few tens of kev by up to a factor ten .study a model in which electrons are re - accelerated in the chromosphere , concluding that this also reduces demands on putative electron beam fluxes ( however requirements on the flare chromosphere energy source , not directly addressed in this model , remain the same ) .chromospheric ( re- ) acceleration models may also produce electron angular distributions which are more isotropic , consistent with the angular distributions inferred from inversion of rhessi mean electron flux spectra , accounting for photospheric x - ray albedo , by .a completely different view by is that the electron energization in flares takes place in a parallel electric field that develops close to or in the chromosphere , in a region of anomalous resistivity , if energy is transported alfvnically specifically by inertial alfvn waves .however , the orthodoxy remains that energy transport is by electron beams , and this is now being tested against observations using beam - driven radiation hydrodynamics codes , the output of which can be compared with , for example , iris and eve and aia data , so far with mixed success .the understanding of the initiation and evolution of coronal mass ejections ( cmes ) has tremendously profited from the combination of coronagraphic observations with high cadence imaging in the euv together with the multi - perspective view provided by the stereo mission , as well as from increasingly sophisticated mhd and thermodynamic modeling ( for reviews see * ? ? ?* ; * ? ? 
?magnetic flux ropes play a key role in the physics of cmes .but there is a long debate whether flux ropes are pre - existing or formed during the eruption . observed the formation of a flux rope during a confined flare in high - cadence sdo / aia euv imagery . within hours after its formation , the flux rope became unstable and erupted resulting in a cme / flare event .for other cmes , for example those associated with quiescent prominence - cavity systems , a variety of observations indicate a pre - existing flux rope which may erupt bodily as a cme ( see e.g. figure 12 of ) . synthesized 16 years of coronagraphic and euv observations with mhd simulations , and found that flux ropes are a common structure in cmes ; in at least 40% a clear flux rope structure could be identified .in addition , they established a new `` two - front '' morphology consisting of a faint front followed by diffuse emission and the bright cme leading edge .the faint front is suggestive of a wave or shock front driven by the cme .the high - cadence six - passband sdo / aia euv imagery allows to perform differential emission measure ( dem ) analysis on solar flares and cmes to study their multi - thermal dynamics .it was shown that the cme core region , typically identified as the embedded flux rope , is hot ( 8 10 mk ) indicative of magnetic reconnection being involved .in contrast , the cme leading front has temperatures similar to the pre - eruptive corona but of higher densities suggesting that the front is a result of compression of the ambient coronal plasma .the hot flux rope is a key indicator to the physical processes involved in the early acceleration phase of the cme .the environment of cmes is important for the development of non - radial propagation . report that during solar minimum conditions cmes originating from high latitudes can be easily deflected toward the heliospheric current sheet , thus eventually becoming geo - effective . showed that coronal holes nearby the cme initiation site can cause strong deflections of cmes .modeling of the initiation of cmes continues to provide insights into the various forces and mechanisms that may be involved : initiation may involve the kink instability , sunspot rotation , reduction of tension of the overlying field , torus instability , and the breakout process . which dominates under which conditions and how commonly these occur remain topics of future work . since their discovery by the soho / eit instrument about 15 years ago, there has been a vivid debate about the physical nature of large - scale euv waves , i.e. whether they are true wave phenomena or propagating disturbances related to the magnetic restructuring due to the erupting cme . in the recent years, there has been tremendous progress in the understanding of these intriguing phenomena thanks to the unprecedented observations available , in particular the high - cadence euv imagery in six wavelengths bands by sdo / aia combined with the stereo multi - point view which allowed for the first time to follow euv waves in full - sun maps .there seems now relatively broad consensus that large - scale euv waves are often fast - mode magnetosonic waves ( of large amplitude or shocks ) , driven by the strong lateral expansion of the cme ( see reviews by * ? ? ?* ; * ? ? 
?a number of detailed case studies revealed that the cme lateral front and the euv wave appear originally co - spatial .but when the lateral cme expansion slows down , the euv wave decouples from the driver and then propagates freely , adjusting to the local fast - mode speed of the medium ( e.g. * ? ? ?* ; * ? ? ?three - dimensional thermodynamic mhd modeling of well observed euv waves also supports these findings , showing the outer fast - mode euv wave front followed by another bright front indicating the cme component .statistical studies of euv waves based on sdo / aia and stereo / euvi data revealed euv wave speeds that range from close to the fast magnetosonic speed in the quiet corona to values well above , the fastest ones exceeding 1000 km s . showed that at least half of the euv waves under study show significant deceleration , and a distinct anti - correlation between the starting speed and the deceleration , providing further evidence for a freely propagating fast - mode wave .the association rate of euv waves with type ii bursts , which are indicative of shock waves in the solar corona , may be as high as 50% .detailed case studies provided a number of further characteristics suggestive of the wave nature , such as reflection and refraction of euv waves at coronal holes and active regions , transmission into coronal holes as well as the initiation of secondary waves by the arrival of the wave at structures of high alfvn speed and for one case , have evaluated the euv wave s initial energy using a blast - wave approximation , to be around 10% of that of the associated cme . discovered quasi - periodic fast - mode wave trains within a large - scale euv wave with a periodicity of 2 min , running ahead of the laterally expanding cme flanks . presented the first simultaneous observations of the propagation of a large - scale euv wave and an h moreton wave , showing that the wave fronts evolve co - spatially indicating that they are both signatures of a fast magnetosonic wave pulse .the stereo mission , often in combination with soho / lasco , offers observations of cmes all the way from their origin on the sun , and of their propagation in interplanetary space to beyond 1au from outside the sun - earth line .these data combined with a multitude of other in - situ space missions have been vividly used to connect remote sensing cme observations to the field and plasma data observed by in - situ spacecraft , to constrain models of interplanetary cme propagation , to study cme - cme interaction , and to forecast cme arrival times and speeds with the ultimate aim improving the prediction of their geo - effectiveness . and tracked a flux rope all the way from its solar origin to its in - situ signatures at 1au using the stereo secchi euv imagers , coronagraphs and wide - angle heliospheric imagers .they establish that the cavity in the classic three - part cme is the feature that becomes the magnetic cloud , implying material ahead of the cavity is piled - up material from the corona or the solar wind .modeling of the interplanetary propagation of cmes makes use of empirical , analytic and numerical approaches .the analytical drag - based model " ( dbm ) is based on the hypothesis that the lorentz forces driving a cme eruption ceases in the upper corona and that beyond a certain distance the interplanetary cme ( icme ) dynamics is governed solely by the interaction of the icme and the ambient solar wind plasma . 
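A minimal numerical version of such a drag-based propagation calculation is sketched below: the ICME decelerates (or accelerates) toward the ambient wind speed through an aerodynamic-drag term and is integrated out to 1 AU. The drag parameter, ambient wind speed and launch conditions are illustrative values; the published DBM also admits an analytic solution, which is not reproduced here.

```python
import numpy as np

R_sun = 6.957e8          # solar radius [m]
AU = 1.496e11            # astronomical unit [m]

gamma = 2e-11            # drag parameter [1/m] (~0.2e-7 per km), illustrative
w = 400e3                # ambient solar-wind speed [m/s], illustrative
v = 1200e3               # CME launch speed [m/s], illustrative
r = 20.0 * R_sun         # starting heliocentric distance
t, dt = 0.0, 60.0        # elapsed time and time step [s]

while r < AU:
    dv = v - w
    a = -gamma * dv * abs(dv)     # dv/dt = -gamma (v - w)|v - w|
    v += a * dt
    r += v * dt
    t += dt

print(f"arrival at 1 AU after {t/3600.0:.1f} h with speed {v/1e3:.0f} km/s")
```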
from the observational side , a variety of reconstruction methods have been developed and applied to the heliospheric imager data including one- as well as two - spacecraft ( stereoscopic ) observations and inclusion of in - situ data and radio type ii bursts to better constrain the propagation direction , distance and speed profile of cmes in interplanetary space .these efforts result in comparable typical uncertainties in the cme arrival time of about half a day .studies using stereo heliospheric imager data and in - situ plasma and field measurements established that the interaction of cmes in the inner heliosphere , due to a faster cme launched after a slower one , seems to be a common and important phenomenon . the interaction process may cause deflection or merging of cmes , and either deceleration or acceleration of merged cme fronts ( including heating and compression ) . , reporting in a fast cme causing an extreme storm , speculate that the interaction between two successively launched cmes resulted in the extreme enhancement of the magnetic field of the ejecta that was observed in - situ near 1au . the unusually deep and temporally - extended activity minimum between sunspot cycles 23 and 24 , followed by a slowly rising and low amplitude cycle 24 , has led to renewed interest in the underlying causes of solar cycle fluctuations , including grand minima .much attention has focused on the so - called babcock - leighton solar cycle models , in which the regeneration of the solar surface dipole takes place via the decay of active regions .most extant versions of these dynamo models are geometrically ( axisymmetric ) and dynamically ( kinematic ) simplified , yet they do remarkably well at reproducing many observed solar cycle characteristics ( see , _ e.g. _ , karak _ et al ._ 2014 , and references therein ) .explanations for the extended cycle 23 - 24 minimum and low amplitude cycle 24 have been sought in terms of variations in the meridional flow expected to thread the solar convection zone ( upton and hathaway 2014 ) , and patterns of active region emergence and associated feedback on surface flows ( cameron _ et al ._ 2014 ; jiang _ et al .these successes of the babcock - leighton modelling framework have however been challenged by helioseismic measurements ( zhao _ et al ._ 2013 ; schad _ et al ._ 2013 ) indicating that the meridional flow within the convection zone has a far more complex cellular structure than assumed in the majority of these mean - field - like solar cycle models .possible avenues out of this conundrum are being explored ( see , _e.g. _ , hazra _ et al ._ 2014 ; belucz _ et al ._ 2015 ) .much effort has also been invested in implementing various form of data assimilation schemes in dynamo models , with the aim of achieving improved forecasting of the amplitude and timing of future sunspot cycles . at this point in timeno existing dynamo model - based forecasting scheme has done significantly better than the known precursor skill of the solar surface magnetic dipole moment at times of cycle minima , nonetheless progress is likely forthcoming in this area .global magnetohydrodynamical simulations of solar convection have also progressed rapidly in recent years , with many research groups worldwide now running simulations producing large - scale magnetic fields undergoing polarity reversals ( _ e.g. _ masada _ et al ._ 2013 ; nelson _ et al ._ 2013 ; fan and fang 2014 ; passos and charbonneau 2014 ; warnecke _ et al ._ 2014 ) . 
due to computing limitations all these simulationsrun in parameter regimes still far removed from solar interior conditions .nonetheless , many are producing tantalizingly solar - like features , including rotational torsional oscillations ( beaudoin _ et al ._ 2013 ) equatorward propagation of activity `` belts '' ( kpyl _ et al ._ 2012 ; warnecke _ et al ._ 2014 ; augustson _ et al ._ 2015 ) cyclic in - phase magnetic modulation of convective energy transport ( cossette _ et al ._ 2013 ) and grand minima - like interruptions of cyclic behavior ( augustson _ et al .one particularly interesting feature is the spontaneous production of magnetic flux tube - like structures within the convection zone , as reported in nelson _these were found to rise to the top of the simulation domain , partly through magnetic buoyancy , while maintaining their orientation in a manner compatible with hale polarity laws ( nelson _ et al .this has revived the idea that dynamo action could be wholly contained within the solar convective envelope , rather than relying on the tachocline for the formation and storage of the magnetic flux ropes eventually giving rise to sunspots .major efforts have also taken place in reinterpreting and reanalyzing historical observations of magnetic activity .noteworthy in this respect are the analyses of tilt angle patterns for bipolar magnetic regions ( see pavai __ 2015 , and references therein ) , and reanalysis of polar faculae data by munoz - jaramillo _ et al . _( 2012 ) . of particular importanceis the recently completed revision of the international sunspot number ( ssn ) time series .ssn values for the period 1947-present now account for a discontinuity in the manner of counting spot groups having occurred at the locarno reference station ( clette _ et al .correcting for this leads to a significant ( % ) decrease in ssn values during the space era .consequently , reconstructions of solar activity into the distant past using the ssn as a backbone to extrapolate space - borne measurements will need to be reassessed .radionuclides generated by the atmospheric impact of galactic cosmic rays ( gcrs ) provide a crucial proxy for the evolution in the sun s activity on time scales longer than a few years or a decade , depending on the radionuclide and its deposition in terrestrial natural archives .new ice core data on and tree ring data on have been combined to provide better understanding of climate impacts on these records : a joint analysis of composite tree ring data with ice cores from greenland and antarctica have enabled the separation of the common signal ( assumed to be dominated by solar and heliospheric variability ) from terrestrial variability . from this , we now have 94 centuries of data on a proxy for solar activity . but translating that proxy into details of solar activity that may drive space weather and terrestrial climate remains a challenge , as reviewed by , _e.g. _ . 
and then , of course , there were numerous surprising realizations and discoveries , in the real world as much as in the rapidly growing virtual world . we mention merely a small sampling in no particular order : a weak solar cycle following an uncommonly low and long solar minimum ; a series of x - class flares from ar12192 none of which were associated with a cme , contrasting with statistics to date ; use of a sun - grazing comet to probe the high corona and its connection to the innermost heliosphere ; an extremely large amount of dense , cool plasma falling back onto the sun following a massive filament eruption , providing a close - up example of distant accretion processes ; reports of enormously energetic flares from what would appear to be sun - like stars , and potential evidence for strong sep events associated with very energetic solar flaring in proxy records , albeit without obvious auroral counterparts ; the successful creation of realistic - looking sunspots in the computer ; the revision of sunspot numbers that suggests no long - term increase in solar activity occurred over the past few hundred years ; radiative magneto - convective simulations that have reached resolution scales of a few kilometers and suggest comparable energy densities in magnetic and kinetic reservoirs ; iris observations uncovering rapidly - evolving low - lying loops at transition region temperatures , heretofore inferred from emission measure studies but never yet observed ; a new model proposed for coronal heating based on magnetic gradient pumping ; non - potential field models for a continuously - driven corona achieved over a 16-year period ; a solar eruption in july of 2012 that would likely have powered a century - level extreme geomagnetic storm , as for the carrington - hodgson flare of 1859 , had it enveloped earth ; the realization that stokes theorem combined with the induction equation could explain why polar fields should be a good indicator for the strength of the next sunspot cycle ; the simulation of a sequence of homologous cmes and demonstration of so - called `` cannibalistic '' behavior ; the discovery of nested toroidal line - of - sight flows and `` lagomorphic '' coronal polarimetric signatures within coronal cavities , indicating the presence of magnetic flux ropes ; rapidly rotating magnetic structures ( `` magnetic tornadoes '' ) that provide a channel of energy and twist from the solar surface to the corona ; and the x - class flare sol2014 - 03 - 29 t , which made history by becoming `` the best - observed flare of all time '' ( according to nasa : http://www.nasa.gov/content/goddard/nasa-telescopes-coordinate-best-ever-flare-observations/ ) as the ground - based dunn solar telescope and the space - based iris , rhessi , and sdo all observed it in detail .
after more than half a century of community support related to the science of `` solar activity '' , iau s commission 10 was formally discontinued in 2015 , to be succeeded by c.e2 with the same area of responsibility . on this occasion , we look back at the growth of the scientific disciplines involved around the world over almost a full century . solar activity and fields of research looking into the related physics of the heliosphere continue to be vibrant and growing , with currently over 2,000 refereed publications appearing per year from over 4,000 unique authors , publishing in dozens of distinct journals and meeting in dozens of workshops and conferences each year . the size of the rapidly growing community and of the observational and computational data volumes , along with the multitude of connections into other branches of astrophysics , pose significant challenges ; aspects of these challenges are beginning to be addressed through , among others , the development of new systems of literature reviews , machine - searchable archives for data and publications , and virtual observatories . as customary in these reports , we highlight some of the research topics that have seen particular interest over the most recent triennium , specifically active - region magnetic fields , coronal thermal structure , coronal seismology , flares and eruptions , and the variability of solar activity on long time scales . we close with a collection of developments , discoveries , and surprises that illustrate the range and dynamics of the discipline . * * = division ii + commission 10 solar activity + + = president carolus j. schrijver + vice - president lyndsay fletcher + past president lidia van driel - gesztelyi + organizing committee ayumi asai , paul s. cally , + paul charbonneau , sarah e. gibson , + daniel gomez , siraj s. hasan , + astrid m. veronig , yihua yan + .overview of commission 10 leadership and triennial reports ( as available in ads ) from 1961 onward , i.e. for the period that c10 operated under the banner of `` solar activity '' or `` activit solaire '' . reports flagged with an asterisk appear not to be available on line . [ cols="<,<,<",options="header " , ] the community exchanges information efficiently at scientific meetings . fig . [ fig : meetings ] shows that the number of such events tends to increase over the years , with marked fluctuations from year to year , averaging around 40 meetings per year over the past decade . the symposia supported by the iau through commission 10 in the last three years are listed in table [ tab : meetings ] . summer schools in which new generations of researchers are given broader or deeper views into the community s activities appear to grow slowly in frequency , trending towards about five events per year ( dashed line in fig . [ fig : meetings ] ) . although the scientific community working on `` solar activity '' appears healthy and growing , there is a clear need to improve how we communicate the excitement about our science to the general public : for example , only 8 in the most recent 400 press releases and news articles listed by the aas were related to some aspect of solar activity .
is designed to estimate the location and velocity of objects in the surrounding space .the radar performs sensing by analyzing correlations between sent and received ( analog ) signals . in this notewe describe the digital radar , i.e. we assume that the radar sends and receives finite sequences .the reduction to digital setting can be carried out in practice , see for example , and . throughout this notewe denote by the set of integers equipped with addition and multiplication modulo .we denote by the vector space of complex valued functions on equipped with the standard inner product , and refer to it as the _ hilbert space of sequences_. we use the notation to denote the unit complex sphere in : we describe the discrete radar model which was derived in .we assume that a radar sends a sequence and receives as an echo a sequence .the relationship between and is given by the following equation : = h(s)[n ] + \mathcal{w}[n ] , \mbox { } n \in \mathbb{z}_n,\ ] ] where , called the _ channel operator _, is defined by . ] = \sum_{k=1}^{r } \alpha_k e(\omega_k n ) s[n - \tau_k ] , \mbox { } n \in \mathbb{z}_n,\ ] ] with s the complex - valued attenuation coefficients associated with target , , the time shift associated with target , the frequency shift associated with target , and denotes a _ random _ noise .the parameter will be called the _sparsity _ of the channel .the time - frequency shifts are related to the location and velocity of target .we denote the plane of all time - frequency shifts by .we denote by the probability measure on the sample space generated by the random noise and attenuation coefficients .let us elaborate on the constraint .in reality . however , we can rescale the received sequence to make .the rescaling will not change the quality of the detection , as evident from section [ prm ] .we make the following assumption on the distribution of : _ assumption _ ( * square root cancellation * ) : for every , there exists , such that for any vectors we have note that an additive white gaussian noise ( awgn ) of a constant , i.e. , independent of , signal - to - noise ratio ( snr ) satisfies this assumption .in addition , we make the following natural assumption on the distribution of the attenuation coefficients of the channel operator : _ assumption _ ( * uniformity * ) : for any measurable subset we have where denotes the unique up to scaling non - negative borel measure on which is invariant under all rotations of , i.e. , elements in .the last natural assumption that we make is the independence of the noise and of the vector of attenuation coefficients of the channel operator : _ assumption _ ( * independence * ) : the random sequences and are independent .the main task of the digital radar system is to extract the channel parameters , using and satisfying ( [ discr_channel ] ) .one of the most popular methods for channel estimation is the pseudo - random ( pr ) method . in this notewe describe the pr method and analyze its performance .a classical method to estimate the channel parameters in ( [ discr_channel ] ) is the _ pseudo - random method _it uses two ingredients - the ambiguity function , and a pseudo - random sequence . 
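To make the discrete channel model above concrete, here is a minimal simulation sketch of the sparse delay-Doppler channel with additive noise. It assumes the usual convention e(w n) = exp(2*pi*i*w*n/N), a random-phase unit-norm probing sequence, and illustrative values of N, the sparsity, and the noise level; none of these numbers are taken from the paper.

```python
import numpy as np

def apply_channel(s, shifts, alphas, noise_std, rng):
    """Apply the sparse delay-Doppler channel: r[n] = sum_k alpha_k e(w_k n) s[n - tau_k] + w[n],
    with e(w n) = exp(2*pi*i*w*n/N) (assumed convention) and cyclic time shifts."""
    N = len(s)
    n = np.arange(N)
    r = np.zeros(N, dtype=complex)
    for (tau, omega), alpha in zip(shifts, alphas):
        r += alpha * np.exp(2j * np.pi * omega * n / N) * np.roll(s, tau)
    # additive white complex Gaussian noise; the per-coordinate level is chosen so that the
    # total noise power is comparable to the echo power (constant-SNR regime of the text)
    r += noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return r

rng = np.random.default_rng(0)
N = 251                       # sequence length (illustrative)
sparsity = 3                  # number of targets
shifts = [(int(rng.integers(N)), int(rng.integers(N))) for _ in range(sparsity)]
alphas = rng.standard_normal(sparsity) + 1j * rng.standard_normal(sparsity)
alphas /= np.linalg.norm(alphas)                     # attenuation vector on the unit sphere
s = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)  # unit-norm random-phase probing sequence
echo = apply_channel(s, shifts, alphas, noise_std=0.5 / np.sqrt(N), rng=rng)
print("true time-frequency shifts:", shifts)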
in order to reduce the noise component in ( [ discr_channel ] ) , it is common to use the ambiguity function that we are going to describe now .we consider the time - frequency shift operators which act on by [ n]= e(\omega n)\cdot f[n-\tau ] , \mbox { } n \in \mathbb{z}_n \label{ho}\]]the _ ambiguity function _ of two sequences is defined as the matrix =\left\langle \pi ( \tau , \omega ) f , g\right\rangle , \text { \ } \tau , \omega \in \mathbb{z } _ { n}. \label{af}\ ] ] [ fc]the restriction of the ambiguity function to a line in the time - frequency plane , can be computed in arithmetic operations using fast fourier transform . for more details , including explicit formulas see section v of .overall , we can compute the entire ambiguity function in operations .we say that a norm - one sequence is -_pseudo - random , _ figure [ prfigure ] for illustration if for every we have _ _ _ _\right\vert \leq b/\sqrt{n}. \label{pr}\]]there are several constructions of families of pseudo - random ( pr ) sequences in the literature see and references therein . for pseudo - random sequence.,height=151 ] consider a pseudo - random sequence , and assume for simplicity that in ( [ pr ] ) .then we have \label{prm } \\ & = & \left\ { \begin{array}{c } \widetilde{\alpha } _ { k}+\tsum\limits_{j\neq k}\widetilde{\alpha } _{ j}/\sqrt{n},\text { \ if } \left ( \tau , \omega \right ) = \left ( \tau _ { k},\omega _ { k}\right ) , \text { } 1\leq k\leq r ; \\ \tsum\limits_{j}\widehat{\alpha } _ { j}/\sqrt{n},\text { \ \ \ \ \ otherwise , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } \end{array}\right .\notag\end{aligned}\]]where are certain multiples of the s by complex numbers of absolute value less or equal to one . in particular , we can compute the time - frequency parameter if the associated attenuation coefficient is sufficiently large , i.e. , it appears as a peak " of let .we say that at the ambiguity function of and has -peak , if \right| \geq n^{-1/2 + \delta}.\ ] ] below we describe see figure [ three - peaks]the pr method . for a pseudo - random sequence , and a channel of sparsity three.,height=151 ] * input : * : : pseudo - random sequence , the echo , and a parameter . * output : * : : channel parameters .compute on and return those time - frequency shifts at which the -peaks occur .we call the above computational scheme the * pr method with parameter *. notice that the arithmetic complexity of the pr method is using remark [ fc ] .first , we introduce two important quantities that measure the performance of a detection scheme .these are the probability of detection and the expected number of false targets .then , we formulate the main result of this note which provides a quantitative statement about the performance of the pr method .assume that we have targets out of the possible see section model .also , assume that some random data is associated with these targets and influencing the performance of a detection scheme .for example , in our setting the random data consists of ( i ) the attenuation coefficients associated with the targets , and ( ii ) the noise . in general, we model the randomness of the data by associating a probability space to the set of targets . 
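Before turning to the performance measures, the following sketch shows the ambiguity-function computation and the peak-picking step of the pr method described above. Each fixed-delay row of the ambiguity table is obtained with one FFT, consistent with the quoted overall cost; the single-target self-test at the end uses illustrative parameters of my own choosing.

```python
import numpy as np

def ambiguity(f, g):
    """A[tau, omega] = <pi(tau, omega) f, g> = sum_n e(omega n) f[n - tau] conj(g[n]).
    Each row (fixed tau) is one length-N FFT, so the full table costs O(N^2 log N)."""
    N = len(f)
    A = np.empty((N, N), dtype=complex)
    for tau in range(N):
        h = np.roll(f, tau) * np.conj(g)
        A[tau, :] = N * np.fft.ifft(h)        # = sum_n h[n] exp(+2*pi*i*omega*n/N)
    return A

def pr_method(s, echo, delta=0.25):
    """Return the time-frequency shifts at which the ambiguity function of the
    pseudo-random sequence s and the echo has a delta-peak, i.e. exceeds N**(-1/2 + delta)."""
    N = len(s)
    A = ambiguity(s, echo)
    peaks = np.argwhere(np.abs(A) >= N ** (-0.5 + delta))
    return [(int(t), int(w)) for t, w in peaks]

# minimal self-test: one target, no noise; the true shift should be returned
rng = np.random.default_rng(1)
N = 101
s = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)
echo = np.exp(2j * np.pi * 7 * np.arange(N) / N) * np.roll(s, 5)
print(pr_method(s, echo, delta=0.35))          # expect [(5, 7)]
```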
for every in the space denote by and , the number of true and false targets detected by the scheme , respectively .we define the _ probability of detection _ by and the _ expected number of false targets _ by the main result of this note is the following : [ main_thm ] assume the channel operator ( [ operator ] ) satisfies the uniformity , square root cancellation , and independence assumptions . then for , where , we have , and , as , for the pr method with parameter .we suspect that the true rate of convergence in , and , as , is polynomial .in fact , our proof of theorem [ main_thm ] confirms that the rate of convergence is at least polynomial .the obtained estimates in theorem [ main_thm ] show that the pr method is effective in terms of performance of detection , in the regime , for .we would like to note that if , for , then the performance of the pr method deteriorates since the noise influence becomes dominant .before giving a formal proof , we provide a sketch .to detect the parameters of the channel operator , the pr method evaluates the ambiguity function of , and at : let us denote by the -th cross term , and by the -th noise component .then we have the parameter is detectable by the pr method if the main term is much larger than the -th cross term , and the noise component . for a random point on the unit sphere , by concentration "most of its coordinates are of absolute value approximately equal to .thus , if , the magnitude of most of s is greater than .another instance of the concentration phenomenon guarantees that for most channels , the magnitude of the cross term is smaller than .finally , the square root cancellation assumption on the noise guarantees that the magnitude of the noise term is much smaller than .we begin the formal proof of the theorem with auxiliary lemmata . in section [ proofs_section ]we prove these statements .[ large_coord ] let be a uniformly chosen point on , and fix .there exists ( independent of and ) , such that for every we have [ prob_lem ] let be events in a probability space , such that , , for some .then for the event where we have [ mixed_terms ] let , and let , , be vectors satisfying then for any , there exists , such that for a uniformly chosen random point we have assume that , for some .let , where is a ( )-pseudo - random sequence in , is a channel of sparsity with uniformly distributed attenuation coefficients given by ( [ operator ] ) , and satisfies the square root cancellation assumption .we denote , and assume that at the receiver we perform pr method with parameter . *( a ) * proof of as " .we consider two cases . * case 1 .* .denote by for . since by lemma [ large_coord ] there exists such thatwe have therefore , for sufficiently large , we have denote by by lemma [ prob_lem ] we have since , we have . therefore , by lemma [ mixed_terms ] , there exists such that it follows from ( [ prm ] ) , ( [ eq_1 ] ) , ( [ eq_2 ] ) , the square root cancellation and independence assumptions , that with probability greater or equal than , at least of the channel parameters of are detectable .the latter implies that for sufficiently large .therefore we have as .* case 2 . * .by lemma [ large_coord ] , there exists such that by cauchy - schwartz inequality we have for all : it follows from ( [ prm ] ) , square root cancellation assumption on the noise , and the last two inequalities that for sufficiently large , all channel parameters of the operator are detectable with probability greater or equal than . 
therefore ,for sufficiently large we have the latter implies that as . *( b ) * proof of as " .* . by lemma [ mixed_terms ], there exists such that we have where is the set of all channel parameters of .it follows from ( [ prm ] ) , the square root cancellation and independence assumptions , and the last inequality that with probability greater or equal than the pr method will not detect any wrong channel parameters .the latter implies that as .* case 2 . * .by cauchy - shwartz inequality we have that for every : it follows from ( [ prm ] ) , the square root cancellation assumption on the noise , and the last inequality that there exists such that for sufficiently large we have the latter implies that the pr method with parameter satisfies as .we identify the borel probability space on invariant under all rotations with the borel probability space of the real unit sphere invariant under all rotations . recall that without loss of generality , it is enough to prove the statement for .let .we use the notation for the probability space on invariant under all rotations in . for any , and any dimension , we denote the real sphere of radius in by : let . by fubini s theorem and using the homogeneity of the lebesgue measure we get is well known that therefore , for large enough we have finally , the containment together with last inequality imply the statement of the lemma .let .we proceed similarly to the proof of lemma [ large_coord ] .since and we conclude that there exists such that if , then there exists such that this implies that there exists such that by the rotation invariance of the lebesgue measure on , it follows that there exists such that for any set of directions we have the latter implies the statement of the lemma .* acknowledgements .* we are grateful to our collaborators a. sayeed , k. scheim and o. schwartz , for many discussions related to the research reported in these notes . also , we thank anonymous referees for numerous suggestions . and , finally , we are grateful to g. dasarathy who kindly agreed to present this paper at the conference . 99 fish a. , gurevich s. , hadani r. , sayeed a. , and schwartz o. , delay - doppler channel estimation with almost linear complexity. _ ieee transactions on information theory , volume 59 , issue 11 , 7632 - 7644 , 2013 ._ gurevich s. , hadani r. , and sochen n. , the finite harmonic oscillator and its applications to sequences , communication and radar ._ ieee transactions on information theory , vol .9 , september 2008 . _
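As a quick numerical sanity check of the two concentration phenomena invoked in the proof sketch above, the snippet below samples (i) a uniform point on the complex unit sphere and verifies that most of its coordinates have magnitude of order 1/sqrt(r), and (ii) white noise of unit total power and verifies that its inner product with a fixed unit vector is of order 1/sqrt(N). The dimensions and the constant 0.3 are illustrative choices, not values from the lemmas.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) a uniform point on the unit sphere in C^r has most coordinates of size ~ 1/sqrt(r)
r = 20
alpha = rng.standard_normal(r) + 1j * rng.standard_normal(r)
alpha /= np.linalg.norm(alpha)                 # uniform on the sphere by rotation invariance
frac_large = np.mean(np.abs(alpha) >= 0.3 / np.sqrt(r))
print("fraction of coordinates >= 0.3/sqrt(r):", frac_large)

# (ii) square-root cancellation: |<w, v>| ~ 1/sqrt(N) for noise w of unit average power
N = 4001
trials = 200
v = np.exp(2j * np.pi * rng.random(N)) / np.sqrt(N)      # fixed unit-norm vector
w = (rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / np.sqrt(2 * N)
inner = np.abs(w @ np.conj(v))
print("mean |<w, v>| =", inner.mean(), " vs 1/sqrt(N) =", 1 / np.sqrt(N))
```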
the performance of the pseudo - random method for radar detection is analyzed . the radar sends a pseudo - random sequence of length , and receives an echo from targets . we make the natural assumptions of uniformity on the channel and of square root cancellation on the noise . then for , where , the following holds : ( i ) the probability of detection goes to one , and ( ii ) the expected number of false targets goes to zero , as goes to infinity .
we consider here spin - half nuclei , quantum bits ( qubits ) , whose computation - basis states align in a magnetic field either in the direction of the field ( ) , or in the opposite direction ( ) . several such bits ( of a single molecule ) represent a binary string , or a register . a macroscopic number of such registers / molecules can be manipulated in parallel , as done , for instance , in _ nuclear magnetic resonance ( nmr)_. from the perspective of _ nmr quantum computation ( nmrqc ) _ ( for a recent review see ) , the spectrometer that monitors and manipulates these qubits / spins can be considered a simple `` quantum computing '' device ( see for instance refs . and and references therein ) .the operations ( gates , measurements ) are performed in parallel on many registers .the probabilities of a spin to be up or down are and , where is called the _ polarization bias_. , where is the boltzmann constant , is the gyromagnetic coefficient , is the intensity of the magnetic field and is the temperature of the heat bath . for small values , at equilibrium at room temperature , nuclear - spin polarization biases are very small ; in common nmr spectrometers is at most .while the polarization bias may be increased by physical cooling of the environment , this approach is very limited for liquid - state nmr , especially for in - vivo spectroscopy . to improve polarization , including selective enhancement , various `` effective cooling '' methods increase the polarization of one or more spins in a transient manner ; the cooled spins re - heat to their equilibrium polarization as a result of thermalization with the environment , a process which has a characteristic time of .an interesting effective cooling method is the reversible ( or unitary ) entropy manipulation of srensen and schulman - vazirani , in which data compression tools and algorithms are used to compress entropy onto some spins in order to cool others .such methods are limited by shannon s entropy bound , stating that the total entropy of a closed system can not decrease .the reversible cooling of schulman - vazirani was suggested as a new method to present scalable nmrqc devices .the same is true for the algorithmic cooling approach that is considered here ._ algorithmic cooling ( ac ) _ combines the reversible effective - cooling described above with thermalization of spins at different rates ( ) .ac employs slow - thermalizing spins ( _ computation spins _ ) and rapidly thermalizing spins ( _ reset spins _ ) .alternation of data compression steps that put high entropy ( heat ) on reset spins , with thermalization of hot reset spins ( to remove their excess entropy ) can reduce the total entropy of suitable spin systems far beyond shannon s bound .let us describe in detail the three basic operations of ac . in some algorithms ,the reset spins can be used directly in the compression step .these algorithms have already led to some experimental suggestions and implementations . an efficient and experimentally feasible ac technique , termed_ `` practicable algorithmic cooling ( pac ) '' _ , combines _ polarization transfer ( pt ) _ , reset , and rpc among three spins called _ 3-bit - compression ( 3b - comp)_. the subroutine _ 3b - comp _ may be implemented in several ways .one implementation is as follows : + _ exchange the states .+ leave the other six states ( , , etc . ) unchanged . 
_+ if 3b - comp is applied to three spins \{c , b , a } with biases , then spin will acquire the bias : pac1 and pac2 consider _ purification levels _ , where spins are cooled by a factor of at each successive level ; at the purification level , ) with biases that are , therefore calculations are usually done to leading order in . ] .the practicable algorithm pac2 uses any odd number of spins , : one reset spin and computation spins ( see appendix [ app : alg - pac ] for a formal description of pac2 ) .pac2 cools the spins such that the coldest spin can , ideally , attain a bias of .when all spins are cooled , the following biases are reached : this proves an exponential advantage over the best possible reversible cooling techniques ( e.g. , of refs . and ) , which are limited to a cooling factor of . as pac2 can be applied to as few as 3 spins ( ) to 9 spins ( ) , using nmrqc tools , it is potentially suitable for near future applications .pac is simple , as all compressions are applied to three spins and use the same subroutine , _3b - comp_. pac2 always applies compression steps ( _ 3b - comp _ ) to three identical biases . in general , for three spins with biases , , where all biases , 3-bit compression would attribute spin with the bias : using eq .[ eq:3b - comp ] , several algorithms were designed , including pac3 , which always applies 3b - comp to bias configurations of the form ( see formal description in appendix [ app : alg - pac ] ) .when all spins are cooled , the following biases are obtained : the framework of pac was extended to include multiple cycles at each recursive level .consider the simplest case , whereby a three - spin system with equal bias , is cooled by repeating the following procedure times : 1 . _3b - comp _ on increasing the bias of .2 . reset the biases of and back to .the bias of spin c increases to .thus the biases are asymptotically obtained for the three spins , as noted first by fernandez .this variant , where , is the first exhaustive spin - cooling algorithm . by generalizing this `` fernandez algorithm '' to more spins, we obtained a bias configuration that asymptotically approaches the fibonacci series ; when all spins are cooled , the following biases are reached : algorithms based on the 3b - comp were recently reviewed . following the fibonacci algorithm ,a related algorithm was described , which involves compression on four adjacent spins ; the tribonacci algorithm reaches biases in accord with the tribonacci series ( also called 3-step fibonacci series ) , where each term is the sum of the three previous terms .similarly , one can obtain general -step fibonacci series , where each term is the sum of the previous elements , respectively .summing over all previous terms produces the geometric sequence with powers of two : the ac that reaches this series was termed , `` all - bonacci ac '' .we conclude that , with the same number of spins , exhaustive ac reaches a greater degree of cooling than pac algorithms . for example , the cooling factor achieved by fibonacci with 9 spins is .the corresponding enhancement for pac2 is , and for pac3 it is about . the _ partner pairing algorithm ( ppa ) _ was shown to achieve a superior bias than all previous cooling algorithms , and was proven to be optimal .the all - bonacci algorithm appears to produce the same cooling as the ppa : ( this was verified numerically for small biases ) . in all - bonacci , with spins , the coldest spin approaches a bias of ( as long as ) . 
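As a compact numerical companion to the compression-based algorithms above, the sketch below iterates the leading-order 3-bit-compression update (the target spin acquires half the sum of the three input biases), the resulting PAC2 ladder for equal biases, and the k-step Fibonacci ladders whose special cases are the Fibonacci, tribonacci and all-bonacci (powers-of-two) configurations. The exact update quoted in the comment, and the convention that the two lowest spins stay at the reset bias, are my own assumptions for illustration; all numbers are in units of the initial bias.

```python
def comp3(e1, e2, e3):
    """Leading-order 3B-comp: the target spin acquires half the sum of the three biases.
    (For the |100> <-> |011> exchange on a product state the exact target bias would be
    (e1 + e2 + e3 - e1*e2*e3) / 2, which reduces to the same expression for small biases.)"""
    return (e1 + e2 + e3) / 2.0

def pac2_ladder(eps0, levels):
    """Bias of the coldest spin after each PAC2 purification level:
    one 3B-comp on three equal biases per level, i.e. a factor 3/2 per level."""
    biases = [eps0]
    for _ in range(levels):
        biases.append(comp3(biases[-1], biases[-1], biases[-1]))
    return biases

def k_bonacci_ladder(n_spins, k, eps0=1.0):
    """Asymptotic bias ladder of the k-step Fibonacci algorithms (in units of eps0):
    the two lowest spins stay at eps0 and each further spin approaches the sum of the
    previous k biases; k = 2 is Fibonacci, large k gives all-bonacci (powers of two)."""
    biases = [eps0, eps0]
    for _ in range(n_spins - 2):
        biases.append(sum(biases[-k:]))
    return biases

print(pac2_ladder(0.01, 4))        # 0.01 * (3/2)**j per purification level
print(k_bonacci_ladder(9, 2))      # Fibonacci ladder: 1, 1, 2, 3, 5, 8, 13, 21, 34
print(k_bonacci_ladder(9, 3))      # tribonacci ladder
print(k_bonacci_ladder(9, 99))     # all-bonacci ladder: 1, 1, 2, 4, ..., 2**(n-2)
```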
for the ppa, a calculation of the cooling that yields was not performed , yet it was proven that .see for a proof and see for calculations in the case of .the focus of this paper is on simple algorithms that are semi - optimal and also practicable , owing to their exclusive reliance on simple logic gates .such algorithms could be implemented in the lab if proper physical conditions ( e.g. , ratios ) are achieved , and thus they could have practical implications in the near future .in contrast , the optimal algorithms mentioned above , the ppa and the all - bonacci , do not belong to this category .the all - bonacci requires an unreasonable number of reset steps , and the permutations required in the sorting step of the ppa were not yet translated into simple 2-spin and 3-spin gates .the next section describes , a new semi - optimal algorithm based on pac , and its special variant , mpac , which cool spins such that the coldest spin is asymptotically cooled by a factor of .the optimal algorithms , the ppa and all - bonacci , need about half the spins ( more precisely , spins ) in order to cool ( asymptotically ) the coldest spin to ; we thus see that mpac is semi - optimal in the sense that it needs twice as much spins to reach nearly the same optimal bias .section [ sec : fib - variants ] discusses simple variants of the fibonacci algorithm : -fibonacci , and mfib , which fixes the number of cycles at each recursive level .section [ sec : comparison ] compares the new algorithms , mpac and mfib , to previous cooling algorithms , including the ppa , fibonacci , all - bonacci , and pac algorithms .we show that practicable versions of mpac and mfib ( with small ) reach significant cooling levels for 5 - 11 spin systems ; moreover , in the case of a 5-spin system , semi - optimal cooling levels ( half the optimal cooling ) are attained with reasonable run - time .section [ sec : pure - qubits ] provides some analysis of sopac in case one would like to purify qubits as much as needed in order to obtain scalable nmrqc .we examine the resources required to obtain a highly polarized spin .[ sec : mpac - variants ] the fernandez and fibonacci algorithms described above illustrate the improved level of cooling attainable ( asymptotically ) by repeated compressions that involve the target spin .practicable cooling algorithms that perform a small number of such cycles ( at each recursive level ) offer reasonable cooling at a reasonable run - time . in this sectionwe describe mpac , a new pac - based algorithm , which approaches nearly half the optimal cooling level with only a small number of cycles .the mpac algorithm that we now present is a generalization of pac2 ( see appendix [ app : alg - pac ] for details on pac2 ) . is the purification level , is the bit index , stands for 3b - comp on spins which increases the bias of , denotes a polarization transfer from bit to bit ( or for simplicity , just a swap between their states ) , and is a reset , setting the bias of the reset spin , , to .the algorithm only uses elementary gates : a single gate operates either on a single spin or on a pair of spins ( e.g. , pt between adjacent spins ) , or on three spins ( 3b - comp and pt between next - to - nearest neighbors ) . 
takes spins at equilibrium and attributes bit with a bias of ._ mpac : _ + ^{m } pt(k-2\rightarrow k)\ ; m_{j-1}(k-2 ) .\nonumber\end{aligned}\ ] ] with three spins ^{m}pt(1\rightarrow 3)m_0(1).\end{aligned}\ ] ] with five spins ^{m}pt(3\rightarrow 5)m_1(3).\ ] ] the recursive formulas above are written from right to left , such that the first step of is reset of the reset spin , , followed by pt from spin 1 to spin 3 , and repetitions of the 4-step sequence in square brackets that ends with _ 3b - comp_. notice that the number of spins required by mpac to achieve a purification level is the same as for pac2 : but now depends on the number of cycles , : .for we get back the algorithm pac2 , where , thus 1pac pac2 ; we often retain here the original name , pac2 .asymptotically , for we get . for three spins , and is the fernandez algorithm described above , and for sufficiently large , spin will acquire a final bias of .this is calculated via .similarly , for five spins and sufficiently large , the final bias of the spin is .the choice of has a strong influence on the run - time , but fortunately , the polarization enhancement also increases rapidly with . for small spin - systems ( up to about 10 spins ) , very small values of ( 26 )are sufficient .figure [ fig : mpac - analysis - for - j<7 ] compares the cooling factors obtained by such mpac variants up to 13 spins .notably , 6pac cools to a similar extent as -pac .it is also evident that 2pac cools significantly better than the single - cycle variant ( 1pac pac2 ) ., where ) for small spin - systems after mpac with various ( see eq .[ eq : mpac - definition ] ) . ] the run - time of mpac ( neglecting and _ 3b - comp _ steps ) is in reset - time units with . the run - time is therefore reduced by a factor of . ] . if ( mpac is pac2 ) the run - time is . in general , for any integer for instance , for 21 spins , , , we get ( as long as the final bias is still much smaller than 1 ) , and for the same , but , . for 11 spins , , and yields in general , for and sufficiently large , spin is attributed a bias of .we denote the asymptotic case ( ) by .we now generalize mpac by replacing the constant by a vector .this added flexibility , in which a different value of is associated with each cooling level , was found to be beneficial in the analysis of the fibonacci algorithm , as we explain in section [ sec : fib - variants ] .we call this new algorithm ; it is defined as follows : + _ : _+ ^{m_j } pt(k-2\rightarrow k)\ ; m_{j-1}(k-2),\nonumber\end{aligned}\ ] ] where ( as before ) is the purification level . 
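Returning to the mPAC recursion defined above, its per-level effect can be summarized to leading order in the bias: each of the m cycles maps the target bias to half of itself plus the bias of the level below, so one purification level multiplies the bias by 2 - 2**(-m); m = 1 reproduces the 3/2 factor of PAC2 and large m approaches the factor-2-per-level limit. The short sketch below is my leading-order reading of the recursion (idealized resets, small biases), with illustrative parameter values.

```python
def mpac_level_factor(m):
    """Leading-order bias gain of one mPAC purification level with m compression cycles:
    each cycle maps eps_target -> eps_target/2 + eps_prev, starting from eps_target = eps_prev."""
    gain = 1.0
    for _ in range(m):
        gain = gain / 2.0 + 1.0
    return gain                            # equals 2 - 2**(-m)

def mpac_coldest_bias(eps0, levels, m):
    """Bias of the coldest spin after `levels` mPAC purification levels (leading order)."""
    return eps0 * mpac_level_factor(m) ** levels

for m in (1, 2, 4, 6):
    print(m, mpac_level_factor(m), mpac_coldest_bias(0.01, 4, m))
# m = 1 reproduces PAC2 (factor 3/2); m = 6 is already close to the factor-2 limit per level
```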
denotes resetting the bias of spin to .+ with three spins ^{m_1 } pt(1\rightarrow 3)m_0(1).\end{aligned}\ ] ] with five spins ^{m_2}pt(3\rightarrow 5)m_1(3).\end{aligned}\ ] ] in general , cools spin to a bias of , as is equivalent to performing the fernandez algorithm on bits , where bits and have equal initial biases .therefore ( see details in appendix [ app : vmpac ] ) it is important to mention that could be chosen according to various criteria ; for instance , it could depend on the total number of spins ( or on ) , in addition to its dependence on .the algorithm requires reset steps : hence , the order in which the appear is irrelevant .[ sec : fib - variants ] the fibonacci algorithm is exhaustive , in the sense that a very large number of 3b - comp steps are performed at each recursive level .practicable variants may be obtained by limiting the number of compressions , as we describe here .an algorithm that reaches a distance of from the fibonacci series was defined as follows : + _ -fibonacci : run _ ^{m_{n , k}}\ff(n , k-1),\ ] ] where is a reset step on bits 1 and 2 ( described by in section [ sec : mpac - variants ] ) , are chosen such that the bias of bit is at least , and is the fibonacci number . herewe choose , such that .this condition sets ( see appendix [ app : fib - run ] ) . for three spins fibonacciis ^{m_{3,3}}\ff(3,2)\\\ \ff(3,2 ) & = & m_0(1)\ ; pt(1\rightarrow 2)\ ; m_0(1).\end{aligned}\ ] ] this attributes spin number three with a bias of at least for four spins ^{m_{4,4}}\ff(4,3)\\\ \ff(4,3 ) & = & [ \ff(4,2)\ ; \bb(3)]^{m_{4,3}}\ff(4,2)\\\ \ff(4,2 ) & = & m_0(1)\ ; pt(1\rightarrow 2)\ ; m_0(1),\end{aligned}\ ] ] which attributes spins three and four with biases of at least and respectively .the term -fib hereinafter always refers to , such that . for large spin systems , say or so , -fib is not practicable , as it requires many cycles in the lower recursion levels . to circumvent this problem, we fix the number of compression steps , such that and denote this variant of fibonacci by mfib : _ mfib : run _ + ^{m}\ff(n , k-1).\ ] ] where is a reset step on bits 1 and 2 as before , and is any integer . with three spins ^{m}\ff(3,2),\end{aligned}\ ] ] and spin three is attributed a bias of with four spins ^{m}\ff(4,3)\\\ \ff(4,3 ) & = & \left[\ff(4,2)\ ; \bb(3)\right]^{m}\ff(4,2),\end{aligned}\ ] ] is the same as for three spins , and spin four is attributed a bias of : in general , for spins , the bias of the msb is given by the recursive formula : where specifically , we focus on cases where is a small constant ( ) . for , it can be shown that mfib is outperformed by pac3 .the run - time of mfib is is interesting to compare sopac to other algorithms .we first consider the cooling levels attained by each algorithm . attributes to spin a bias of . in comparison , with spins , the ppa and all - bonacci reach , and ;numerical analysis suggests that the lower bound of is the asymptotic bias . ] and fibonacci asymptotically approaches where is the element of the fibonacci series .while the ppa and the fibonacci algorithm cool the entire spin - system , mpac is defined so that it only polarizes the _ most significant bit ( msb)_. for a fair comparison of run - time we need to cool the entire string in mpac as well . 
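The mFib ladder can be sketched numerically in the same leading-order spirit, as shown below. This is one possible reading of the recursion: each compression on spins (k, k-1, k-2) maps the target bias to half the sum of the three biases, and the two lower spins are assumed to be restored to their own mFib values between compressions; the choice of the initial bias and the idealized resets are assumptions made for illustration only.

```python
def mfib_biases(n_spins, m, eps0=1.0):
    """Leading-order mFib bias ladder (in units of eps0): spins 1 and 2 stay at eps0,
    and spin k is compressed m times against spins k-1 and k-2, which are assumed to be
    restored to their own mFib biases between compressions (idealized resets)."""
    eps = [eps0] * n_spins
    for k in range(2, n_spins):            # 0-based index of the target spin
        for _ in range(m):
            eps[k] = (eps[k] + eps[k - 1] + eps[k - 2]) / 2.0
    return eps

for m in (1, 2, 3, 4):
    print(m, [round(x, 3) for x in mfib_biases(7, m)])
# as m grows, the ladder approaches the Fibonacci values 1, 1, 2, 3, 5, 8, 13
```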
to accomplish this , successive applications of mpac cool the less significant bits .namely , the process yields the asymptotic biases consider the application of cooling algorithms to cool all spins ( with initial biases of 0 ) , until the biases are sufficiently close to the asymptotic biases ; the resulting biases of the first seven spins ( as long as of the coldest spin is still much smaller than 1 ) are given here for and other exhaustive algorithms , as well as for practicable and sopac algorithms : * * ppa and all - bonacci * fibonacci * pac2 1pac * pac3 * 2pac * 4pac * 6pac * 3fib * 4fib the bias configurations are given in units of the initial bias , .-fib is not included , as for each total number of spins it produces different bias series . for -fib yields the biases .we consider a small spin - system comprising five spins .table [ tab : bias-5-spins ] compares the biases ( for the msb ) obtained by previous algorithms ( top ) , mpac ( middle section ) , and mfib ( bottom ) , as well as the number of resets required to create the entire bias series -bit gate is performed in a single computing step , and that the total number of such computing steps is negligible with respect to the duration of the reset steps . in reality this is not the case , and practicable algorithms such as pac and sopac are important . ] .note that the ppa cools better when more resets are allowed , approaching the limit of for ac with 5 spins . with only 28 resets ,the ppa attains a semi - optimal cooling level of ; for a similar cooling level , 4pac and 6pac require 101 and 197 resets , respectively .after 99 resets , the ppa obtains a near - optimal cooling level of .table [ tab : bias-7-spins ] presents a similar comparison for 7 spins .table [ tab : bias-9-spins ] and table [ tab : bias-11-spins ] present similar comparisons for 9 and 11 spins , respectively ; a spin - system of comparable size was recently used for benchmarking quantum control methods .appendix [ app : cool - factors ] compares the number of spins and run - time required by each algorithm to achieve several small cooling factors .the run - time analysis of the pac algorithms is conveniently expressed as a function of the purification level , ( see appendix [ app : alg - pac ] ) .the entire spin - system is cooled by successive applications of each algorithm , as shown above for mpac .in contrast , the fibonacci algorithm and the ppa were designed to generate the entire series of biases .the run - time of -fib is given by ( see calculation in appendix [ app : fib - run ] ) , and the run - time of the ppa was obtained by a computer simulation that iterates between the two steps of the algorithm . . cooling a 5-spin system by various algorithms .the optimal cooling level is . [cols="<,^,^",options="header " , ] +) for 4pac , 6pac and as a function of the purification level starting from an initial bias of . for more detailssee figure [ fig : mpac - analysis - for-0.01 ] . ]
_ algorithmic cooling ( ac ) of spins _ applies entropy manipulation algorithms in open spin - systems in order to cool spins far beyond shannon's entropy bound . ac of nuclear spins was demonstrated experimentally , and may contribute to nuclear magnetic resonance ( nmr ) spectroscopy . several cooling algorithms were suggested in recent years , including practicable algorithmic cooling ( pac ) and exhaustive ac . _ practicable _ algorithms have simple implementations , yet their level of cooling is far from optimal ; _ exhaustive _ algorithms , on the other hand , cool much better , and some even reach ( asymptotically ) an optimal level of cooling , but they are not practicable . we introduce here _ semi - optimal _ practicable ac ( sopac ) , wherein a few cycles ( typically 2 - 6 ) are performed at each recursive level . two classes of sopac algorithms are proposed and analyzed . both attain cooling levels significantly better than pac , and are much more efficient than the exhaustive algorithms . the new algorithms are shown to bridge the gap between pac and exhaustive ac . in addition , we calculate the number of spins required by sopac in order to purify qubits for quantum computation . as few as 12 and 7 spins are required ( in an ideal scenario ) to yield a mildly pure spin ( 60% polarized ) from initial polarizations of 1% and 10% , respectively . in the latter case , about five more spins are sufficient to produce a highly pure spin ( 99.99% polarized ) , which could be relevant for fault - tolerant quantum computing .
development of the methods that allow improving accuracy of determining the asteroid sizes ( i.e. whether they measuredozens or hundreds meters in diameter ) is important for correct estimateof damage they can cause ( either regional or global catastrophes , respectively ) . at the same timethis research can be interesting for specialists who study shapes and the surface geometry of small bodies of the solar system . in our previous works we proposed the method to estimate sizes of passive cosmic objects which method utilizes the radiolocation probing by ultra - high - resolving nanosecond signals for obtaining radar signatures .the method involves radio pulse strobing of reflected ultra - high - resolving signals from the surface of the cosmic object .the complete coherence of the probing and reflected signals is an essential condition of the method .however such a condition corresponds to idealized case when no phase instabilities exist in the signal processing system .the real sources of reference oscillations have nonzero instant instability of frequency which leads to loss of coherence at large signal lags ( large distances ) .this factor restricts performance of coherent processing methods and leads to reduction of signal - to - noise ratio at the output of a stroboscopic system . in the analysis of time scale transformation of broadband radiosignals the complete coherence of carrier frequencies of the measured and the reference oscillations is commonly assumed .such a concept corresponds to an absence of phase instabilities in the signal processing system . in real devicesthis condition can be broken due to deviations of reference generator frequency and phase , instabilities of delays in a signal path and other factors .these factors restrict performance of coherent processing methods and lead to reduction of signal - to - noise ratio at the output of a stroboscopic system .let us consider the influence of phase instability of the carrier frequencies of the measured and the reference signals on statistical characteristics of the transformed signal in stroboscopic processing .we will describe the loss of coherence by a random process , i.e. by a fluctuation component of a phase difference between the received and the strobe radio signals. the statistical characteristics of the phase difference are considered to be known . in the analysiswe will assume that the coefficient of spectral transformation is large enough to use asymptotic estimates .the model of stroboscopic processing of the reflected signals ( fig .[ fig:1 ] ) differs from one considered in the work by the low - pass filter being replaced with the tracking filter which adaptively tunes to differential frequency of carriers where is the carrier frequency of the probing signal , is the radial velocity of an asteroid . 
for an exact determination of differential frequency , the radial velocity of an asteroid has to be measured independently using narrow - band methods based on center of mass of the doppler signal spectrum .one of such effective methods is the method of real - time assessment of radial velocity by means of fractional differentiation of a doppler signal considered in the previous work of the authors .let us represent the complex models of the received and the reference signals including phase instability in the form } } , \qquad \dot{a}(t ) = \sum_{k=0}^n { a_1(t - kt_1)e^{j\omega_1 t}},\ ] ] where and are the complex envelopes providing high range resolution ; , are the repetition periods of the signal and the strobe ; ; is the sampling increment ( ) .the value of stroboscopic sample of a signal in the -th sampling period can be presented as } dt.\ ] ] let us assume be a stationary random zero - mean process with correlation window exceeding the duration of strobe signals .let us suppose also that slow phase displacements are tracked by stabilizing system , wherefore one can neglect the correlation of adjacent samples and set .this allows for presentation of the sample in the form }}{2 t } \int_{0}^{t } a(t^{\prime } ) a_1(t^{\prime}-k\delta t ) dt = \dot{y}_{k0 } e^{j\theta_k}.\ ] ] where is the sample of the random process and stands for the stroboscopic sample when there is no phase instability in the signal processing system . to ensure the mode of `` ultra - high - resolution '' of radar signatures of asteroids one has to use nanosecond signals with pulse ratio of order , thus , the aforementioned approximations are perfectly acceptable . the average value ( mathematical expectation ) of samples obtained by averaging over phase can be expressed as where \rangle = \chi_{\theta}(1) ] .variance of samples amounts to under the assumptions made .the ratio of variance to squared mean value is it has the meaning of relative power level of output noise resulting from phase fluctuations .this ratio can be significantly reduced by increasing the spectral transformation coefficient at the expense of sampling step decrease and by using data storage in a system digital filter . in this casethe variance will be lowered by a factor where is the accumulation coefficient .given that the filter and the spectrally compressed signal band are adaptively matched , the value is asymptotically equivalent to the number of sampling steps packed in the signal duration : .( mathematical expectation and root - mean - square deviation ) vs. phase instability for different values of accumulation coefficient .,scaledwidth=100.0% ]we performed numerical simulation of processing in the radio pulse strobing scheme ( fig .[ fig:1 ] ) of the signal ] with the effective duration determined according to the method of moments : .the random process was specified as a sequence of uncorrelated samples with the normal distribution : .the filter s transfer function was rectangular with the bandwidth equal to the width of the transformed signal power spectrum at ( 20db ) level .since the energy of received signal is important for optimal reception under additive noise conditions , the influence of phase instability was estimated as a decrease of the mean signal s energy at the output of the signal processing filter , relative to the energy under full coherence condition ( ) . 
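A simplified numerical check of the suppression described above can be done directly: for zero-mean Gaussian phase fluctuations with rms sigma, the mean stroboscopic sample is reduced by the characteristic function exp(-sigma**2/2), and accumulating groups of samples in the digital filter lowers the relative noise power by the accumulation factor. The Gaussian model follows the statistical simulation described in the text, while the specific sigma values and the accumulation factor 64 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def strobe_stats(y0, sigma_theta, n_samples, n_accum):
    """Monte Carlo mean and variance of stroboscopic samples y_k = y0 * exp(j*theta_k)
    with Gaussian phase noise, after averaging groups of n_accum samples (accumulation)."""
    theta = rng.normal(0.0, sigma_theta, size=(n_samples, n_accum))
    y = y0 * np.exp(1j * theta).mean(axis=1)
    return y.mean(), y.var()

y0 = 1.0
for sigma in (0.2, 0.5, 1.0):
    mean1, var1 = strobe_stats(y0, sigma, 20000, 1)
    meanA, varA = strobe_stats(y0, sigma, 20000, 64)
    print(f"sigma = {sigma}: <y> = {mean1.real:.3f} (exp(-s^2/2) = {np.exp(-sigma**2 / 2):.3f}), "
          f"relative noise power {var1 / abs(mean1)**2:.3f} -> {varA / abs(meanA)**2:.4f} "
          f"with 64-fold accumulation")
```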
fig .[ fig:2 ] demonstrates statistical characteristics of the output signal of the stroboscopic signal processing system for different accumulation coefficients in the filter obtained by statistical modeling at specified values ; ; ; . the quantity corresponding to signal s energy decrease at the output of the band - pass filter of the stroboscopic signal processing system vs. phase instability is plotted in fig .[ fig:2 ] .the relative errors caused by phase instability are also shown .. presented results of statistical modeling are obtained by averaging over 100 simulation runs .as previously noted , for a cosmic object of about 50 m in size the range resolution of m can be provided by coherent stroboscopic signal processing of signals with duration ns in the x - range ( ghz ) and frequency band mhz . as it can be seen from fig .[ fig:2 ] decrease of mean received signal power by half corresponds to the phase instability of rad .the value of phase deviation caused by a short - time frequency instability and by finite signal propagation time is it represents the wiener process with normal distribution .the upper - bound estimate of phase difference gives .for operation of stroboscopic radar station at range with acceptable phase instability of rad it is required to ensure .this condition ensures that the noise level does not exceed db at the accumulation coefficient . at the contemporary technology level the stability of reference generators with relative errorno grater than , which corresponds to hz in the x - range , is quite realizable .thus , the system can functionally operate within 5 million km distance .loss of coherence in stroboscopic radar ranging systems caused by phase instabilities of the reference sources leads to sensitivity degradation and is equivalent to the effects of modulating interference .noise reduction at the output of the stroboscopic converter caused by loss of coherence can be achieved by reducing the sample step with corresponding increase of the processing time .we are grateful to vitaly korolev for the help at vectorization of drawings and victor levi for careful reading of manuscript .the work is fulfilled within the framework of projects supported by grants from the russian foundation for basic research 15 - 47 - 02438-r - povolzhie - a and 14 - 02 - 97001-r - povolzhie - a .
we consider the problem of coherence loss in stroboscopic high - resolution radar ranging due to phase instability of the probing and reference radio signals . requirements on the coherence of reference generators in a stroboscopic signal processing system are formulated . the results of statistical modeling are presented . coherence loss in stroboscopic radar ranging in the problem of asteroid size estimation v. d. zakharchenko , i. g. kovalenko , v. yu . ryzhkov _ volgograd state university , universitetskij pr . , 100 , volgograd 400062 , russia _ _ keywords _ near - earth objects , asteroids , stroboscopic radar observations , wideband radio signals
terahertz spectroscopy requires a reliable determination of the amplitude and phase of the terahertz radiation .one established method for the generation and coherent detection of continuous - wave terahertz radiation is photomixing .typically , terahertz radiation is generated by illuminating a biased photomixer ( the transmitter , tx ) with the optical beat of two near - infrared lasers , and coherent detection is achieved by measuring the photocurrent in a second photomixer ( the receiver , rx ) which is illuminated by both the optical beat and the terahertz radiation .the accuracy of the quantity of interest , e.g. the transmission of a given sample , depends on the stability of the measured terahertz signal .this signal in turn depends on the stabilities of the responsivities and of the two photomixers and on the stabilities of the optical powers of the two lasers at the two photomixers .all six of these quantities are sensitive to temperature drifts . here, we report on an efficient normalization of the terahertz signal by employing _ both _ photomixers as powermeters .our aim is to minimize the uncertainty of . in order to motivate an expression describing , we start from the generation of the terahertz wave at the transmitter . for a constant bias voltage of ,the transmitter on the one hand behaves like a photoconductive resistor , i.e. the dc photocurrent at the transmitter , , is given by the bias - dependent responsivity and the total incident optical power , where and denote the optical powers of the two lasers at the transmitter .on the other hand , the interference between the two laser beams gives rise to a beat which yields a current oscillating at the difference frequency .thus the transmitter emits a terahertz wave with power . in transmission geometry ,the power at the receiver is given by .the photocurrent in the receiver , , depends on the phase difference between the terahertz wave and the optical beat signal at the receiver .the information on amplitude and phase difference can be separated by phase modulation with a mechanical delay stage or , in our case , a fiber stretcher .we focus on the amplitude where denote the optical powers of the two lasers at the receiver , and and are the responsivity of the receiver and its voltage derivative , respectively . here , we have used the linear photocurrent - voltage characteristic of a photomixer ( see fig . 1 ) , thus with the electric field amplitude .equation ( [ eqn : ithzvsr ] ) assumes perfect spatial overlap of the two lasers which is exactly the case in our setup where the superimposed light is guided in single - mode fibers .however , if the photomixers are illuminated by a free beam , spatial overlap might have to be considered .altogether this yields an expression for the amplitude of the terahertz signal , apparently , depends on all of the four optical powers , which in principle may drift independently . a drift of e.g. the optical output power of laser 1 affects and but not or , whereas an attenuation or mechanical displacement of the beam in only the transmitter branch will change and .polarization effects can influence all components differently .the polarization plays a role here because of the finger structure of our photomixers which more strongly reflects the component of the field parallel to the fingers than the perpendicular one ( see e.g. ) . 
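As a toy numerical illustration of the amplitude-phase separation by phase modulation mentioned above, the snippet below generates a receiver current proportional to cos of the phase difference, sweeps the modulated phase, and recovers amplitude and phase by projecting onto the modulation. The sinusoidal dependence and the demodulation scheme are a simplified stand-in for the actual processing in the spectrometer, and all numbers are illustrative.

```python
import numpy as np

# receiver current ~ A * cos(phi + phi_mod); sweeping phi_mod (fiber stretcher / delay line)
# allows amplitude and phase to be separated
A_true, phi_true = 0.7, 0.9                            # illustrative amplitude and phase
phi_mod = np.linspace(0, 2 * np.pi, 400, endpoint=False)
current = A_true * np.cos(phi_true + phi_mod)

# demodulate by projecting onto cos and sin of the modulation
c = 2 * np.mean(current * np.cos(phi_mod))             # = A * cos(phi)
s = 2 * np.mean(current * np.sin(phi_mod))             # = -A * sin(phi)
print("amplitude:", np.hypot(c, s), " phase:", np.arctan2(-s, c) % (2 * np.pi))
```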
in order to fully compensate the above mentioned drifts , one would have to measure all four optical power components as well as the responsivities ( or ) , which is not easily practicable . as a sophisticated alternative, we propose to determine dc photocurrents in _ both _ photomixers , treating both of them like photodiodes .we will show that this enables us to compensate for drifts of the responsivities and of the total optical powers illuminating the two photomixers . as a result, the normalized signal is only sensitive to a drift of the relative power of the two lasers . to the best of our knowledge ,biasing a photomixer at the receiver side has not been reported .the transmitter and the unbiased receiver have been discussed above , see eqs .( [ eqn : itx ] ) and ( [ eqn : ithzvsr ] ) . here, we additionally bias the receiver photomixer with a small dc voltage of typically .such a small bias does not change the linear behavior of the receiver photomixer ( see fig .[ fig : ivcharacteristic ] ) , thus the slightly biased photomixer can still be used as a detector . for small values of may write the responsivity as .thus the dc photocurrent is given by typically , ( and ) is not a true dc current because the bias voltage at the transmitter is modulated e.g. in the khz range in order to apply lock - in detection of .therefore , the dc photocurrent can be separated from .even an unbiased photomixer shows a small residual photocurrent due to a photovoltage which arises from inhomogeneous illumination .experimentally , we estimate this offset voltage to be of the order of a few millivolt .the offset voltage itself is sensitive to drifts of the optical power and thus unsuitable for using the photomixer as a reliable powermeter .this task requires a stable voltage , independent of the optical power .hence , the reliability of the normalization is improved if one selects specimen with low offset voltages , employs low optical power , and applies a dc bias voltage which is significantly larger than the self - induced offset voltage of the photomixer . in short , we determine three different currents .the photocurrent at the transmitter and the terahertz photocurrent at the receiver both oscillate at the frequency of the bias modulation of the transmitter , whereas is a true dc current due to the dc bias voltage at the receiver .then , and are used to normalize , where and are constants , e.g. the long - term average values of and , respectively .note that the normalized photocurrent does not depend on the responsivities and anymore .this is valid under the assumption that is equal for a constant and for applying a terahertz field , which is supported by our results described below .let us now examine how this normalization affects the sensitivity to a drift of the optical power .the partial derivatives of and with respect to e.g. and are given by compared to , we find that is less sensitive to a variation of by a factor of in the limit of , a small drift of is fully compensated .moreover , also stays constant for a common attenuation of both lasers , i.e. for power changes . however , full compensation is not achieved if and are significantly different and , at the same time , change differently . 
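A minimal numerical model of the proposed normalization is sketched below. It assumes an idealized current model: the transmitter dc current scales with its responsivity and total power, the weakly biased receiver dc current scales with the voltage derivative of its responsivity, the offset photovoltage is neglected, and the terahertz photocurrent carries the product of both responsivities and the geometric means of the laser powers. With these assumptions, dividing by the product of the two monitor currents removes the dependence on total powers and responsivities, leaving only the splitting-ratio dependence; all parameter values are arbitrary units chosen for illustration.

```python
import numpy as np

def spectrometer_currents(P_tx, P_rx, r_tx, r_rx, S_tx, S_rx_prime, U_rx=0.01):
    """Idealized currents of the cw photomixing spectrometer (arbitrary units).
    Assumptions: receiver dc current = S'_rx * U_rx * P_rx (offset photovoltage neglected),
    terahertz field ~ S_tx * sqrt(P1_tx * P2_tx), detection ~ S'_rx * sqrt(P1_rx * P2_rx)."""
    I_tx = S_tx * P_tx                                        # biased transmitter
    I_rx_dc = S_rx_prime * U_rx * P_rx                        # weakly biased receiver
    I_thz = (S_tx * np.sqrt(r_tx * (1 - r_tx)) * P_tx) * \
            (S_rx_prime * np.sqrt(r_rx * (1 - r_rx)) * P_rx)
    return I_tx, I_rx_dc, I_thz

# nominal operating point defines the constants of the normalization
C_tx, C_rx, _ = spectrometer_currents(1.0, 1.0, 0.5, 0.5, 1.0, 1.0)

def normalized(P_tx, P_rx, r_tx, r_rx, S_tx, S_rx_prime):
    I_tx, I_rx_dc, I_thz = spectrometer_currents(P_tx, P_rx, r_tx, r_rx, S_tx, S_rx_prime)
    return I_thz * C_tx * C_rx / (I_tx * I_rx_dc)

print(normalized(1.0, 1.0, 0.5, 0.5, 1.0, 1.0))   # nominal value
print(normalized(0.8, 0.9, 0.5, 0.5, 1.2, 0.9))   # total-power / responsivity drift: unchanged
print(normalized(1.0, 1.0, 0.4, 0.4, 1.0, 1.0))   # splitting-ratio drift 0.5 -> 0.4: ~4 % change
```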
therefore , equal laser powers are preferable for the normalization .an analogous calculation yields the same results for the receiver .it is instructive to rewrite in terms of the splitting ratios of the two laser powers at the transmitter and at the receiver , ^{1/2 } \ , [ r_\mathrm{rx}(1-r_\mathrm{rx})]^{1/2 } \ , .\label{eqn : ithznormsplittingratio}\ ] ] the normalized terahertz photocurrent is no longer a function of the total power or of the responsivities of the photomixers , but it only depends on the splitting ratios in the two branches . in the simple case of equal splitting ratios for transmitter and receiver , , eq .( [ eqn : ithznormsplittingratio ] ) reduces to sketch of our experimental setup is given in fig .[ fig : setup ] , for details we refer to refs . . for both seed lasers , the optical power at a given frequencyis actively stabilized to about % , but the tapered amplifier and the photomixers are sensitive to temperature drifts .the photocurrent at the receiver is preamplified and then digitized to determine the terahertz photocurrent and the dc receiver photocurrent .the transmitter photocurrent is measured with the help of a 1k resistor in series with the photomixer .the signals of both optical frequencies are amplified simultaneously in the tapered amplifier , and the resulting beam with a particular value of is sent to both photomixers . thus the simple case of equal splitting ratios ( see eq .( [ eqn : r ] ) ) is applicable to our setup .however , small changes of and in the path from the amplifier to the two photomixers can not be excluded .the gain of the tapered amplifier is wavelength dependent due to reflections at the surfaces of the amplifier chip .a temperature - induced change of the chip length therefore leads to a drift of the ratio of the laser powers which can typically range from 40:60 to 50:50 , equivalent to to 0.5 .we thus expect that fluctuations of the normalized photocurrent are suppressed to below 5% .as a first example we study the effect of an artificially introduced drift of the power of one laser ( and ) . for this measurement , the tapered amplifier was removed from the setup .note that was not measured directly at the photomixers but with a separate photodiode .experimentally , we started with equal laser powers , , and then reduced , keeping the power of the second laser fixed ( see fig .[ fig : normpc ] ) . according to eqs .( [ eqn : ithzvsefield ] ) and ( [ eqn : r ] ) , we expect that the terahertz photocurrent shows the same relative drift as , i.e. , whereas the relative drift of the normalized current is expected to be much smaller ( solid lines in fig .[ fig : normpc ] ) .this is corroborated by the experimental data .however , the measured drift is slightly larger than expected .this is mainly due to the fact that shows additional fluctuations of the order of roughly 3% .if we subtract these deviations between and the straight line from , then the result is very close to the predicted curve ( see crosses in fig . [fig : normpc ] ) .the remaining difference can be attributed to , e.g. , the small power - dependent offset voltage at the receiver caused by inhomogeneous illumination .although the measured data do not fully reach the ideal case of the theoretical prediction , the stability of the signal is significantly enhanced by the normalization .for instance a reduction of of 20% gives rise to only 3% change of the measured value of . 
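For the first experiment, the expected drift curves can be computed from the same idealized model, assuming both photomixers see the same splitting ratio and that reducing the power of one laser affects transmitter and receiver branches alike. The printed numbers are outputs of these formulas, not the measured values, which drift somewhat more owing to the additional fluctuations discussed in the text.

```python
import numpy as np

# predicted relative drift when P1 is reduced while P2 stays fixed, starting from equal powers
scale = np.linspace(1.0, 0.6, 5)             # P1 reduced down to 60 % of its initial value
P1, P2 = 0.5 * scale, 0.5
r = P1 / (P1 + P2)                           # common splitting ratio at both photomixers

I_thz_rel = (P1 * P2) / (0.5 * 0.5)          # I_THz ~ sqrt(P1 P2) at tx times sqrt(P1 P2) at rx
I_norm_rel = (r * (1 - r)) / 0.25            # normalized signal ~ r (1 - r) for equal ratios

for sc, a, b in zip(scale, I_thz_rel, I_norm_rel):
    print(f"P1/P1_0 = {sc:.2f}:  I_THz -> {a:.3f},  normalized signal -> {b:.3f}")
```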
in the second example , see fig .[ fig : splittingratio ] , we changed the splitting ratio while keeping the total laser power fixed . in this case , follows the predicted behavior excellently . as a third example, we deployed the complete spectrometer from fig .[ fig : setup ] and monitored the terahertz amplitude at 600ghz over 2 hours while periodically changing the laboratory temperature between 22 and 24 ( see fig .[ fig : data ] ) .the measured amplitude varies by approximately 15% peak - to - peak , whereas the stability of the normalized amplitude is improved to about 4% peak - to - peak ( bottom panel ) . as discussed in section [ sec : setup ] , in our setup the splitting ratio drifts between 0.4 and 0.5 as a function of temperature . according to eq .( [ eqn : r ] ) , this corresponds to a drift of of 4% , in excellent agreement with our data .the dc photocurrents in the transmitter and receiver are correlated to some extent ( see fig . [fig : data ] , second and third panel ) , which here is mainly due to drifts of the tapered amplifier which is common to both optical paths .but there are also significant differences in the fluctuations of and , substantiating the necessity to measure both currents separately .the normalized amplitude is less stable if only or only is used for the normalization .this is evident from the fourth panel of fig .[ fig : data ] , which shows and . in the latter , the factor has been replaced by in eq .( [ eqn : ithznorm ] ) . in fig .[ fig:100percentline ] we show the 100% line of the spectrometer in the frequency range from 50ghz to 800ghz .the measurements were performed directly after switching on the setup . for a step size of 1ghz ,a single run took about 10 minutes , and there was a delay of about 10 minutes between the two runs .the initial change of the temperature gives rise to a drift of the 100% line , which is significantly reduced by the normalization .as discussed above for the data of fig .[ fig : data ] , the normalization is very well suited for temperature - induced drifts .however , it does not have any effect on the noise - like features present in the 100% line .these are caused by e.g. the uncertainty in the determination of the amplitude from the raw data measured as a function of ( see eq .( [ eqn : ithzvsefield ] ) ) and by small fluctuations of the frequency .finally , we varied the receiver bias and measured the noise photocurrent , i.e. the standard deviation of the terahertz photocurrent with blocked terahertz beam .we found that the noise photocurrent depends on the details of the photomixer device .for some devices , the noise is nearly independent of the bias , while other devices show a significant increase of the noise photocurrent when a bias of a few 10mv is applied .presumably , this difference originates in differences in the photomixer resistivity . in order to obtain stable and reliable data ,it is of course desirable to reduce the fluctuations of the laboratory temperature in the first place .however , stabilization to much better than is not a trivial task .moreover , the stabilization achieved via photocurrent normalization may be a significant improvement for terahertz applications requiring measurements outside a regular laboratory .as an alternative to the method described here , one may consider to monitor the optical power of the two lasers with , e.g. 
, a photodiode .however , the discussed normalization via the photocurrents has two main advantages .it does not require an extra sensor , and the power is measured directly within the photomixers .therefore , the normalization also compensates drifts of the responsivity of the photomixers , e.g. caused by mechanical drifts within the photomixer device .we have described and demonstrated a normalization scheme for the terahertz photocurrent in a continuous - wave photomixing system .this method is based on measuring the dc photocurrents in both the transmitter and receiver photomixers .consequently , no extra sensor is needed and the method can easily be implemented .any change of the laser power illuminating the photomixers can be described as a change of the total laser power in combination with a changing splitting ratio of the two laser powers .the normalization fully compensates drifts of the total laser power as well as drifts of the responsivities , thus the stability of the normalized terahertz signal only depends on the splitting ratio .we have provoked large changes of the terahertz signal by either reducing the power of one laser or by changing the laboratory temperature to simulate unstable ambient conditions . in all cases ,the normalized signal is stable within a few percent .this project is supported by the dfg via sfb 608. 99 k.a .mcintosh , e.r .brown , k.b .nichols , o.b .mcmahon , w.f .dinatale , and t. m. lyszczarz , `` terahertz photomixing with diode lasers in low - temperature - grown gaas , '' appl .. lett . * 67 * , 3844 ( 1995 ) .s. verghese , k.a .mcintosh , s. calawa , w.f .dinatale , e.k .duerr , and k.a .molvar , `` generation and detection of coherent terahertz waves using two photomixers , '' appl .. lett . * 73 * , 3824 - 3826 ( 1998 ) .deninger , t. gbel , d. schnherr , t. kinder , a. roggenbuck , m. kberle , f. lison , t. mller - wirts , and p. meissner , `` precisely tunable continuous - wave terahertz source with interferometric frequency control , '' rev .instr . * 79 * , 044702 ( 2008 ) .a. roggenbuck , h. schmitz , a. deninger , i. cmara mayorga , j. hemberger , r. gsten , and m grninger , `` coherent broadband continuous - wave terahertz spectroscopy on solid - state samples , '' new j. phys . ** 1**2 , 043017 ( 2010 ) .s. matsuura , h. ito , `` generation of cw terahertz radiation with photomixing , '' in _ terahertz optoelectronics , _k. sakai , ed .( springer - verlag berlin heidelberg , 2005 ) 157 - 202 .d. saeedkia , s. safavi - naeini , `` terahertz photonics : optoelectronic techniques for generation and detection of terahertz waves , '' j. lightwave technol .* 26 * 2409 - 2423 ( 2008 ) .s. preu , g.h .dhler , s. malzer , l.j .wang , a.c .gossard , `` tunable , continuous - wave terahertz photomixer sources and applications , '' j. appl . phys . * 109 * 61301 - 61356 ( 2011 ) .bjarnason and e.r .brown , `` sensitivity measurement and analysis of an eras : gaas coherent photomixing transceiver , '' appl .. lett . * 87 * , 134105 ( 2005 ) .a. roggenbuck , k. thirunavukkuarasu , h. schmitz , j. marx , a. deninger , i. cmara mayorga , r. gsten , j. hemberger , and m. grninger , `` using a fiber stretcher as a fast phase modulator in a continuous wave terahertz spectrometer , '' j. opt .b * 29 * , 614 - 620 ( 2012 ) .i. cmara mayorga , e.a .michael , a. schmitz , p. van der wal , r. gsten , k. maier , and a. dewald , `` terahertz photomixing in high energy oxygen- and nitrogen - ion - implanted gaas , '' appl .. lett . 
* 91 * , 031107 ( 2007 ) . biased uni - travelling - carrier photodiodes have been employed as receivers , see t. nagatsuma , a. kaino , s. hisatake , k. ajito , h .- j . song , a. wakatsuki , y. muramoto , n. kukutsu , and y. kado , piers online * 6 * , 390 - 394 ( 2010 ) , but there the purpose of biasing is not related to normalization . brown , k.a . mcintosh , f.w . smith , k.b . nichols , m.j . manfra , c.l . dennis , and j.p . mattia , `` milliwatt output levels and superquadratic bias dependence in a low - temperature - grown gaas photomixer , '' appl . phys . lett . * 64 * , 3311 - 3313 ( 1994 ) . fig . 1 . current - voltage characteristic of an illuminated photomixer . at low voltages , the photocurrent is proportional to the voltage ( black points : data , red line : linear fit ) . consequently , the photomixer converts a terahertz electric field linearly into a photocurrent ( see eq . ( [ eqn : ithzvsefield ] ) ) and can be used as a detector . inset : photocurrent - voltage characteristic for larger voltages ( see also ) . fig . 3 . drift of the measured terahertz photocurrent ( black ) and of the normalized photocurrent ( red ) upon reduction of the power of one of the two lasers . corresponds to equal laser powers , i.e. , or . solid lines : predictions according to eqs . ( [ eqn : ithzvsefield ] ) and ( [ eqn : r ] ) . full symbols : measured data . crosses are obtained by subtracting the deviation between black symbols and black line from the red symbols . fig . 4 . normalized terahertz photocurrent vs. splitting ratio of the two laser powers , , for a constant total power . the solid line depicts our expectation , see eq . ( [ eqn : r ] ) . is scaled such that . fig . 5 . effect of a periodic variation of the laboratory temperature by ( top panel ) on the stability of the measured terahertz signal ( black line , bottom panel ) . the drift is strongly suppressed in the normalized photocurrent ( red line , bottom panel ) . second and third panel : dc photocurrents and at transmitter and receiver , respectively . fourth panel : normalized photocurrent if only ( green ) or only ( blue ) is used for the normalization , e.g . .
in a continuous - wave terahertz system based on photomixing , the measured amplitude of the terahertz signal shows an uncertainty due to drifts of the responsivities of the photomixers and of the optical power illuminating the photomixers . we report on a simple method to substantially reduce this uncertainty . by normalizing the amplitude to the dc photocurrents in both the transmitter and receiver photomixers , we achieve a significant increase of the stability . if , e.g. , the optical power of one laser is reduced by 10% , the normalized signal is expected to change by only 0.3% , i.e. , less than the typical uncertainty due to short - term fluctuations . this stabilization can be particularly valuable for terahertz applications in non - ideal environmental conditions outside of a temperature - stabilized laboratory .
the kolmogorov - johnson - mehl - avrami ( kjma ) model finds application in a vast ambit of scientific fields which ranges from thin film growth to materials science to biology and pharmacology , let alone the applied probability theory . in the majority of these studiesthe authors made use of a simplified version of the kjma formula : the stretched exponential , where is the fraction of the transformed phase , and ( the latter known as avrami s exponent ) being constants .the model , in principle , is simple because it rests on a poissonian stochastic process of points in space , to which a growth law is attached .in fact , owing to the poissonian process , the nucleation takes place everywhere in the space i.e. also in the already transformed phase . this partially fictitious nucleation rate ( ) , for we are dealing with a poissonian process , is linked to the actual ( real ) nucleation rate ( ) according to : ] namely , which is never satisfied ( ) . for the kjma - compliant growththe contribution of the containing term becomes \\ \\= \pi^{2}i_{a}^{2}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}r^{2}(t , t_{1 } ) \big [ r^{2}(t , t_{2})-r^{2}(t_{1},t_{2})\big]\\ \\ = \frac{x_{e}^{2}}{2}-\pi^{2}i_{a}^{2}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}r^{2}(t , t_{1})r^{2}(t_{1},t_{2 } ) .\end{array}\ ] ] it is possible to show that for linear growth the last term of eqn.[secondterm ] is equal to , consequently we get , which coincides with the term of the same order in the kjma series eqn.[serie3 ] . let s now briefly consider the contribution of the containing term in the general expression eqn.[uncov1 ] .fig.3 shows the integration domains and the correlation circles for the three nuclei born at time ( ) .the configuration integral over in eqn.[coeffic1 ] becomes function . in the drawing and . in the case of kjma - compliant growthall the circles are entirely within the circle . ] where is assumed . in this equation , is the area spanned by the third nucleus when the first and the second are located at and , respectively .because of the possible overlap between the correlation circles and , and for is encompassed within this area is a function of as (r_{12}-r(t_{2},t_{3})-r(t_{1},t_{3 } ) ) , \end{array}\ ] ] where , with being the overlap area of two circles of radius and at relative distance ^{2}}+\\ \\ + \rho_{1}^{2}\arccos \frac{\rho_{1}^{2}+x^{2}-\rho_{2}^{2}}{2x\rho_{1}}+ \rho_{2}^{2}\arccos\frac{\rho_{2}^{2}+x^{2}-\rho_{1}^{2}}{2x\rho_{2}}. \end{array}\ ] ] it turns out that the computation of the third order term of the series , eqn.[f3 ] , is a formidable task indeed .we do not attempt to perform the exact estimate of this term which , however , must coincide with the same order term of the kjma series . on the other hand ,an approximate evaluation of this term by using an oversimplified form of the area , is possible by formally rewriting this area as ] . in the case of complete - overlap ( )we get , [r^{2}(t , t_{3})-r^{2}(t_{1},t_{3 } ) ] , \end{array}\ ] ] that for the linear growth ( ) gives to be compared with the exact value which brings an uncertainty of .let us now consider the parabolic growth . in this casethe series expansion of the function , given by eqn.[kjma2 ] , can be performed by employing the same computation pathway discussed above , where now and . 
in this case implying and the two last coefficients being and .it is worth pointing out that in such an evaluation the transformed fraction , , is comprehensive of the contribution of phantoms .in fact , we recall that eqn.[kjma2 ] is the kjma solution with the phantom included nucleation rate , . the containing term of eqn.[uncov1 ] gives the extended volume fraction .as far as the third term is concerned ( contribution ) , it is possible to show that also for the integral eqn.[secondterm ] coincides with the third term of eqn.[parabola1 ] , .however , it is important to stress that eqn.[secondterm ] does not coincide , in this case , with the integral over the function of the exact solution eqn.[coeffic1 ] , since in the latter equation enter the actual nuclei , only . from the mathematical point of view , in the case of parabolic growth , the correlation circle is not contained within the integration domain as depicted in fig.2b .in other words , for kjma - non - compliant growth laws the area is a function of and the term of order in eqn.[coeffic1 ] does not coincide with of eqn.[parabola1 ] ( parabolic growth ) . in particular , under these circumstanceswe get (r(t , t_{2})-r_{1}-r(t_{1},t_{2 } ) ) , \end{array}\ ] ] where with the overlap area of two circles of radius and at distance ( eqn.[areoverlap ] ) .the ultimate aim of this section is to propose a simple formula for describing the kinetics on the basis of the `` actual '' extended transformed fraction , . on the ground of eqn.[uncov2 ]the transformed fraction can be rewritten in the general form ,\ ] ] where embodies the contributions of correlations among nuclei .it is worth noticing that for kjma - complaint growths ( random nucleation ) eqn.[uncov2 ] actually reproduces the kjma formula .in fact by identifying with the phantom included nucleation rate ( ) one gets leading to the formula . on the other hand ,working with the actual nucleation rate , in eqn.[uncov2 ] , and the series has infinite terms . can therefore be expanded as a power series of the extended actual volume fraction , . moreover , by exploiting the homogeneity properties of the functions ( see below ) , it is possible to attach a physical meaning to the power series coefficients , in terms of nucleation rate and growth law .also , for constant the coefficients of this series only depend upon growth law .to the aim of achieving a suitable compromise between handiness and pliability , we retain the liner approximation of obtaining the following kinetics ,\ ] ] with and constants . for the sake of completeness we point out that , according to its physical meaning , the parameter should be unitary .nevertheless , the substitution of the infinite expansion with only two terms authorizes the introduction of the new parameter . in any case, the values are found to be nearly one ( see fig.7 below ) . 
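To make the fitting step concrete, the following sketch shows how a two-parameter function of this type can be fitted with a standard least-squares routine. We read eqn.[sim2] as X = 1 - exp[-(a X_e,a + b X_e,a^2)]; the data below are a synthetic stand-in with assumed parameter values, not the simulation output of the next section, and the symbols a and b are simply our labels for the two constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def kinetics(xe, a, b):
    # two-parameter guess function, our reading of eqn. [sim2]:
    # X = 1 - exp(-(a*Xe + b*Xe**2)), with 'a' expected to come out close to one
    return 1.0 - np.exp(-(a * xe + b * xe**2))

# synthetic stand-in data (hypothetical numbers, not the simulated kinetics)
xe = np.linspace(0.0, 3.0, 60)
x_data = kinetics(xe, 1.0, 0.12) + np.random.default_rng(1).normal(0.0, 0.003, xe.size)

(a_fit, b_fit), _ = curve_fit(kinetics, xe, x_data, p0=(1.0, 0.0))
print(a_fit, b_fit)   # a_fit ~ 1; b_fit measures the departure from a pure exponential
```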
in order to study the transition kinetics in terms of actual nuclei and to test eqn.[sim2 ] , we worked out 2d computer simulations for several growth laws at constant nucleation rate , .as typical for this kind of study , , the simulation is performed on a lattice ( square in our case ) where , in order to mimic the continuum case , the lattice space is much lower than the mean size of nuclei .in particular , the transformation takes place on a square lattice whose dimension is with a nucleation rate of .it is worth reminding that , since the nucleation is poissonian , it occurs on the entire lattice independently of whether the space is already transformed or not .the computer simulation can be run taking into account the presence of phantoms or not . in the former casethe outputs have been labeled as `` w '' while the latter as `` wo '' .as far as the growth laws are concerned , we limited ourself to the power laws , for and .the results of the simulations are displayed in figs.4a - c for the kjma - non - compliant growths .in particular , the fractional surface coverage, , as a function of the actual extended fraction , with and without the contribution of phantoms , are reported ( curves labeled with `` w '' and `` wo '' , respectively ) . , is shown as a function of the extended fraction , for the power law ( ) where , and in figures a , b and c , respectively .the kinetics with ( w ) and without ( wo ) the inclusion of phantoms are displayed . ]the contribution of phantom overgrowth to the transformation kinetics is highlighted in fig.5 and shows that this effect brings an uncertainty on which ranges from to on going from to . in the case of parabolic growththis figure is lower than . , and for , and . ]these results are in qualitative agreement with previous studies on phantom overgrowth , although performed for a different nucleation laws .as discussed in more details below , the results displayed in figs.4,5 are universal , i.e. they only depend on power exponent , , and nucleation law ( in the present case ) .accordingly , the lower the more important is phantom overgrowth .in fact , let us consider a phantom , which starts growing at time , located at from the center of an actual nucleus which starts growing at ( fig.6a ) . for kjma - non - compliant growth , ( with integer ) , the phantom overtakes the actual nucleus at time , that is the solution of the equation , namely denotes the location of the phantom which start growing at time when the size of the actual nucleus is .the phantom overtakes the actual nucleus at time when the size of the actual nucleus is . ] where and .the graphical solution of eqn.[ovegrowth ] is depicted in fig.6b and indicates that ( and therefore ) decreases with .this is in agreement with the results of fig.5 which shows that the overgrowth phenomenon is more important at greater .as far as the guess function eqn.[sim2 ] is concerned , it matches the simulation curves with a very high degree of correlation .for instance the output of the fit to the curve , gives , and a squared correlation coefficients practically .for the sake of completeness the and fitting parameters are shown in fig.7 , where is found to be nearly one .this is in agreement with the theoretical value predicted by eqn.[uncov2 ] . as a function of growth exponent , for kjma - non - compliant growths . ]the behavior of the transformed fraction for kjma - compliant growths are reported in fig.8 for , and . 
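A minimal version of the simulation just described might look as follows (grid size, nucleation probability and growth constant are illustrative, not the values quoted above). The run below discards nucleation attempts that land on already transformed sites, i.e. it produces a "wo" kinetics; keeping those attempts as growing phantoms instead would give the corresponding "w" curve. Boundary effects are ignored in this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
L, steps = 300, 25
p_nuc, v, n = 2e-5, 1.0, 1.5            # growth law r(t, t0) = v*(t - t0)**n

yy, xx = np.mgrid[0:L, 0:L]
transformed = np.zeros((L, L), dtype=bool)
nuclei = []                              # (x, y, birth time) of actual nuclei ("wo" run)
history = []                             # (actual extended fraction, transformed fraction)

for t in range(1, steps + 1):
    # Poissonian nucleation over the whole lattice; attempts landing on already
    # transformed sites would be phantoms and are simply discarded here.
    attempts = rng.random((L, L)) < p_nuc
    for y, x in zip(*np.nonzero(attempts & ~transformed)):
        nuclei.append((x, y, t))
    # grow every actual nucleus and update the transformed region
    extended = 0.0
    for (x0, y0, t0) in nuclei:
        r = v * (t - t0) ** n
        transformed |= (xx - x0) ** 2 + (yy - y0) ** 2 <= r ** 2
        extended += np.pi * r ** 2       # actual extended area (overlaps counted)
    history.append((extended / L**2, transformed.mean()))

for xe, x in history[::5]:
    print(f"X_e,a = {xe:6.3f}   X = {x:6.3f}")
```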
.in the graph the kinetics for is compared with the kjma - compliant growth with , and ( from the top , respectively ) . in the insetthe kinetics for and are displayed together with the truncated kjma series expansions eqn.[serie3 ] and eqn.[parabola1 ] , respectively . ]these kinetics are very close to each other and differs , markedly , from that at also reported in the same figure . in the inset ,the kinetics for and are compared with the kjma series expansion eqn.[parabola1 ] and eqn.[serie3 ] , respectively .the fact that the curves for , and collapse on the same curve , can be rationalized computing the coefficients of the series expansion of for integer . in particular , by employing the method discussed in the previous section the two last coefficients of the series ( e.g. eqn.[serie3 ] ) are , and , for and , respectively .we also performed computer simulations of phase transitions for non - constant actual nucleation rate .the output of this computation is displayed in fig.9 where the behavior of the nucleation rate is shown in the inset as function of .in particular , the actual nucleation rate is given by the function . to the kineticshas been shown as dashed line .the correlation coefficient of the fit is in fact for the parameters and . the actual nucleation rate , as a function of , is also reported on the right scale in units . ] also in this case the function eqn.[sim2 ] has been found to match the kinetics with high degree of correlation where , again , the independent variable is the actual extended surface fraction .let us address in more detail the question of the dependence of volume fraction on extended volume fraction . to this end , we discuss the second order term of the exact solution eqn.[uncov1 ] , namely where eqn.[area0 ] has been employed .we point out that in eqn.[changevar1 ] is a second order homogeneous function of , and variables .accordingly , for the growth law , using the re - scaled variables , , and the integral becomes eqn.[changevar2 ] takes the form where + )=\frac{\int_{0}^{1}d\tau'i_{a}(\tau')\int_{0}^{\tau'}i_{a}(\tau '' ) d\tau '' \int_{\delta(\tau')}a(r_{1}',\tau',\tau'')d\mathbf{r}'_{1 } } { ( \int_{0}^{1}i_{a}(\tau')(1-\tau')^{2n}d\tau')^{2}}$ ] depends on and actual nucleation rate .it is apparent that in the case discussed in the previous section , , as well as higher order coefficients , is a function of , only . in this casethe transformed fraction is expected of the form . on the other hand , in the case ofa constant `` phantom - included '' nucleation rate , , and eqn.[uncov1 ] becomes an integral equation for the unknown .with reference to the second order term , in this case eqn.[changevar2 ] takes the general form which now implies the series ( note that this is a series expansion in terms of phantom - included extended surface ) . in the specific case of kjma - compliant growths, however , these series reduces to the exponential series with constant coefficients .it is instructive to estimate the first two coefficients in the case of linear growth . for constant phantomincluded nucleation rate the untransformed fraction satisfies the integral equation , the first order term of this equation , namely of the order of , gives . 
by substituting in eqn.[integraleq1 ] ,we get using dimensionless variables , and eqn.[integraleq2 ] eventually becomes where is given through eqn.[aria1 ] .notably , the last term in eqn.[integraleq3 ] has been already estimated in eqn.[secondterm ] and is equal to .the coefficient of is eventually computed as , that is the expected result .it is worth noting that the present approach can also be applied to different convex shapes other than circles and spheres , provided the orientation of nuclei is the same ( with a possible exception for triangle ) .this aspect has been discussed in details in refs., .we conclude this section by quoting the recent results of ref. . in this noteworthy contributionthe author faced the problem of describing the kinetics in terms of the actual nucleation rate .an ingenious application of the so called `` differential critical region '' approach makes it possible to find the kinetics by solving an appropriate integral equation . on the other hand, the different method employed in the present work , based on the use of correlation function , pertains to the same class of stochastic approaches on which `` kolmogorov s method '' is rooted .it could be enlightening , for the present topic , to demonstrate that the two approaches are in fact equivalent .we have shown that , employing the correlation function approach , the constraints on growth laws underlying the kjma theory can be eliminated . in other words , the present modeling is not constrained to any form of the growth law . the actual extended volume fraction is shown to be the natural variable of the kinetics , which implies universal curves . besides , we proposed a formula to fit experimental data by using the measurable actual extended coverage .the displacement of the kinetics from the exponential law , i.e. the parameter in eqn.[sim2 ] , may give insights into the microscopic growth law of nuclei .kolmogorov , _ bull .urss ( cl ._ , * 3 , 355 ( 1937 ) ; selected works of a.n .kolmogorov , edited by a.n .shiryayev ( kluwer , dordrecht , 1992 ) , engish translation vol.2 , pag.188 . *
The Kolmogorov-Johnson-Mehl-Avrami (KJMA) theory of phase-transition kinetics is subject to severe limitations concerning the functional form of the growth law. This paper sidesteps this drawback through the use of a correlation-function approach. Moreover, we put forward an easy-to-handle formula, written in terms of the experimentally accessible actual extended volume fraction, which is found to match several types of growth laws. Computer simulations have been carried out to corroborate the theoretical approach.
recently , there has been much interest on two - way relay networks ( twrns ) in which two source nodes and without a direct link communicate with each other via a relay node .the architecture of twrns makes it possible to better exploit the channel multiplexing of uplink and downlink wireless medium .the source nodes initially send their data to the relay node .the received data is combined employing a certain method according to the amplify - and - forward ( af ) or the decode - and - forward ( df ) mode and gets broadcasted from the relay back to both source nodes . with the application of network coding and channel estimation techniques , and can perform self - interference cancelation and remove their own transmitted codewords from the received signal .four time slots needed in a traditional one - way transmission for the forward and backward channels to accomplish one - round information exchange between and via the relay node can be reduced to two in twrns by comparison . in a realistic multi - user wireless network , e.g. is-856 system which has more relaxed delay requirements , the transmission power is fixed while the rate can be adapted according to the channel conditions . moreover , automatic repeat request ( arq ) techniques have been applied to improve the transmission reliability above the physical layer . prior works , also show there is a compromise between transmission rate and arq such that the network average successful throughput , i.e. , the goodput , can be maximized at an optimal rate .in addition to the goodput analysis , we in this paper are interested in energy - efficient operation . in such cases , the energy consumption due to retransmission should also be taken into account to evaluate the energy efficiency with respect to . hence , we investigate the joint optimization of ( at the physical layer ) and the number of arq retransmissions ( at the data - link layer ) by adopting a cross - layer framework in twrns . the remainder of this paper is organized as follows .section ii introduces the twrn model , channel assumptions as well as its general working mechanism . in sectionsiii and iv , we investigate the markov chain model under both af and df modes to derive the analytical expressions for the bit energy consumption and goodput . in sectionv , the numerical results are shown to compare the system performance in af and df modes .section vi provides the conclusion .in figure 1 , we depict a 3-node twrn where source nodes and can only exchange information via the relay node . codewords and from and , respectively , have equal length and unit energy .all nodes are working in half - duplex mode and the channels between and the relay , and and the relay are modeled as complex gaussian random variables with distributions and . 
without loss of generality , we also assume channel reciprocity such that and have identical distributions as and , respectively .odd and even time slots have equal length , which is the time to transmit one codeword , and are dedicated for uplink and downlink data transmissions , respectively .ack and nack control packets are assumed to be always successfully received and the trivial processing time is ignored .additive gaussian noise at the receiver terminals is modeled as .there are two more key assumptions : 1 ) channel codes support communication at the instantaneous channel capacity levels , and outages , which occur if transmission rate exceeds the instantaneous channel capacity , lead to packet errors and are perfectly detected at the receivers ; 2 ) depending on whether packets are successfully received or not , ack or nack control frames are sent and received with no errors . based on above network formulations, we can further discuss the twrn working procedure according to the current network states under af and df relay schemes , and find out the inherent impact of the transmission rate on network performances .the twrn in af mode can be visualized as two bi - directional cascade channels where in the odd time slots , and send individual codewords simultaneously to the relay and the signals are actually superimposed in the wireless medium .the relay will then amplify the received signals proportional to the average received power and broadcast the combined signals back to and in the even time slots . according to fig.[fig : twrn ] , the received signals at the relay in odd time slots is where , are the transmit power of and respectively. the relay will forward with a scaling factor which is where is the relay s transmit power . here, we normalize the variance of the channel between and as and by using a normalized distance factor while assuming the variances of the other links are proportional to where is the path loss coefficient , we then have and . at the end of even time slots , the received signals on and can be written as where , , and are i.i.d gaussian noise components . assuming the instantaneous channel state information is perfectly known at and , the self - interference part can be removed from and and the signals for decoding can be represented by the cascade channel instantaneous rate from to and from to are hence represented by to describe the network mechanism more accurately , we need to formulate the protocol of twrn in af mode as follows : 1 .each transmission round contains two consecutive time slots . in the odd slot , source nodes and transmit codewords to the relay with transmission rate bit / sec / hz , and the relay in the following even slot broadcasts back to source nodes .2 . 
at the end of one transmission round , and perform self - interference cancelation ( sic ) to subtract their own weighted messages , and decode and , respectively .if the decoding fails , an outage event will be declared on that cascade link .the outage event on the cascade link ( or ) is defined as the probability of the event ( or ) .ack or nack packets would be sent back to the relay based on successful transmission or outage .the relay will also notify and whether a new codeword or an old codeword should be ( re)transmitted in the next odd time slot with the control packet information .the network state transition diagram of the af twrn can be modeled as a markov chain as shown in fig .[ fig : af - markov ] , where the probability on each path denotes the probability of the transition between two states . and are defined as the outage probabilities on the cascade and links and are given by ( [ eq : af - outage0 ] ) and ( [ eq : af - outage ] ) can be determined using the cumulative distribution function of the random variable and and are independent exponential distributed with mean and , and is a constant . in this context , we know ,\,\hspace{0.3cm}\mu_{2}=e[|h_{r2}|^2 ] , \hspace{0.3 cm } for \hspace{0.3 cm } p_{12}\\ \mu_{1}=e[|h_{2r}|^2],\,\hspace{0.3cm}\mu_{1}=e[|h_{r1}|^2 ] , \hspace{0.3 cm } for \hspace{0.3 cm } p_{21}\\ \alpha=\frac{1}{\beta^2 } \end{array } \right.\end{aligned}\ ] ] with the given network parameters , and can be derived accordingly . in fig .[ fig : af - state ] , it is explicitly seen at the beginning of each odd time slot that both and transmit to the relay such that at the beginning of each even time slot , the relay is always in the ready - for - broadcasting state and will consequently transition to , , , or with certain probabilities .we first determine the following four equations according to the state transitions in the markov chain to derive the probability of each state : after solving the set of equations in ( [ eq : af - markov - equations ] ) , we obtain . we know from the inherent characteristics of the af twrn that the data exchange only happens in the broadcasting phases with successful packet delivery . therefore , the system goodput is defined similarly as in by ( [ eq : af - goodput ] ) indicates through the terms and that a higher transmission rate will result in higher packet error rates ( outage ) , leading to more arq retransmissions which equivalently reduce the data rate .intuitively , a balance between transmission rate and the number of arq retransmissions needs to be found such that the goodput is maximized .energy efficiency has always been a major concern in wireless networks .recently , power or energy efficiency in wireless one - way relay networks have been extensively studied . in ,the average bit energy consumption is minimized by determining the optimal number of bits per symbol , i.e. , the constellation size , in a specific modulation format .similarly as in one - way relay channels , the outage probabilities in twrns are functions of the transmission rate .for instance , there could be an increased number of outage events on the cascade channels when codewords are transmitted at a high rate .in such a case , more retransmissions and higher energy expenditure are needed to accomplish the reliable packet delivery . 
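The balance between rate and retransmissions can be illustrated with a short Monte Carlo sketch for a single cascade link before we formalize the energy metric. The exact cascade SNR entering ([eq:af-outage0]) is replaced here by the commonly used two-hop amplify-and-forward approximation gamma_eff = gamma_1*gamma_2/(gamma_1 + gamma_2 + 1), prefactors and the asymmetry of the two links are dropped, and the goodput of one direction is simply taken as R(1 - p_out(R)), so the numbers are only indicative of the trend.

```python
import numpy as np

rng = np.random.default_rng(3)
snr_hop = 10.0 ** (15.0 / 10.0)                # 15 dB average SNR per hop (illustrative)
trials = 200000
g1 = snr_hop * rng.exponential(size=trials)    # Rayleigh fading: |h|^2 ~ Exp(1)
g2 = snr_hop * rng.exponential(size=trials)
snr_eff = g1 * g2 / (g1 + g2 + 1.0)            # common two-hop AF approximation

rates = np.linspace(0.25, 6.0, 24)
p_out = np.array([np.mean(np.log2(1.0 + snr_eff) < R) for R in rates])
goodput = rates * (1.0 - p_out)                # successfully delivered bits per attempt
print(f"optimal rate ~ {rates[np.argmax(goodput)]:.2f} bit/s/Hz,",
      f"max goodput ~ {goodput.max():.2f}")
```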
therefore , we are interested in a possible realization of twrn operation , which can provide a well - balanced performance on both the goodput and the required energy .we evaluate this by formulating the average bit energy consumption required for successfully exchanging one information bit between and .we then evaluate by considering long - term transmissions on twrn .regardless of the previous state , whenever the relay is in state and is broadcasting , the resulting state would be any of the other four states previously described .assuming there are rounds of two - way transmission , each of which consists of a pair of consecutive time slots and each codeword has bits .therefore , with , where is the number of transmission round corresponding to state , the average bit energy consumption could be derived as the ratio of total bits successfully exchanged over total energy consumption : df twrn differs from the af twrn in that there is a crucial intermediate decoding procedure at the the relay when it has received the codeword from the uplink transmission .if both source nodes are allowed to send codewords to the relay simultaneously in the uplink transmission , the decoding at the relay has to deal with the multiple access problem in a realistic application , with successive interference cancelation techniques . to reduce the hardware complexity and increase the feasibility of implementation, we hereby adopt the df twrn mode from where the relay performs sequential decode - and - forward .the outage probabilities on , , , links are denoted as , , and . the protocol for sequential df twrnis described as follows : 1 . in the initial state ,the relay s buffer is empty and the relay first polls on until it receives codeword successfully with probability .then , the state moves to which means the relay holds in the buffer .otherwise , the state remains as with probability .if the relay already has , it starts polling .the state either changes to with probability upon successfully receiving , or stays in with .3 . when the relay has both and , it generates a new codeword according to a gaussian codebook of with equal power allocation .then at rate , it broadcasts to and , which will perform sic to decode and , respectively .accordingly , the state will transit to , , or with corresponding probabilities , , or respectively .4 . at the beginning of next transmission round, the relay will decide to poll a new codeword from ( or ) based on the previous state being , ( or ) or just retransmits the old if the previous state was .the network state transition diagram of the df twrn can be modeled as in fig .[ fig : df - markov ] with detailed probabilities on each path .since the relay receives and decodes and at different time slots , the received signals at the relay from uplink transmissions can be represented as similar to ( [ eq : af - sic ] ) , the signals for decoding at and after sic has been performed can be written as similarly as in the discussion of the goodput of af twrn in section iii , the data exchange only occurs upon the successful signal receptions at and at the end of the broadcasting time slot .hence , initially , it is necessary to calculate the probability of being in state stays 3 , i.e. 
, .we start from calculating the outage probabilities on the forward and backward channels as the probabilities of buffer states can be solved by noting the following relations from fig .[ fig : df - markov ] : solving the equations in ( [ eq : df - markov - equations ] ) with given outage probabilities , we can obtain the following results for the buffer states : where the polynomial in the denominators is denoted by therefore , the system goodput in the df mode can be derived as in the df twrn is more complicated to calculate than in the af twrn where each transmission round has fixed power as can be seen in ( [ eq : af - eb ] ) .hence , in the df scenario , we have to separate the energy expenditure into two parts , energy consumption in the first stage and energy consumption in the second stage .the first stage denotes the state transition from any of 4 previous states to state , where the relay holds two codewords and in its buffer and is ready to broadcast .the second stage is that the relay broadcasts its newly generated codeword and the state transits back to any of the four states again .considering the relay s buffer is to be loaded with both codewords and from any of the previous states on the first stage , the energy consumption conditioned on the previous state , , , or on this particular transition will be on the second stage , the energy consumption for broadcasting is always , so the average bit energy consumption for one information bit successfully exchanged on the df twrn can be computed as whenever the state probabilities and outage probabilities are known .in this section , we present the numerical results to evaluate the system performance of twrn in both af and df modes .the network configurations are assumed to be as follows : relay is located in the middle between and which means .the power spectrum density of the gaussian white noise is and the channel bandwidth is set to hz .path loss coefficient is .we also assume the same transmit power for both source nodes and the relay , which is and define the snr by .firstly , we are particularly interested in how the goodput varies as a function of the transmission rate at specific snr values . in fig .[ fig : eff - r ] , and are plotted as functions of , with solid and dashed lines corresponding to af and df modes , respectively . on each curve with a given specific value ,it s immediately seen that the goodput first increases within low range and then begins to drop once the rate is increased beyond the optimal which maximizes the goodput .additionally , at low values of the rate , the af twrn has higher goodput , while beyond a certain rate , df starts to outperform , regardless of the snr value . to better illustrate the goodput performance in af and df modes , we look into transmission efficiency by defining a normalized rate and plotting it in fig .[ fig : normal - r ] .the normalized rate is always decreasing when increases at all snr s in both af and df .in other words , increasing outage probabilities due to increasing has eventually resulted in more arq retransmissions . specifically in the high snr scenario ,the normalized rate levels off between 0.6 and 0.7 in af and 0.9 and 1 in df , which means the transmission efficiency does nt change too much within this rate range .in addition , the af mode seems to have higher normalized rate in low rate range . 
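The DF curves in these comparisons can also be cross-checked with a slot-level simulation of the sequential DF protocol described above, which requires no analytical state probabilities at all. In the sketch below the four link outages are drawn as independent Bernoulli events with given probabilities (in the analysis they follow from the Rayleigh statistics and the rate R), slot duration and bandwidth are normalized to one, and energies are simply accumulated per transmitted slot; the finer per-transition energy accounting of the analysis is not reproduced.

```python
import numpy as np

def simulate_df_twrn(q1r, q2r, qr1, qr2, R=2.0, P1=1.0, P2=1.0, Pr=1.0,
                     rounds=200000, seed=0):
    """Slot-level Monte Carlo of the sequential DF protocol.  Link outages are
    independent Bernoulli events with the given probabilities; slot duration
    and bandwidth are normalized to one, so a delivered codeword counts R bits
    and every transmission slot costs the corresponding power as energy."""
    rng = np.random.default_rng(seed)
    slots = energy = bits = 0.0
    need_x1 = need_x2 = True               # codewords the relay still has to fetch
    for _ in range(rounds):
        while need_x1:                     # poll node 1 until x1 is decoded
            slots += 1; energy += P1
            need_x1 = rng.random() < q1r
        while need_x2:                     # poll node 2 until x2 is decoded
            slots += 1; energy += P2
            need_x2 = rng.random() < q2r
        slots += 1; energy += Pr           # broadcast the combined codeword
        ok1 = rng.random() >= qr1          # node 1 decodes x2 after SIC
        ok2 = rng.random() >= qr2          # node 2 decodes x1 after SIC
        bits += R * (ok1 + ok2)
        need_x2, need_x1 = ok1, ok2        # delivered codewords are replaced by new ones
    return bits / slots, energy / bits     # goodput [bit/s/Hz] and energy per delivered bit

print(simulate_df_twrn(q1r=0.1, q2r=0.1, qr1=0.1, qr2=0.1, R=2.0))
```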
in fig .[ fig : eb - r ] , we analyze the energy efficiency .we notice that the difference of the average bit energy consumptions between two modes is insignificant up until , but df stills has a better energy efficiency with a lower regardless of .however when compared with the corresponding points ( ) in fig .[ fig : normal - r ] , it is shown that even though df can achieve a slightly lower than af , it also suffers a lower transmission efficiency in the metric of lower normalized rate . above in low snr scenario of , af predominates with both higher normalized rate and lower until approaches about bits / sec / hz .similar results can be observed on the high snr scenarios also . in fig .[ fig : normal - snr ] , we study the impact of snr on the normalized rate in both af and df .basically the normalized rate is increasing as snr increases at all transmission rates . at low snr e.g. , df performs better than af .as snr approaches , the normalized rate gets close to 1 ( or 0.7 ) in af ( or df ) mode .consequently , one way to improve on the transmission efficiency is to increase snr in the twrn .considering overall impacts of rate and snr , we can always find an scheme for the twrn to achieve optimality in respect to the goodput , the average bit energy consumption or the transmission efficiency .in this paper , we have studied the two - way relay networks working in amplify - and - forward and decode - and - forward modes . in each mode, we set up a markov chain model to analyze the state transition in details .arq transmission is employed to guarantee the successful packet delivery at the end and mathematical expressions for the goodput and bit energy consumption have been derived .several interesting results are observed from simulation results : 1 ) the transmission rate can be optimized to achieve a maximal goodput in both af and df modes ; 2 ) generally the transmission efficiency is higher in af within a certain range , while the df can achieve a slightly higher energy efficiency instead ; 3 ) increasing snr will always increase the normalized rate regardless of .hence , it s possible the network performance be optimized in a balanced manner to maintain a relatively high goodput as well as a low .v. erceg , l. j. greenstein , s. y. tjandra , s. r. parkoff , a. gupta , b. kulic , a. a. julius , and r. bianchi , `` an emperically based path loss model for wireless channels in suburban environments , '' ieee _ j.select .areas commun , _ vol .17 , no . 7 , pp.1205 - 1211 , jul . 1999 .
In this paper, we study two-way relay networks (TWRNs) in which two source nodes exchange information indirectly, via a relay node, over Rayleigh fading channels. Both amplify-and-forward (AF) and decode-and-forward (DF) techniques are analyzed by means of a Markov chain model through which the network operation is described and investigated in depth. Automatic repeat-request (ARQ) retransmission is applied to guarantee successful packet delivery. Expressions for the bit energy consumption and the goodput are derived as functions of the transmission rate for a given AF or DF TWRN. Numerical results are used to identify the optimal transmission rates at which the bit energy consumption is minimized or the goodput is maximized, and the network performance in AF and DF modes is compared in terms of energy and transmission efficiency.
multiple - description coding ( mdc ) aims at creating separate descriptions individually capable of reproducing a source to a specified accuracy and when combined being able to refine each other .traditionally quantizer based mdc schemes consider only two descriptions . among the few vector quantizer based approaches which consider more than two descriptions are . in closed form expressions for the design of lattice vector quantizersare given for the symmetric case where all packet - loss probabilities and side entropies are equal .-channel system .descriptions are encoded at an entropy of , .the erasure channel either transmits the description errorless or not at all.,width=340 ] in iterative vector quantizer design algorithms are proposed for the asymmetric case where packet - loss probabilities and side entropies are allowed to be unequal .in this paper we consider the asymmetric case for an arbitrary number of descriptions , where the description is encoded at an entropy of , for , see fig .[ fig : nchannel ] .the total rate is then given by the sum of the entropies of the individual descriptions .due to the asymmetry , the total distortion depends not only on how many descriptions are received ( as is the case in the symmetric situation ) , but also on _ which _ descriptions make it to the decoder .we derive analytical expressions for the central and side quantizers which , under high - resolution assumptions , minimize the _ expected distortion _ at the receiver subject to entropy constraints on the total rate .in contrast to our design allows for simple adaptation of our quantizers to changing source - channel characteristics and entropy constraints , effectively avoiding iterative quantizer design procedures .let be an arbitrary i.i.d .source and let be a real lattice with voronoi regions , given by where is a realization of and we define , where denotes vector transposition .we consider one central lattice ( central quantizer ) and several sublattices ( side quantizers ) , where and , is the number of descriptions .the trivial case leads to a single - description system , where we would simply use one central quantizer and no side quantizers .we assume that sublattices are geometrically similar to , i.e. they can be obtained from by applying change of scales , rotations and possible reflections .the sublattice index , n_i\in \mathbb{z}^+ ] . to simplify the design of the index assignment map we assume sublattices are clean , specifically we require that no points of lies on the boundaries of the voronoi regions of .a source vector is quantized to the nearest reconstruction point in the central lattice .hereafter follows index assignments ( mappings ) , which uniquely map all s to reconstruction points in each of the sublattices .this mapping is done through a labeling function , and we denote the individual component functions of by . 
in other words , the injective map that maps into , is given by where and .each -tuple is used only once when labeling points in in order to make sure that can be recovered unambiguously when all descriptions are received .since lattices are infinite arrays of points , we adopt the procedure used in and construct a shift invariant labeling function , so only a finite number of points must be labeled .we generalize the approach of and construct a product lattice which has central lattice points and sublattice points from the sublattice in each of its voronoi regions .the voronoi regions of the product lattice are all similar so by labeling only central lattice points within one voronoi region of , the rest of the central lattice points may be labeled simply by translating this voronoi region throughout . without loss of generality ,we let and by construction we let be a geometrical similar and clean sublattice of as well as . with this choice of , we only label central lattice points within , which is the voronoi region of around origo . with thiswe get the following shift invariant property for all and all . using standard high - resolution assumptions for lattice quantizers , the expected central distortion can be expressed as where is the normalized second moment of inertia of the central quantizer and it can be shown that the side distortion for the description is given by the minimum entropy needed to achieve the central distortion is given by where is the component - wise differential entropy of the source .the side entropies are given by index assignment is done by a labeling function , that maps central lattice points to sublattice points .an optimal assignment minimizes the expected distortion when descriptions are received and is invertible so the central quantizer can be used when all descriptions are received . at the receiving side , is reconstructed to a quality that is determined by the received descriptions .if no descriptions are received we reconstruct using the expected value , ] and . from ( [ eq : expdist ] ) we see that the distortion may be split into two terms , one describing the distortion occurring when the central quantizer is used on the source , and one that describes the distortion due to the index assignment .an optimal index assignment minimizes the second term in ( [ eq : expdist ] ) for all possible combinations of descriptions .we can rewrite this term using the following theorem [ theo : sums ] for any we have see .the cost functional to be minimized can then be written as we minimize this cost functional subject to a constraint on the sum of the side entropies .we remark here that the side entropies depend solely on and and as such not on the particular choice of -tuples . 
in other words , for fixed s and a fixed , the index assignment problem is solved if ( [ eq : costfunctional2 ] ) is minimized .the problem of choosing and such that certain entropy constraints are not violated is independent of the assignment problem and deferred to section [ sec : optq ] .the first term in ( [ eq : costfunctional2 ] ) describes the distance from a central lattice point to the weighted centroid of its associated -tuple .the second term describes the weighted sum of pairwise squared distances ( wspsd ) between elements of the -tuples .it can be shown , c.f .proposition [ prop : growthriemann2 ] , that , under a high - resolution assumption , the second term in ( [ eq : costfunctional2 ] ) is dominant , from which we conclude that in order to minimize ( [ eq : costfunctional2 ] ) we must use -tuples with the smallest wspsd .these -tuples are then assigned to central lattice points in such a way , that the first term in ( [ eq : costfunctional2 ] ) is minimized .this problem can be posed and solved as a linear assignment problem . to obtain -tupleswe center a region around all sublattice points , and construct -tuples by combining sublattice points from the other sublattices ( i.e. ) within in all possible ways and select the ones that minimize ( [ eq : costfunctional2 ] ) . for each is possible to construct different -tuples , where is the number of sublattice points from the sublattice within the region .this gives a total of -tuples when all are used .let be the volume of .since and we need -tuples for each , we see that so in order to obtain at least -tuples , the volume of must satisfy for the symmetric case , i.e. , , we have , which is in agreement with the results obtained in . by centering around each , we make sure that the map is shift - invariant .however , this also means that all -tuples have their first coordinate ( i.e. ) inside . to be optimalthis restriction must be removed which is easily done by considering all cosets of each -tuple .the coset of a fixed -tuple , say where , is given by , for all .the -tuples in a coset are distinct modulo and by making sure that only one member from each coset is used , the shift - invariance property is preserved .before we outline the design procedure for constructing an optimal index assignment we remark that in order to minimize the wspsd between a fixed and the set of points it is required that forms a sphere centered at . 1 . center a sphere at each and construct all possible -tuples where and .notice that all -tuples have their first coordinate ( ) inside and they are therefore shift - invariant .make large enough so at least distinct -tuples are found for each .2 . construct cosets of each -tuple .the central lattice points in must now be matched to distinct -tuples .this is a standard linear assignment problem where only one member from each coset is ( allowed to be ) matched to a central lattice point in . as observed in ,having equality in ( [ eq : vtilde ] ) , i.e. using the minimum , will not minimize the wspsd .instead a slightly larger region must be used . forthe _ practical _ construction of the -tuples this is not a problem , since we simply use e.g. twice as large a region as needed and let the linear assignment algorithm choose the optimal -tuples .however , in order to _ theoretically _ describe the performance of the quantizers we need to know the optimal . 
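As a concrete illustration of the assignment step, the following toy example builds the cost matrix for a one-dimensional central lattice Z with sublattices 3Z and 5Z and lets a standard linear assignment routine pick one distinct 2-tuple per central point. The weights standing in for the packet-loss dependent factors of ([eq:costfunctional2]) are arbitrary, and the coset bookkeeping described above is omitted for brevity.

```python
import numpy as np
from itertools import product
from scipy.optimize import linear_sum_assignment

# 1-D toy instance: central lattice Z, sublattices 3Z and 5Z (N1 = 3, N2 = 5,
# so the product lattice 15Z has N_pi = 15 central points per Voronoi cell).
w1, w2 = 0.6, 0.4                            # stand-in weights (not derived from p_i here)
centrals = np.arange(15)                     # central points in one product-lattice cell
cand1 = np.arange(-15, 31, 3)                # sublattice-1 points in an enlarged region
cand2 = np.arange(-15, 31, 5)                # sublattice-2 points in an enlarged region
tuples = list(product(cand1, cand2))

cost = np.empty((len(centrals), len(tuples)))
for i, c in enumerate(centrals):
    for j, (l1, l2) in enumerate(tuples):
        centroid = w1 * l1 + w2 * l2                         # weighted centroid of the 2-tuple
        cost[i, j] = (c - centroid) ** 2 + w1 * w2 * (l1 - l2) ** 2

rows, cols = linear_sum_assignment(cost)                     # one distinct 2-tuple per central point
for i, j in zip(rows[:5], cols[:5]):
    print(centrals[i], "->", tuples[j])
```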
in an expansion factor introduced and used to describe how much had to be expanded from the theoretical lower bound ( [ eq : vtilde ] ) , to make sure that the optimal -tuples could be constructed by combining sublattice points within the region . adopting this approach leads to where e.g. for the two - dimensional case .analytical expressions for are given in .in this section we derive high - resolution approximations for the expected distortion . however , we first introduce proposition [ prop : riemann2 ] which relates the sum of distances between pairs of sublattice points to , the dimensionless normalized second - moment of an -dimensional sphere .hereafter follows proposition [ prop : growthriemann2 ] which determines the dominating term in the expression for the expected distortion .[ prop : riemann2 ] for and , we have for any pair of sublattices , , & \approx \psi^{2/l}\nu^{2/l } g(s_l ) n_\pi\prod_{m=0}^{k-1}n_m^{2/l(k-1)}. \end{split}\ ] ] let , i.e.the set of sublattice points associated with the central lattice points within .furthermore let be the set of unique elements of , where .finally , let so that contains all the elements which are in the -tuples that also contains a specific .let be the set of unique elements. for sublattice and we have observe that each is used times , so given , we have since .hence , with , we have & \approx n_i\nu_j \psi^{2/l } \nu^{2/l}g(s_l)\prod_{m=0}^{k-1}n_m^{2/l(k-1 ) } , \end{split}\ ] ] which is independent of , so that which completes the proof . [ prop : growthriemann2 ] let be chosen such that for all . for and have see .the expected distortion ( [ eq : expdist ] ) can by use of theorem [ theo : sums ] be written as by use of propositions [ prop : riemann2 ] , [ prop : growthriemann2 ] and eq .( [ eq : d0 g ] ) it follows that ( [ eq : da ] ) can be written as where depends on and and is given by the total expected distortion is obtained by summing over including the cases where and , \prod_{i=0}^{k-1}p_i \\ & \quad + \psi^{2/l}\nu^{2/l } g(s_l)\prod_{m=0}^{k-1}n_m^{2/l(k-1)}\hat{\beta } , \end{split}\ ] ] where and . using ( [ eq : rc ] ) and ( [ eq : ri ] ) we can write the expected distortion as a function of entropies , which leads to \prod_{i=0}^{k-1}p_i\\ & + \psi^{2/l } \hat{\beta } g(s_l)2^{2(h(x)-r_c)}2^{\frac{2k}{k-1}\left(r_c - \frac{1}{k}\sum_{i=0}^{k-1}r_i\right)}. \end{split}\ ] ]in this section we consider the situation where the total bit budget is constrained , i.e. we find the optimal scaling factors , and , subject to entropy constraints on the sum of the side entropies , where is the target entropy .we also find the optimal bit - distribution among the descriptions .first we observe from ( [ eq : daopt ] ) that the expected distortion depends upon the _ sum _ of the side entropies and not the individual side entropies . in order to be optimalit is necessary to achieve equality in the entropy constraint , i.e. . from ( [ eq : ri ] ) we have which can be rewritten as where is constant for fixed target and differential entropies .writing ( [ eq : tmptau ] ) as and inserting in ( [ eq : adistopt ] ) leads to \prod_{i=0}^{k-1}p_i \\ & \quad + \psi^{2/l}\nu^{-2/l(k-1)}\tau_*^{2/l(k-1 ) } g(s_l)\hat{\beta}. \end{split}\ ] ] the optimal is found by differentiating ( [ eq : edisttmp ] ) w.r.t . 
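For reference, G(S_l), the dimensionless normalized second moment of an l-dimensional sphere appearing in proposition [prop:riemann2] and in the distortion expressions above, is easily tabulated from the closed form G(S_l) = Gamma(l/2 + 1)^(2/l) / ((l + 2) pi):

```python
import numpy as np
from scipy.special import gamma

def sphere_second_moment(L):
    """Dimensionless normalized second moment G(S_L) of an L-dimensional ball:
    G(S_L) = Gamma(L/2 + 1)**(2/L) / ((L + 2) * pi)."""
    return gamma(L / 2.0 + 1.0) ** (2.0 / L) / ((L + 2.0) * np.pi)

for L in (1, 2, 3, 4, 8, 24):
    print(L, round(sphere_second_moment(L), 5))
# G(S_1) = 1/12 ~ 0.08333 (an interval), G(S_2) = 1/(4*pi) ~ 0.07958 (a disk),
# and the values decrease slowly towards 1/(2*pi*e) ~ 0.05855 as L grows.
```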
, equating to zero and solving for , which leads to at this point we still need to find expressions for the optimal ( or , equivalently , optimal given ) .let , where , hence .from ( [ eq : ri ] ) we have which can be rewritten as where , after inserting the optimal from ( [ eq : optnurt ] ) we obtain an expression for the optimal index value , that is it follows from ( [ eq : ri ] ) that so that . in addition , since rates must be positive , we obtain the following inequalities hence , the individual side entropies can be arbitrarily chosen as long as they satisfy ( [ eq : ai ] ) and .to verify theoretical results we present in this section experimental results obtained by using two - dimensional zero - mean unit - variance gaussian source vectors .[ fig : a2k4_perf ] shows the theoretical expected distortion ( [ eq : adistopt ] ) and the numerical expected distortion obtained for descriptions when using the quantizer at a total entropy bits / dimension . in this setupwe have and packet - loss probabilities are fixed at except for which is varied in the range \%$ ] .as is varied we update according to ( [ eq : optnurt ] ) and arbitrarily pick the index values such that .however , index values are restricted to a certain set of integers and the side entropies might therefore not sum exactly to . to make surethe target entropy is met with equality we then rescale as .we see from fig . [fig : a2k4_perf ] a good correspondence between the theoretically and numerically obtained results .[ fig : a2k4perf ]this research is supported by the technology foundation stw , applied science division of nwo and the technology programme of the ministry of economics affairs .v. a. vaishampayan , n. j. a. sloane , and s. d. servetto , `` multiple - description vector quantization with lattice codebooks : design and analysis , '' _ ieee trans ._ , vol .47 , no . 5 , pp .1718 1734 , july 2001 .j. stergaard , j. jensen , and r. heusdens , `` entropy constrained multiple description lattice vector quantization , '' in _ proc .acoust . , speech , and signal proc ._ , vol . 4 , may 2004 , pp601 604 .
We present analytical expressions for optimal entropy-constrained multiple-description lattice vector quantizers which, under high-resolution assumptions, minimize the expected distortion for given packet-loss probabilities. We consider the asymmetric case, where packet-loss probabilities and side entropies are allowed to be unequal, and find optimal quantizers for any number of descriptions in any dimension. We show that the normalized second moments of the side quantizers are given by that of an -dimensional sphere, independent of the choice of lattices. Furthermore, we show that the optimal bit distribution among the descriptions is not unique; in fact, within certain limits, bits can be distributed arbitrarily.
the electron density is a fundamental parameter in plasma physics .knowledge of the three - dimensional ( 3d ) electron density structure is very important for our understanding of physical processes in the solar corona , such as the coronal heating and the acceleration of the solar wind ( _ e.g. _ , ; ) .the density structure of the corona strongly affects the propagation of cmes .the density is also important for estimates of the alfvn mach number and compression rate of cme - driven shocks , and for the interpretation of solar radio emission such as type ii and type iv radio bursts produced by coronal eruptions .the k corona arises from thomson scattering of photospheric white light from free electrons ( _ e.g. _ , ) .because the emission is optically thin , the measured signal is a contribution from electrons all along the line of sight ( los ) .the derivation of the electron density in the k corona from the total brightness ( ) or polarized radiance ( ) is a classical problem of coronal physics , first addressed by and . because of difficulties in generally separating the k - coronal component from the f - coronal component arising from interplanetary dust scattering ( _ e.g. _ , ; ) , most of the inversion techniques are practical for measurements . the f - coronal polarization is not very well understood ( see reviews by ; ) .it is generally accepted that the polarized contribution of the f corona can be ignored within 5 , but some observations show that the f corona is almost unpolarized even at elongations ranging from 10 to 16 ( , ; ) .in addition , the f corona dominates the total brightness of the corona beyond about 4 or 5 and make it difficult to recover the much fainter k - corona emission . to retrieve the electron density of the corona from a single 2d -image , one needs to assume some special geometries for the distribution of electrons along the los .the previous studies have modeled the electron density distribution in several ways , including the simple spherically symmetric model , the axisymmetric model , or the models that take into account large - scale structures , such as polar plumes in the coronal holes , or active streamers in the equatorial regions . among the above inversion methods , the spherically symmetric inversion ( ssi ) developed by ( called the _ van de hulst _ inversion thereafter ) , was the most representative and commonly used .he found that the density integral for signals becomes invertible if the latitudinal and azimuthal gradients in electron density are weaker than the radial gradient ( _ i.e. _ , a local spherical symmetry approximation ) .this classic inversion technique has been applied to establish the standard density models of the coronal background at equator and pole in the solar minimum and maximum ( see ) and the density models of near - symmetric coronal structures such as streamers and coronal holes .the ssi method was also used to analyze detailed density distribution of fine coronal structures observed in eclipses ( _ e.g. 
_ , ; ) , and to derive the 2d density distribution of the entire corona , when the spherical symmetry is assumed holding locally .the importance of the ssi method for coronal density determination has been demonstrated by wide applications of the derived densities such as in testing models of the acceleration mechanism of the fast solar wind , interpreting sources of type ii and type iv radio bursts , and determining the coronal magnetic field strengths from fast magnetosonic waves by global coronal seismology . however , in contrast to extensive applications , studies on evaluation of the ssi method are few in the literature .compared the white light densities to those determined from the density - sensitive euv line ratios of si 350/342 observed by soho / cds , and found that densities determined from these two different analysis techniques match extremely well in the low corona for a very symmetric solar minimum streamer structure .similarly , compared densities of various coronal structures determined by inverting mlso mk4 maps and from the line ratios of o 1032/1037.6 observed by soho / uvcs , and found that the mean densities in a streamer by the two methods are consistent , while the coronal densities for a coronal hole and an active region are within a factor of two .these results are encouraging , and suggest that the 2d white - light density distribution in coronal structures can be very useful for other studies , but a detailed assessment is required for its better application , such as information about the limitation of the ssi method and the uncertainty of derived densities . to achieve this goal, one may use synthetic images from 3d densities of the corona reconstructed by tomographic techniques (;, ; ; ; ; ) , or simulated by global 3d mhd models .the tomographic technique is a sophisticated method which reconstructs optically thin 3d coronal density structures using observations from multiple viewing directions .the use of this method in solar physics was previously proposed by , and later this method has been applied to the soho / lasco and stereo / cor1 data . for a solar coronal tomography based on observations made by a single spacecraft or only from the earth - based coronagraph , data typically need to be gathered over a period of half solar rotation , so , generally , only structures that are stationary over about two weeks can be reliably reconstructed .therefore , this technique is not applicable to eclipses or , perhaps , periods of high level of solar activity , although the 3d coronal electron density can be routinely computed .similarly , as global mhd models of the corona need measurements of photospheric magnetic field data over a solar rotation , it is also difficult using the mhd method to reconstruct dynamic or rapidly - evolving coronal structures matching to observations. thus , the ssi analysis could be very useful in some cases when the tomography is not suitable .in addition , the ssi method is also useful in order to investigate the coronal density variability over a long term period ( several solar cycles ) when modern quality synoptic observations were not available and relate it to the modern state of the art reconstructions . in this study, we choose the 3d coronal density obtained by tomography as a model in order to estimate uncertainties of the ssi method . 
compared the tomographic reconstruction and a 3d mhd model of the corona , and found that at lower heights the mhd models have better agreement with the tomographic densities in the region below 3.5 , but become more problematic at larger heights .they also showed that the tomographic reconstruction has more smaller - scale structures within the streamer belt than the model can reproduce .moreover , the tomographic reconstruction is entirely based on _ coronal _ observations , while the mhd models are primary based on photospheric boundary conditions .this suggests that the tomographic reconstructions are more realistic and thus may be more suitable to be used as a model in order to evaluate uncertainties of the ssi method .this article is organized as follows .section [ sctssi ] describes two ssi methods and their relationship .section [ sctrlt ] presents the evaluation of the ssi method .we demonstrate the 3d density reconstruction by the ssi method based on real data in section [ sctd3d ] .the discussion and conclusions as well as the potential extension of our work are given in section [ sctdc ] .following derivations in , at a point , p , on the plane of sky ( pos ) can be expressed as , \frac{\rho^2}{r^2 } n(s)ds , \label{eqpb}\ ] ] where the pos is defined as a plane crossing the sun s center and perpendicular to the los , is the electron density , is the perpendicular distance between the los and sun center , is the radial distance from the sun center , and is the distance measured from p along the los .these distances are related by . is the mean solar brightness . , is the thompson scattering cross section for a single electron , where is the classical electron radius .note that is the thompson scattering cross section " as referred to in . is the limb darkening coefficient .in addition , and are geometrical factors given by , \label{eqbr}\end{aligned}\ ] ] where the angle is defined by .if electron density is a function of only , equation ( [ eqpb ] ) can be written in the form (r)\frac{\rho^2 dr}{r\sqrt{r^2-\rho^2 } } , \label{eqpbr}\ ] ] where and are in units of , and .developed a technique for the inversion of measurements ( called the _ van de hulst _ inversion ) . to implement this technique, one needs first to fit the data using a curve in the polynomial form , specifically , once the coefficients are determined , the solution of equation ( [ eqpbr ] ) is then given by where where is the gamma function . note that the constant here is different from that in , because we have used the expression of in .more generally , the index of radial power law in equation ( [ eqpbsum ] ) is real but not necessarily integer valued .this leads to the modified van de hulst technique assuming , where and are the fit coefficients , then and in equation ( [ eqnr ] ) need to be replaced with and ( _ e.g. _ , ; ; ). developed another ssi technique by assuming the radial electron density distribution in the polynomial form , , and have used it to the inversion of total brightness observations .we here adopt this technique for the inversion , and call it the spherically symmetric polynomial approximation ( sspa ) method for short . 
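As a numerical illustration of the line-of-sight integral in equation (eqpbr), the sketch below evaluates pB(rho) for a given radial density model using the substitution r = rho/cos(theta), which removes the square-root singularity at r = rho (the same change of variable used for the SSPA kernel below). The full Thomson-scattering/limb-darkening factor, i.e. the bracketed combination of A(r), B(r) and the limb-darkening coefficient, is collapsed into a placeholder weight of unity here, and the density coefficients are arbitrary test values; this is a forward-model sketch, not the calibrated kernel.

```python
import numpy as np

def thomson_weight(r):
    """Placeholder for the geometric/limb-darkening factor (1-u)A(r)+uB(r);
    set to 1 here so only the rho^2/(r*sqrt(r^2-rho^2)) geometry is kept."""
    return np.ones_like(r)

def density(r, coeffs):
    """N(r) = sum_k a_k r**(-k), r in solar radii (illustrative model)."""
    return sum(a * r**(-k) for k, a in enumerate(coeffs, start=1))

def pb_of_rho(rho, coeffs, n_theta=400, const=1.0):
    """pB(rho) = const * Int_rho^inf w(r) N(r) rho^2/(r sqrt(r^2-rho^2)) dr.
    With r = rho/cos(theta) the kernel reduces to rho*dtheta, so the integral
    becomes const * rho * Int_0^{pi/2} w(r) N(r) dtheta."""
    theta = np.linspace(0.0, np.pi / 2.0 - 1e-6, n_theta)
    r = rho / np.cos(theta)
    return const * rho * np.trapz(thomson_weight(r) * density(r, coeffs), theta)

coeffs = [0.0, 1.0e6, 0.0, 0.0, 1.0e8]   # a_1..a_5, arbitrary test values (cm^-3)
rhos = np.linspace(1.6, 3.9, 24)         # impact distances in solar radii
pb = np.array([pb_of_rho(p, coeffs) for p in rhos])
```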
by substituting the polynominal for in equation ( [ eqpbr ] ) ,we obtain where (r^{-k})\frac{\rho^2 dr}{r\sqrt{r^2-\rho^2 } } , \label{eqgkr}\ ] ] for easier calculation , using the substitution the integral can be transformed into {\rm cos}^k\theta\,d\theta , \label{eqgktht}\ ] ] where is the angle from the pos to a direction from the sun center to a point of distance along the los . since the integral can be numerically calculated for all desired impact distances and exponents , substituting the observed curve along a radial trace for the left - hand side of equation ( [ eqpbg ] ) the only unknowns are the coefficients .we determine the coefficients by a multivariate least - squares fit to the curve of ( using the function of _ svdfit _ provided by idl , interactive data language ) .the radial distribution of electron density is then obtained by substituting the resulting coefficients directly into the polynomial form . to select appropriate degrees of polynominal we did some experiments using coronal density models for the region between 1.5 and 6 in , and found that choosing the first five terms ( ) can determine with the relative errors within 5% and reproduce measurements with the relative errors within 1% .we thus use the 5-degree polynomial fits for all the sspa inversions in our study . to look into the relationship between the sspa and van de hulst inversions theoretically ,we use taylor series to approximate the functions and in the case when with in unit of , for the taylor polynomial approximations of degree six above , the relative errors are less than 1% when . since and are on the order of for very large , by keeping only the terms of we have , which corresponds to the point source approximation .applying this approximation to equation ( [ eqnr ] ) for the van de hulst inversion , we obtain this verifies the polynomial form of assumed in the sspa method .conversely , if is given in the same form as equation ( [ eqnrapx ] ) , likewise we can recover as assumed in the van de hulst inversion from the sspa inversion using equations ( [ eqpbg ] ) and ( [ eqgktht ] ) with the point source approximation for .thus , we have proved that the sspa and van de hulst inversions are identical on the order of , and their difference in higher orders can be reduced by increasing the degrees of .note that the above analysis is also valid for the case when is real . 
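The SSPA fit itself reduces to a small linear least-squares problem. The sketch below builds the design matrix G_k(rho_i) by numerical quadrature over theta (with the same placeholder weight as in the forward-model sketch above, so the Thomson/limb-darkening factor would have to be supplied for real data), solves for the coefficients a_k with a linear least-squares routine (the role played by IDL's svdfit in the text), and evaluates N(r) as the five-term polynomial in 1/r.

```python
import numpy as np

def g_k(rho, k, weight=lambda r: np.ones_like(r), n_theta=400):
    """G_k(rho): LOS integral of r**(-k) against the pB kernel, evaluated with
    r = rho/cos(theta); 'weight' stands in for the Thomson factor."""
    theta = np.linspace(0.0, np.pi / 2.0 - 1e-6, n_theta)
    r = rho / np.cos(theta)
    return rho * np.trapz(weight(r) * r**(-k), theta)

def sspa_fit(rho, pb, kmax=5):
    """Least-squares fit pB(rho_i) ~ sum_k a_k G_k(rho_i); returns a_1..a_kmax."""
    G = np.array([[g_k(p, k) for k in range(1, kmax + 1)] for p in rho])
    a, *_ = np.linalg.lstsq(G, pb, rcond=None)
    return a

def sspa_density(r, a):
    """N(r) = sum_k a_k r**(-k)."""
    return sum(ak * r**(-k) for k, ak in enumerate(a, start=1))

# usage on a synthetic pB profile (e.g. the one produced in the sketch above):
# a = sspa_fit(rhos, pb)
# n_fit = sspa_density(np.linspace(1.6, 3.0, 50), a)
```

The five-term choice mirrors the degree-5 polynomial found adequate above; adding terms only changes the design matrix width.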
in section [ sctstrm ], we will show that the coronal densities determined from cor1 images by the sspa and van de hulst inversions with =5 are almost identical .therefore , the assessment of the sspa inversion in the following is equivalent to that of the van de hulst inversion .as a model for evaluation of ssi method we used a 3d coronal electron density obtained by tomography method applied to white - light coronagraph data .the data were acquired by the inner coronagraph ( cor1 ) telescopes aboard the twin _ solar terrestrial relation observatory _ ( stereo ) spacecraft .cor1 observes the white - light k corona from about 1.4 to 4 in a waveband of 22.5 nm wide centered on h line at 656 nm with a time cadence of 5 minutes .regularized tomographic inversion method by with the limb darkening coefficient =0.6 provided the reconstruction of a 3d coronal electron density for the period of 2008 february 114 ( consisting of 28 images from cor1-b ) that corresponds to carrington rotation ( cr ) 2066 .the scattered light in the data was removed by subtracting a combination of the monthly minimum and the roll minimum backgrounds .the reconstruction domain is a rectangular grid covering a spherical region between 1.5 and 4 .we evaluate the sspa inversion method using this 3d density reconstruction as a model . in order to synthesize images observed by cor1-a and -b at a given time , we first determine the orbital positions of the spacecraft in carrington coordinates using the routine provided by solarsoftware ( ssw ) .the 3d density grid is then transformed from the carrington heliographic system to the projected coordinate system viewed from stereo - a or -b .therefore , the los integral of in equation ( [ eqpb ] ) becomes a simple summation in the -direction ( defined along los ) .figure [ fig : necr ] shows the tomographic reconstructed 3d coronal electron density at 2 .figure [ fig : nemap ] shows the density distributions of the corona in the pos for cor1-a and -b at 12:00 ut on 8 february 2008 .figure [ fig : pbmap ] shows the corresponding synthetic images , which represent the ideal measurements without contamination by the scattered light and instrumental noises .coronal streamers are the most conspicuous , large - scale structures in the extended corona .the streamer belt is a usually continuous sheet of enhanced density associated with the magnetic neutral sheet or the current sheet (; ; ; , ) .its shape gets progressively deformed from a rather flat plane at minimum solar activity to a highly warped surface at maximum solar activity ( _ e.g. _ , ) .some previous studies have suggested that the ssi assumptions are suitable to symmetric streamers at low latitude , in particular , the solar minimum streamer belt ( _ e.g. _ , ; ; ) .here we first test the sspa inversion of the solar minimum streamer belt .a streamer belt during cr 2066 corresponding to solar minimum of solar cycles 23 is shown in figure [ fig : necr ] .for instance , we use the synthetic images at 12:00 ut on 8 february 2008 when the separation angle between stereo - a and -b was about 45 .figure [ fig : pbmap ] shows that the streamer belt on the east limb is almost edge - on , as inferred from the similar appearance in cor1-a and -b . in the edge - on condition , the coronagraph is looking along the streamer belt ; _ i.e. 
_ , where all the streamers are at the same latitude behind each other along the los .in contrast , the streamer belt on west limb in cor1-b is face - on , showing a distinctly different shape from that in cor1-a .we choose a radial trace near the middle of the east - limb streamer ( see marked positions in figure [ fig : pbmap ] ) , where the profiles for cor1-a and -b are nearly identical ( figure [ fig : pblc ] ) , and suppose that this location best meets the ssi condition .we then derive the radial distribution of electron density by fitting the data along this radial trace between 1.6 and 3.9 using both the sspa method and the van de hulst technique ( with =5 in ssw ) for an illustrated comparison , and show the inversion results in figure [ fig : pbinv ] .note that since cor1 does not perform well at .55 at some position angles due to interference from the occulter ( see ) , this led to defective signals and thus unreliable reconstructions within this region , so we constrain all sspa inversions in the following to regions with .we find that the electron densities obtained from both inversion methods agree well in radial distribution with the model 3d densities in the pos for selected longitudes corresponding to the edge - on streamer positions ( see figure [ fig : pbinv ] ( c ) and ( d ) ) .this confirms our theoretical prediction that the ppsa and the van de hulst inversions are equivalent .the average ratio of the sspa to the model densities along the selected radial cut between 1.6 and 3.0 is 0.86.06 and 0.95.10 for cor1-a and -b , respectively .note that weak increase in density with height between 3.5 and 4.0 may be due to fov effects ( see discussion in section [ sctdc ] ) .in addition , the lower panels of figure [ fig : pbinv ] show that both the tomographic and sspa density profiles determined for streamers are comparable ( with differences by % ) to the previous obtained by ssi method in the solar minimum . to examine towhat extent the sspa solution is consistent with the 3d density model , we compare the sspa density profile with the model density profiles in 13 angular sections to the pos within 30 in figures [ fig : pbinm](a ) and ( b ) , which show that the sspa solution is closest to the distribution of 3d densities in the pos .this is as expected because the integrals along the los are most heavily weighted toward the regions near the pos ( see ; ) .we evaluate the sspa inversion of the streamer as a function of longitude in the following .this corresponds to make the sspa inversion of synthetic images at different virtual observing times " .we synthesize the images at nine times for cor1-a and -b , in a period from 20:00 ut , 6 february to 04:00 ut , 10 february with intervals of 10 hours , centered at 12:00 ut on 8 february 2008 ( the time for the instance analyzed in the last section ) .the sun rotated by 44 over this period , approximately equal to the separation angle between stereo - a and b. so the ending - time location analyzed in cor1-a is approximately superposed with the starting - time location in cor1-b as shown in figure [ fig : necr ] .we fit the data along the same radial trace between 1.6 and 3.9 at these nine times using the sspa method . figures [ fig : neva ] and [ fig : nevb ] show comparisons of the obtained density profiles with the model densities in the pos for cor1-a and cor1-b , respectively . 
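The image-synthesis step described above, in which the LOS integral of equation (eqpb) becomes a simple summation along the observer's z-axis once the density cube has been rotated into the observer frame, can be sketched as follows. Only the rho^2/r^2 geometric factor of the Thomson kernel is kept and the grid, units and test density are hypothetical; for real work the rotation into the STEREO-A/B frame and the full scattering factor would replace these placeholders.

```python
import numpy as np

def synthesize_pb(ne, x, y, z, const=1.0):
    """LOS summation of a 3D electron-density cube ne[ix, iy, iz] given in an
    observer-aligned frame (z along the line of sight, x-y the plane of sky).
    Only the rho^2/r^2 factor is retained here; the limb-darkening terms would
    multiply 'weight' in a full implementation."""
    dz = z[1] - z[0]
    X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
    rho = np.sqrt(X**2 + Y**2)            # impact distance of each voxel's LOS
    r = np.sqrt(X**2 + Y**2 + Z**2)       # heliocentric distance of each voxel
    weight = np.where(r > 0, rho**2 / r**2, 0.0)
    return const * np.sum(weight * ne, axis=2) * dz

# hypothetical test cube: an r**-4 corona outside 1.5 solar radii
x = y = np.linspace(-4.0, 4.0, 129)
z = np.linspace(-4.0, 4.0, 129)
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
r = np.sqrt(X**2 + Y**2 + Z**2)
ne = np.where(r >= 1.5, 1.0e6 * r**-4, 0.0)
pb_image = synthesize_pb(ne, x, y, z)
```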
for the analyzed locations with longitude between and ( approximately between the limbs and as shown in figure [ fig : necr ] ) , we find that the density profiles derived by sspa method have the similar shape as the model profiles over almost the whole fov range ( 1.63.8 ) , but the magnitudes are smaller than the model by about 20%50% .the reason for good agreements during this period could be that the streamers in the los at the analyzed location are near the pos .the model density increases near the edge of fov seen in figures [ fig : neva](c)-(e ) and figures [ fig : nevb](g)-(i ) likely result from the fov effects in the tomographic inversion .figure [ fig : netim ] shows the sspa density as a function of longitude at five heights ( 1.6 , 2.0 , 2.5 , 3.0 and 3.5 ) .the comparison with the model confirms the result above that the better agreement lies at the locations with the longitude in 60 .we also find that the agreement tends to be better at lower heights .the average difference between the sspa and model densities are % for the region between 1.6 and 2.0 , while %50% for 2.53.5 .the reason could be that the ssi condition is better meet at lower heights where the streamer has a larger extent in the los , and thus the los integration in equation ( [ eqgktht ] ) over a very long distance ( compared to the width of the streamer ) becomes more reasonable .now we evaluate the latitudinal dependence of sspa inversions of the corona . by assuming that the ssi condition holds locally for all angular positions around the sun, the 2d coronal density can be derived from a image by fitting the radial profile at each angular position using the sspa method .for the case analyzed in section [ sctstrm ] , we reconstruct 2d coronal densities in the pos by fitting radial profiles between 1.6 and 3.7 for 360 position angles with intervals of 1 from both synthetic and observed images , and show the results in figure [ fig : ne2d ] .compared to the tomographic densities in the pos ( see figure [ fig : nemap ] ) , the sspa coronal densities determined from the observed images do not show very low - density regions around streamers , while those from the synthetic images have a small region of zero ( or negative ) densities at the southern side of the west - limb streamer , where the synthetic signals are very low ( below the background level in non - streamer regions ) .the zero density regions in tomographic reconstructions could be caused either ( or both ) due to coronal dynamics or due to real very small density in these coronal regions ( see discussions in section [ sctdc ] ) .for a quantitative comparison , we plot the sspa and model density profiles as a function of position angles at four heights ( 1.6 , 2.0 , 2.5 , and 3.0 ) in figure [ fig : neap ] . for the three edge - on streamers marked with s1s3, we measure their peak densities and angular widths ( in fwhm ) .the ratio of peak densities from the sspa to the model is on average 0.82.06 , 0.72.09 , and 0.62.12 , and the ratio of angular widths is on average 1.24.02 , 1.67.12 , and 1.90.27 at 2.0 , 2.5 , and 3.0 , respectively .the results indicate that the sspa inversion underestimates the peak density of streamers by about 20%40% , while overestimates the angular width by about 20%90% .the increase of deviations with height in the peak density of streamers by the sspa method from the model may be caused by narrowing of the streamer ( belt ) with height ( see figure 2 of ) . 
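The latitudinal scan described above, fitting one radial profile per position angle, amounts to resampling the pB image onto a (position angle, radius) grid and applying the SSPA fit column by column. The sketch below does the resampling by bilinear interpolation; the Sun-center position, plate scale and position-angle convention are placeholders to be read from the image header, and fit_func can be the sspa_fit sketched earlier.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_profiles(pb_image, xc, yc, rsun_pix, radii, n_angles=360):
    """Sample pB along radial traces at n_angles position angles (1-deg steps).
    radii are in solar radii; (xc, yc) is Sun center in pixels; rsun_pix is the
    solar radius in pixels (all assumed known from the header)."""
    pa = np.deg2rad(np.arange(n_angles))
    R, PA = np.meshgrid(radii * rsun_pix, pa, indexing='ij')
    cols = xc + R * np.cos(PA)                      # image column of each sample
    rows = yc + R * np.sin(PA)                      # image row of each sample
    return map_coordinates(pb_image, [rows, cols], order=1, mode='nearest')

def density_map(pb_image, xc, yc, rsun_pix, fit_func, radii=None):
    """2D density N(r, PA) from per-angle fits of the radial pB profiles."""
    if radii is None:
        radii = np.linspace(1.6, 3.7, 40)
    prof = radial_profiles(pb_image, xc, yc, rsun_pix, radii)
    ne = np.empty_like(prof)
    for j in range(prof.shape[1]):                  # loop over position angles
        coeffs = fit_func(radii, prof[:, j])        # e.g. sspa_fit above
        ne[:, j] = sum(a * radii**(-k) for k, a in enumerate(coeffs, start=1))
    return radii, ne
```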
as the streamer width alongthe los decreases with the height , the condition for the ssi assumption becomes worse .the reason for the latitudinal spreading of the sspa density relative to the model may lie in that the analyzed streamers are not exactly edge - on , _i.e. _ , the streamer belt at the analyzed locations is actually tilted somewhat away from the equatorial direction ( or the los near the limb , see figure [ fig : necr ] ) .in such a case at the locations near the edge of streamer in the image the contribution is larger from points along the los behind or in front of the pos , leading to an overestimation of the model density in the pos by the sspa inversion at these places .in addition , we notice for the face - on streamer ( marked s4 ) in cor1-b , its sspa density profiles are consistent with the model densities as well , this is because this streamer is by coincidence located near the pos at this instance ( figure [ fig : necr ] ) . in figure[ fig : neap ] , we also compare the sspa densities inverted from the synthetic image with those from the observed image , and find that they are generally consistent except in the region near the occulter of cor1 , where the sspa inversions from the observed are much larger ( times ) than the model densities ( see panels ( a ) and ( e ) in figure [ fig : neap ] ) .the version of tomographic reconstruction used here most likely underestimated the density there as the solution at grid points close to the occulter is less constrained by the observational data .in the above sections , we evaluated the sspa method by comparing the derived density distribution as a function of radial distance , longitude and latitude with the 3d tomographic model .the assessments indicate that the sspa inversion can determine the 2d coronal density from a image , which approximately agrees with the 3d tomographic densities for streamers near the pos .this suggests that we may reconstruct a 3d density of the corona by applying the sspa inversion to a 14 day data set from cor1-a or -b . in this section, we demonstrate the sspa 3d density reconstructions using the same data set ( consisting of 28 images from cor1-b ) as used for the reconstruction of 3d tomographic model , and the simultaneous cor1-a data set .the data cadence of about 2 images per day corresponds to a longitudinal step of about .first we determine the 2d density distribution by fitting the radial data between 1.6 and 3.7 using the sspa inversion at 142 angular positions ( with intervals of 2.5 ) surrounding the sun for each image .then we map the east - limb and west - limb density profiles of all images to make a synoptic map at a certain height , based on their carrington coordinates ( neglecting the inclination of the solar rotational axis to the ecliptic ) .a 3d density reconstruction is made of 25 synoptic maps for the radial heights from 1.5 to 3.9 with an interval of 0.1 .for each synoptic map , we convert the irregular grid into the regular grid using the idl function , _trigrid_. 
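The synoptic-map step described above, placing the east- and west-limb density profiles of all images on a (Carrington longitude, latitude) grid at each height and then regularizing the irregular longitude sampling (the role IDL's trigrid plays in the text), can be sketched as below. The sample bookkeeping (which limb contributes which Carrington longitude at each observation time) is assumed to have been done upstream and is not shown.

```python
import numpy as np
from scipy.interpolate import griddata

def build_synoptic_map(samples, n_lon=360, n_lat=180):
    """samples: iterable of (carrington_lon_deg, latitude_deg, density) triples
    collected from the east/west-limb profiles of all images at one height.
    Returns a regular (lat, lon) synoptic density map."""
    lon = np.array([s[0] % 360.0 for s in samples])
    lat = np.array([s[1] for s in samples])
    ne = np.array([s[2] for s in samples])
    lon_g, lat_g = np.meshgrid(np.linspace(0, 360, n_lon, endpoint=False),
                               np.linspace(-90, 90, n_lat))
    # scattered -> regular grid; nearest-neighbour fills the gaps left by the
    # roughly twice-daily limb sampling in longitude
    return griddata((lon, lat), ne, (lon_g, lat_g), method='nearest')

# the 3D reconstruction is then a stack of such maps, one per height:
# heights = np.arange(1.5, 4.0, 0.1)
# cube = np.stack([build_synoptic_map(samples_at[h]) for h in heights])
```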
although the 3d density reconstruction can be made using the sspa method from the cor1 data with a cadence as high as 5 minutes , the higher temporal resolution may not help improve its actual angular resolution in longitude due to intrinsic limitations of the sspa method ( see appendix ) .figure [ fig : synab ] shows the sspa - reconstructed 3d coronal density at 1.6 , 2.0 and 2.5 for cor1-a and -b .some discontinuities of the density are seen at two longitudes which separate the two regions made of the east - limb and west - limb inversions. these flawed structures may be due to temportal changes of streamers and/or the effect due to neglecting the solar axial tilt in the reconstruction .we smooth these discontinuities by averaging the two reconstructions from cor1-a and cor1-b and then make a smoothing . in figure[ fig : syncp ] , we show the ratio of tomographic density to the sspa average density for the cor1-a and -b reconstructions with smoothing for two cases , one for the sspa densities obtained using the synthetic data ( panels ( a)-(c ) ) , and the other for the sspa densities obtained using the real data ( panels ( d)-(f ) ) .both cases indicate that the density ratios in the streamer belt are very close to 1 , within a factor of two or so ( .51 at 1.6 , at 2.0 , and at 2.5 ) .the good consistency validates that the ssi assumptions are very appropriate for the streamer belt in the solar minimum . for a quantitative comparison between the tomography and the sspa densities ( obtained from real data ), we also show their density profiles along an equatorial cut in figure [ fig : synpf ] .we find that they best match at 2 .the same result is also indicated by the pixel - to - pixel scatter plots in figure [ fig : pxsct ] .we obtain the ratio of the sspa to the tomographic average density is 1.02 , and their linear pearson correlation coefficient is 0.93 for the reconstructions at 2 .in addition , the scatter plots show that the dispersion is larger for those pixels with smaller values , where the sspa densities are much larger than those obtained by the tomography .in this study , we have , for the first time , evaluated the ssi method using the 3d model of the coronal electron densities reconstructed by tomography from the stereo / cor1 data .our study is instructive for more efficient use of the ssi technique to invert the observations from ground- and space - based coronagraphs , in particular , the cor1 data .we demonstrate both theoretically and observationally that the sspa method and the van de hulst inversion are equivalent ssi techniques when the radial densities or signals are assumed in the polynomial form of high degrees ( more than two ) .the polynomial degree of five is suitable for cor1 data inversions .thus , assessment results of the sspa method can also be applied to the van de hulst inversion technique .we determine radial profiles of the streamer density from the cor1-a and -b synthetic images as well as their longitudinal and latitudinal dependencies .we find that the sspa density values are close to the model for the core of streamers near the pos , with differences ( or uncertainties if we regard the model input as a true solution ) typically within a factor of .this result is consistent with those evaluated using uv spectroscopy .we find that the sspa density profiles tend to better match the model at lower heights ( 2.5 ) .our results confirm the suggestion in some previous studies that the ssi assumption is appropriate for the edge - on streamers or the 
streamer belt during the solar minimum .we suggest that the edge - on condition for streamers may be determined by tomography method or by examining the consistency between simultaneous measurements from cor1-a and -b in radial distribution , when the two spacecraft have a small angular separation ( _ e.g. _ , less than 45 ) .we also find the sspa streamer densities are more spread in both longitudinal and latitudinal directions than in the model .we demonstrate the application of the sspa inversion for reconstructions of the 3d coronal density near the solar minimum , and show that the sspa 3d density for the streamer belt is roughly consistent in both position and magnitude with the tomographic reconstruction .the synoptic density maps derived by the sspa method show some discontinuities at the longitudes that separate the regions made of the east- and west - limb inversions. these discontinuities may be due to temporal changes of coronal structures and/or the effect of tilt of the solar rotation axis on the poles visibility that is not considered .they can be smoothed out during post - processing by smoothing and averaging the density distributions from cor1-a and -b , but such treatments will reduce the spatial resolution . in comparison , the tomographic inversion can fully take into account the tilt effect of the solar pole and produce the density distributions smoother in these discontinuity regions .we estimate that the sspa method may resolve the coronal density structure near the pos with an angular resolution of in longitudinal direction . given this limitation , the sspa reconstruction using data with higher cadence ( _ e.g. _ , more than three images per day ) would not help improve its actual angular resolution .although the current state of the tomographic method has allowed to routinely obtain the 3d coronal densities , we speculate that the ssi method could be complementary to the tomography when used for the interpretation of observations in such cases as during maximum of solar activity , in some regions where tomography gives zero density values , or in the regions near the edges of fov . the zero density values in the tomographic reconstruction ( so called zero - density artifacts " , zdas ) could be caused either ( or both ) due to coronal dynamics , or due to real very small density in these coronal regions which are below the error limit in the tomography method .the latter reason is also supported by results of mhd modeling . in the former case , the ssi method could be complementary to tomography , while in the latter case the sspa method gives much larger values than in the tomographic model .the fov artifacts in the tomography are due to the finite coronagraph fov that causes the reconstructed density to increase at the regions close to the outer reconstruction domain .however , this fov artifacts can be reduced by extending the outer reconstruction domain beyond the coronagraph fov limit .another way to obtain more correct density values in this region could be by using the sspa method which does not imply strict outer boundaries for los integration .thus the estimate of uncertainties of the sspa method should be limited to the regions with distances less than about 3.5 for the tomographic model used .also , the use of mhd model in order to produce artificial data can be useful for this test .however this will be a subject of future research .the tomography generally assumes that the structure of the corona is stable over the observational interval , _e.g. 
_ , two weeks of observations made by a single spacecraft , although for some coronal regions that are exposed to the spacecraft for only about a week during the observation the stationary assumption can be reduced to about a week .however , such an assumption is hard to meet during solar maximum or times of enhanced coronal activity .the cme catalog in the nasa cdaw data center shows that the cme occurrence rate increases from .5 per day near solar minimum to near solar maximum during solar cycle 23 .although our assessment results for the sspa method are based on a static coronal model , their validity may not be limited to the static assumption . because the key factor for the sspa inversion for obtaining a good estimate of the 2d coronal density is an instantaneous ( local ) symmetric condition for coronal densities along the los. the minimal size of this local symmetry is limited by the angular resolution in longitude which is about .therefore , the ssi method can be used to estimate the density of a dynamic coronal structure in terms of weighted average over the region with this angular size in longitude . for this reasonit may be a better choice to use a combination of the tomography and the ssi inversions for interpretations of radio bursts and shocks produced by cmes in the case when coronal structures of interest evolve quickly with time .in addition , the ssi method is also often applied to the cases when observational data are not suitable for tomography , _e.g. _ , solar eclipses .the 3d mhd models of the corona using the synoptic photospheric magnetic field data have been successfully used to interpret solar observations , including total eclipses and ground - based ( _ e.g. _ mlso / mk4 ) images of the corona ( _ e.g. _ , ; ) .we suggest that the evaluation of the ssi method based on such a global mhd model may be necessary in the future .the profits using such mhd models to estimate the uncertainties of the ssi method could be in avoidance of the zda and fov effects .the modeled corona also allows evaluations of the ssi method down to very low heights ( _ e.g. _ , the region between 1.1 and 1.5 ) where the corona is much more structured .moreover , a simulated time dependent corona from a time - evolving mhd model would allow us to estimate the uncertainties in tomography and the ssi inversion when they are applied to the dynamic corona , especially during the solar maximum .this needs detailed investigations in the future .* estimates of angular resolution of the sspa method in longitude * + to determine the angular resolution of the sspa method in the longitudinal direction , a numerical experiment is performed using 2d coronal models .we construct a 2d density model by first using the equatorial background density model to build a background corona of rotational symmetry in the equatorial plane , and then inserting two structures into it .the structures have the angular width of , the density contrast ratio of to the background , and the longitudinal profile following a step function or gaussian function . for the gaussian - type structure ,its fwhm is set as .figures [ fig : reso](a ) and ( b ) show the two types of 2d coronal models with about the same fov as cor1 , where two structures are separated by an angle of 2 , thus the width of the gap between them is also equal to in the step - profile case .we assume that the two structures ( marked a and b ) are located at the longitudes of and , respectively , i.e. 
defining their middle position as the origin of longitude .so for the cases shown in figures [ fig : reso](a ) and ( b ) , the origin of longitude is just located in the pos at the west limb .we synthesize data for the west - limb region from 1.5 to 4 at longitudes in the range from to 90 using equation ( [ eqpb ] ) , and then derive the electron density using the sspa method by fitting the synthetic data .the panels ( c)-(e ) show comparisons of the sspa density with the model density in the pos at 2 as a function of longitudes for different angular distances between two artificial structures with the density contrast ratio =10 .we find that for both types of the structure ( in the step or gaussian profile ) , the minimum resolvable distance ( or angular resolution ) is about 50 .to examine the effects of radial distance , structure width , and density contrast on the obtained resolution , we make a parametric study , and show the results in panels ( f)-(h ) .we find that the resolution is only slightly dependent on the radial distance and structure width , which is better for the lower heights and narrower widths , but almost independent on the density contrast .in addition , we also find that the obtained sspa peak density is about a half of the model density for the analyzed structures in most of the cases , which is consistent with our results for edge - on streamers .these numerical experiments suggest that for coronal structures with more smoothing profile and larger extension in the longitudinal direction and with lower density contrast to the background , which can be regarded as better conditions meeting the spherically symmetric assumption , the sspa solutions are more accurate .the work of tw was supported by the nasa cooperative agreement nng11pl10a to the catholic university of america and nasa grant nnx12ab34 g .we very much appreciate to dr .maxim kramar for his suggestions that led to an improved estimation of angular resolution of the sspa method in appendix .we also thank the anonymous referee for his / her valuable comments in improving the manuscript .airapetian , v. , ofman , l. , sittler , e. c. , kramar , m. : 2011 , _ astrophys .j. _ * 728 * , 67 .doi : 10.1088/0004 - 637x/728/1/67 allen , c. w. : 2000 , allen s astrophysical quantities ( 4th edition ) , arthur n. cox ( ed . ) , springer - verlag , isbn 0 - 387 - 98746 - 0 .barbey , n. , auchre , f. , rodet , t. , vial , j .- c . : 2008 , _ solar phys . _ * 248 * , 409 .doi : 10.1007/s11207 - 008 - 9151 - 6 barbey , n. , guennou , c. , auchre , f. : 2013 , _ solar phys ._ * 283 * , 227 .doi : 10.1007/s11207 - 011 - 9792 - 8 billings , d. e. : 1966 , a guide to the solar corona , academic press , new york .blackwell , d. e. , petford , a. d. : 1966a , _ mon . not .soc . _ * 131 * , 383 .ads : http://adsabs.harvard.edu/abs/1966mnras.131..383b blackwell , d. e. , petford , a. d. : 1966b , _ mon . not ._ * 131 * , 399 .ads : http://adsabs.harvard.edu/abs/1966mnras.131..399b blackwell , d. e. , dewhirst , d. w. , ingham , m. f. : 1967 , _ advances in astronomy and astrophysics _ , z. kopal ( ed . ) , new york , * 5 * 1 .butala , m. d. , frazin , r. a. , kamalabadi , f. : 2005 , _ j. geophys ._ * 110 * , a09s09 .doi : 10.1029/2004ja010938 butala , m. d. , hewett , r. j. , frazin , r. a. , kamalabadi , f. : 2010 , _ solar phys ._ * 262 * , 495 .doi : 10.1007/s11207 - 010 - 9536 - 1 caroubalos , c. , hillaris , a. , bouratzis , c. , alissandrakis , c. e. , preka - papadema , p. , polygiannakis , j. _ et al . 
_ : 2004 , _ astron .astrophys . _ * 413 * , 1125 .doi : 10.1051/0004 - 6361:20031497 cho , k .- s ., lee , j. , moon , y .-j . dryer , m. , bong , s .- c . , kim , y .- h . , park , y. d. : 2007 , _ astron .astrophys . _ * 461 * , 1121 .doi : 10.1051/0004 - 6361:20064920 cranmer , s. r. , kohl , j. l. , noci , g. , antonucci , e. , tondello , g. , huber , m. c. e. _ et al . _ : 1999 , _ astrophys .j. _ * 511 * , 481 .doi : 10.1086/306675 davila , j. m. : 1994 , _ astrophys .j. _ * 423 * , 871 .doi : 10.1086/173864 feng , x. s. , zhou , y. f. , wu , s. t. : 2007 , _ astrophys . j. _ * 655 * , 1110 .doi : 10.1086/510121 frazin , r. a. , janzen , p. : 2002, _ astrophys .j. _ * 570 * , 408 .doi : 10.1086/339572 frazin , r. a. , vsquez , a. m. , kamalabadi , f. , park , h. : 2007 , _ astrophys .j. _ * 671 * , l201 .doi : 10.1086/525017 frazin , r. a. , lamy , p. , llebaria , a. , vsquez , a. m. : 2010 , _ solar phys ._ * 265 * , 19 .doi:10.1007/s11207 - 010 - 9557 - 9 frazin , r. a. , vsquez , a. m. , thompson , w. t. , hewett , r. j. , lamy , p. , llebaria , a. , vourlidas , a. , burkepile , j. : 2012 , _ solar phys . _ * 280 * , 273 .doi : 10.1007/s11207 - 012 - 0028 - 3 gibson , s. e. , bagenal , f. : 1995 , _ j. geophys_ , * 100 * , 19865 .doi : 10.1029/95ja01905 gibson , s. e. , fludra , a. , bagenal , f. , biesecker , d. , del zanna , g. , bromage , b. : 1999 , _ j. geophys ._ * 104 * , 9691 .doi : 10.1029/98ja02681 gopalswamy , n. , lara , a. , yashiro , s. , nunes , s. howard , r. a. : 2003 , _ solar variability as an input to the earth s environment , international solar cycle studies ( iscs ) symposium _ , wilson , a. ( ed . ) , esa publ .division , noordwijk , * sp-535 * , 403 .groth , c. p. t. , de zeeuw , d. l. , gombosi , t. i. , powell , k. g. : 2000 , _ j. geophys .* 105 * , 25053 .doi : 10.1029/2000ja900093 guhathakurta , m. , fisher , r. r. : 1995 , _ geophys .* 22 * , 1841 .doi : 10.1029/95gl01603 guhathakurta , m. , holzer , t. e. , macqueen , r. m. : 1996 , _ astrophys .j. _ * 458 * , 817 .doi : 10.1086/176860 hayashi , k. : 2005 , _ astrophys ._ * 161 * , 480 .doi : 10.1086/491791 hayes , a. p. , vourlidas , a. , howard , r. a. : 2001 , _ astrophys .j. _ * 548 * , 1081 .doi : 10.1086/319029 hu , y. q. , feng , x. s. , wu , s. t. , song , w. b. : 2008 , _ j. geophys .res . _ * 113 * , 3106 .doi : kimura , h. , mann , i. : 1998 , _ earth planets space _ * 50 * , 493 .ads : http://adsabs.harvard.edu/abs/1998ep koutchmy , s. : 1994 , _ adv .space res . _* 14 * , 29 .doi : 10.1016/0273 - 1177(94)90156 - 2 koutchmy , s. , lamy , p. l. : 1985 , _ properties and interactions of interplanetary dust _, r. h. giese & p. l. lamy ( eds . ) , _ assl _ * 119 * , 63 .kramar , m. , jones , s. , davila , j. m. , inhester , b. , mierla , m. : 2009 , _ solar phys . _ * 259 * , 109 .doi : 10.1007/s11207 - 009 - 9401 - 2 kramar , m. , davila , j. , xie , h. , antiochos , s. : 2011 , _ ann .geophys . _* 29 * , 1019 .doi : 10.5194/angeo-29 - 1019 - 2011 kramar , m. , inhester , b. , lin , h. , davila , j. : 2013 , _ astrophys . j. _ * 775 * , 25 .doi : 10.1088/0004 - 637x/775/1/25 kramar , m. , airapetian , v. , miki , z. , davila , j. : 2014,_solar phys ._ , in press .doi : 10.1007/s11207 - 014 - 0525 - 7 kwon , r .- y ., kramar , m. , wang , t. j. , ofman , l. , davila , j. m. , chae j. : 2013 , _ astrophys . j. _ * 776 * , 55 . doi : 10.1088/0004 - 637x/776/1/55 lallement , r. , qumerais , e. , lamy , p. , bertaux , j .- l . ,ferron , s. , schmidt , w. 
: 2010 , _soho-23 : understanding a peculiar solar minimum _ , s. r. cranmer , j. t. hoeksema , and j. l. kohl .( eds . ) , _ asp conf . _* 428 * , 253 .lee , k .- s . , moon , y .-j . , kim , k .- s . ,lee , j .- y . , cho , k .- s . , choe , g. s. : 2008 , _ astron .astrophys . _ * 486 * , 1009 .doi : 10.1051/0004 - 6361:20078976 linker , j. a. , miki , z. , biesecker , d. a. , forsyth , r. j. , gibson , s. e. , lazarus , a. j. _ et al . _ : 1999 , _j. geophys .* 104 * , 9809 .doi : 10.1029/1998ja900159 lionello , r. , linker , j. a. , miki , z. : 2009 , _ astrophys .j. _ * 690 * , 902 .doi : 10.1088/0004 - 637x/690/1/902 manchester , w. b. , gombosi , t. i. , roussev , i. , de zeeuw , d. l. , sokolov , i. v. , powell , k. g. , tth , gbor : 2004 , _j. geophys .res . _ * 109 * , a02107 .doi : 10.1029/2003ja010150 manchester , w. b. , gombosi , t. i. , de zeeuw , d. l. , sokolov , i. v. , roussev , i. i. , powell , k. g. _ et al . _ : 2005 , _ astrophys .j. _ * 622 * , 1225 .doi : 10.1086/427768 mann , i. : 1992 , _ astron .astrophys . _ * 261 * , 329 .ads : http://adsabs.harvard.edu/abs/1992a miki , z. , linker , j. a. , schnack , d. d. , lionello , r. , tarditi , a. : 1999 , _ phys . plasma _* 6 * , 2217 .doi : 10.1063/1.873474 minnaert , m. : 1930 , _ zs .* 1 * , 209 .ads : http://adsabs.harvard.edu/abs/1930za......1..209m munro , r. h. jackson , b. v. : 1977 , _ astrophys .j. _ * 213 * , 874 .doi : 10.1086/155220 november , l. j. , koutchmy , s. : 1996 , _ astrophys .j. _ * 466 * , 512 .doi : 10.1086/177528 odstril , d. , pizzo , v. j. : 1999 , _ j. geophys .* 104 * , 493 .doi : 10.1029/1998ja900038 odstril , d. , linker , j. , lionello , r. , miki , z. , riley , p. , pizzo , v. j. , luhmann , j. g. : 2002 , _j. geophys .res . _ * 107 * , 1493 .doi : 10.1029/2002ja009334 qumerais , e. lamy , p. : 2002 , _ astron .astrophys . _ * 393 * , 295 .doi : 10.1051/0004 - 6361:20021019 qumerais , e. , lallement , r. , koutroumpa , d. , lamy , p. : 2007, _ astrophys .j. _ * 667 * , 1229 .doi : 10.1086/520918 ramesh , r. , kishore , p. , mulay , s. m. , barve , i. v. , kathiravan , c. , wang , t. j. : 2013 , _ astrophys .j. _ * 778 * , 30 .doi : 10.1088/0004 - 637x/778/1/30 reames , d. v. : 1999 , _ space sci .rev . _ * 90 * , 41 .doi : 10.1023/a:1005105831781 riley , p. , linker , j. a. , miki , z. : 2001 , _j. geophys .res . _ * 106 * , 15889 .doi : 10.1029/2000ja000121 riley , p. , linker , j. a. , miki , z. , lionello , r. , ledvina , s. a. , luhmann , j. g. : 2006 , _ astrophys. j. _ * 653 * , 1510 .doi : 10.1086/508565 saez , f. , llebaria , a. , lamy , p. , vibert , d. : 2007 , _ astron .astrophys . _ * 473 * , 265 .doi : 10.1051/0004 - 6361:20066777 saito , k. : 1970 , _ ann .tokyo astron ., ser . 2 _ , * 12 * , 53 .saito , k. , poland , a. i. , munro , r. h. : 1977 , _ solar phys ._ * 55 * , 121 .doi : 10.1007/bf00150879 schulz , m. 1973 , _ astrophys .space sci . _* 24 * , 371 .doi : 10.1007/bf02637162 shen , c. , liao , c. , wang , y. , ye , p. , wang , s. : 2013 , _ solar phys ._ * 282 * , 543 .doi : 10.1007/s11207 - 012 - 0161-z sokolov , i. v. , roussev , i. i. , gombosi , t. i. , lee , m. a. , kta , j. , forbes , t. g. , manchester , w. b. , sakai , j. i. : 2004 , _ astrophys . j. lett ._ * 616 * , l171 .doi : 10.1086/426812 thompson , w. t. , wei , k. , burkepile , j. t. , davila , j. m. , st .cyr , o. c. : 2010 , _ solar phys ._ * 262 * , 213 .doi:10.1007/s11207 - 010 - 9513 - 8 thompson , w. t. : 2006 , _ astron .astrophys . 
_ * 449 * , 791 .doi : 10.1051/0004 - 6361:20054262 usmanov , a. v. , goldstein , m. l. , besser , b. p. , fritzer , j. m. : 2000 , _j. geophys .res . _ * 105 * , 12675 .doi : 10.1029/1999ja000233 van de hulst , h. c. : 1950 , _ bull .* 11 * , 135 .ads : http://adsabs.harvard.edu/abs/1950ban....11..135v van der holst , b. , sokolov , i. v. , meng , x. , jin , m. , manchester , w. b. , iv , tth , g. , gombosi , t. i. : 2014 , _ astrophys . j. _ * 782 * , 81 .doi : 10.1088/0004 - 637x/782/2/81 vsquez , a. m. , frazin , r. a. , hayashi , k. , sokolov , i. v. , cohen , o. , manchester , w. b. , iv , kamalabadi , f. : 2008 , _ astrophys .j. _ * 682 * , 1328 .doi : 10.1086/589682 yashiro , s. , gopalswamy , n. , michalek , g. , st .cyr , o. c. , plunkett , s. p. , rich , n. b. , howard , r. a. : 2004 , _ j. geophys .res . _ * 109 * , 7105 .doi : 10.1029/2003ja010282 zucca , p. , carley , e. p. , bloomfield , d. s. , gallagher , p. t. : 2014 , _ astron .astrophys . _ * 564 * , 47 .doi : 10.1051/0004 - 6361/201322650
determination of the coronal electron density by the inversion of white - light polarized brightness ( ) measurements by coronagraphs is a classic problem in solar physics . an inversion technique based on the spherically symmetric geometry ( spherically symmetric inversion , ssi ) was developed in the 1950s , and has been widely applied to interpret various observations . however , to date there is no study about uncertainty estimation of this method . in this study we present the detailed assessment of this method using a three - dimensional ( 3d ) electron density in the corona from 1.5 to 4 as a model , which is reconstructed by tomography method from stereo / cor1 observations during solar minimum in february 2008 ( carrington rotation , cr 2066 ) . we first show in theory and observation that the spherically symmetric polynomial approximation ( sspa ) method and the van de hulst inversion technique are equivalent . then we assess the sspa method using synthesized images from the 3d density model , and find that the sspa density values are close to the model inputs for the streamer core near the plane of the sky ( pos ) with differences generally less than a factor of two or so ; the former has the lower peak but more spread in both longitudinal and latitudinal directions than the latter . we estimate that the sspa method may resolve the coronal density structure near the pos with angular resolution in longitude of about 50 . our results confirm the suggestion that the ssi method is applicable to the solar minimum streamer ( belt ) as stated in some previous studies . in addition , we demonstrate that the sspa method can be used to reconstruct the 3d coronal density , roughly in agreement with that by tomography for a period of low solar activity ( cr 2066 ) . we suggest that the ssi method is complementary to the 3d tomographic technique in some cases , given that the development of the latter is still an ongoing research effort .
data from national sex surveys provide quantitative information on the number of sexual partners , the degree , of an individual . usually , surveys involve a random sample of the population stratified by age , economical and cultural level , occupation , marital status , etc .the respondents are asked to provide information on sexual attitudes such as the number of sex partners they have had in the last 12 months or in their entire life .although in most cases the response rate is relatively small , the information gathered is statistically significant and global features of sexual contact patterns can be extracted . in particular, it turns out that the number of heterosexual partners reported from different populations is well described by power - law sf distributions .table [ tab : scn ] summarizes the main results of surveys conducted in sweden , united kingdom , zimbabwe and burkina faso . the first thing to notice is the gender - specific difference in the number of sexual acquaintances .this is manifested by the existence of two different exponents in the sf degree distributions , one for males ( ) and one for females ( ) .interestingly enough , the predominant case in table [ tab : scn ] , no matter whether data refers to time frames of 12 months or to entire life , consists of one exponent being smaller and the other larger than 3 .this is certainly a borderline case that requires further investigation on the value of the epidemic threshold .the differences found in the two exponents and have a further implication for real data and mathematical modeling . in an exhaustive survey , able to reproduce the whole network of sexual contacts , the total number of female partners reported by men should equal the total number of male sexual partners reported by women .mathematically , this means that the number of links ending at population ( of size ) equals the number of links ending at population ( of size ) , which translates into the following closure relation : assuming that the degree distributions for the two sets are truly scale - free , then , with the symbol standing for the gender , and being the minimum degree .moreover , if and for any , eq .( [ closure ] ) gives the relation between the two population sizes as which implies that the less heterogeneous ( in degree ) population must be larger than the other one .[ cols="^,^,^,^,^,^,^,^,^",options="header " , ] although both epidemic thresholds , and , tend to zero as the population goes to infinity , the scaling relations , and , are characterized by two different exponents , and .table [ tab : exp ] reports the expression of these two exponents as a function of and , showing that is always smaller than .in particular , for the most common case ( see table [ tab : scn ] ) , i.e. when one degree distribution exponent is in the range ,3]$ ] , and the other one is larger than , the value of found for bipartite networks is two times smaller than . 
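The closure relation above fixes the ratio of the two population sizes once the two degree distributions are specified. The short sketch below evaluates it for pure power laws with a common minimum degree; the exponents are illustrative survey-like values, not the tabulated ones.

```python
import numpy as np

def mean_degree_power_law(gamma, kmin=1, kmax=10**6):
    """<k> for a discrete power law P(k) ~ k**-gamma on kmin..kmax."""
    k = np.arange(kmin, kmax + 1, dtype=float)
    p = k**-gamma
    p /= p.sum()
    return (k * p).sum()

gamma_m, gamma_f = 2.6, 3.1      # illustrative male/female exponents
km = mean_degree_power_law(gamma_m)
kf = mean_degree_power_law(gamma_f)

# closure relation N_F <k>_F = N_M <k>_M  ->  population-size ratio:
print("<k>_M =", km, "  <k>_F =", kf)
print("N_F / N_M =", km / kf)    # > 1: the less heterogeneous set must be larger
```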
as a consequence , the results show that in finite bipartite populations the onset of the epidemic takes place at larger values of the spreading rate .in other words , it could be the case that for a given transmission probability , in the unipartite representation shown in fig .[ fig1 ] ( b ) the epidemic would have survived infecting a fraction of the population , while when only crossed infections are allowed , as in fig .[ fig1 ] ( a ) , the same disease would not have produced an endemic state .moreover , the difference between the epidemic thresholds predicted by the two approaches increases with the system size .this dependency is shown in fig .[ fig2 ] , where we have reported , as a function of the system size , the critical thresholds obtained by numerically solving eqs .( [ threshold ] ) and ( [ threshold2 ] ) with the values of and found for the lifetime distribution of sexual partners in sweden and u.k .to check the validity of the analytical arguments and also to explore the dynamics of the disease above the epidemic threshold , we have conducted extensive numerical simulations of the sis model in bipartite and unipartite computer - generated networks .bipartite and unipartite graphs of a given size are built up ( see methods section ) having the same degree distributions , and , and thus they only differ in the way the nodes are linked .a fraction of infected individuals is initially randomly placed on the network and the sis dynamics is evolved : at each time step susceptible individuals get infected with probability if they are connected to an infectious one , and get recovered with probability ( hence , the effective transmission probability is ) .after a transient time , the system reaches a stationary state where the total prevalence of the disease , , is measured ( see methods ) .the results are finally averaged over different initial conditions and network realizations .[ fig3 ] shows the fraction of infected individuals as a function of for several system sizes and for the bipartite ( ( a ) and ( b ) ) and unipartite ( ( c ) and ( d ) ) graphs . in this figure, the infection probability has been rescaled by the theoretical value given by eq .( [ threshold ] ) .the purpose of the rescaling is twofold .first it allows to check the validity of the theoretical predictions and , at the same time , it provides a clear comparison of the results obtained for bipartite networks with those obtained for the unipartite case .again we have used the values of and extracted from the lifetime number of sexual partners reported for sweden and u.k . . 
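The finite-size comparison of the two thresholds can be sketched as follows, assuming the standard heterogeneous mean-field expressions, i.e. the unipartite threshold <k>/<k^2> and the bipartite threshold sqrt(<k>_M <k>_F / (<k^2>_M <k^2>_F)); these are the usual forms such threshold equations reduce to, but the paper's exact equations are not reproduced in this excerpt, so the sketch is indicative only. The degree cutoff is taken to grow with the population size as the natural cutoff k_max ~ k_min N^(1/(gamma-1)), and the unipartite moments are computed for an equal-size mixture of the two degree sequences for simplicity.

```python
import numpy as np

def moments(gamma, kmin, kmax):
    """<k> and <k^2> for a discrete power law on kmin..kmax."""
    k = np.arange(kmin, int(kmax) + 1, dtype=float)
    p = k**-gamma
    p /= p.sum()
    return (k * p).sum(), (k**2 * p).sum()

def thresholds(N, gamma_m=2.6, gamma_f=3.3, kmin=1):
    """Unipartite vs bipartite SIS thresholds for population size N (assumed
    heterogeneous mean-field forms; natural cutoff kmin*N**(1/(gamma-1)))."""
    km1, km2 = moments(gamma_m, kmin, kmin * N**(1.0 / (gamma_m - 1)))
    kf1, kf2 = moments(gamma_f, kmin, kmin * N**(1.0 / (gamma_f - 1)))
    lam_bip = np.sqrt(km1 * kf1 / (km2 * kf2))
    k1, k2 = 0.5 * (km1 + kf1), 0.5 * (km2 + kf2)   # equal-size mixture
    lam_uni = k1 / k2
    return lam_uni, lam_bip

for N in [10**3, 10**4, 10**5, 10**6]:
    lu, lb = thresholds(N)
    print(f"N={N:>8d}  uni={lu:.4f}  bip={lb:.4f}  ratio={lb/lu:.2f}")
```

Both thresholds decay with N, but the bipartite one decays more slowly, so their ratio grows with the system size, consistent with the behaviour discussed above.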
fig .[ fig3 ] indicates that the analytical solution , eq .( [ threshold ] ) , is in good agreement with the simulation results for the two - gender model formulation .conversely when the bipartite nature of the underlying graph is not taken into account , the epidemic threshold is underestimated , being smaller than .in addition to this , the error in the estimation grows as the population size increases , in agreement with our theoretical predictions .the inclusion of the bipartite nature of contact networks to describe crossed infections in the spread of stds in heterosexual populations is seen to affect strongly the epidemic outbreak and leads to an increase of the epidemic threshold .our results show that , even in the cases when the epidemic threshold vanishes in the infinite network size limit , the epidemic incidence in finite populations is less dramatic than actually expected for unipartite scale - free networks .the results also point out that the larger the population , the greater the gap between the epidemic thresholds predicted by the two models , therefore highlighting the need to accurately take into account all the available information on how heterosexual contact networks look like .our results also have important consequences for the design and refinement of efficient degree - based immunization strategies aimed at reducing the spread of stds .in particular , they pose new questions on how such strategies have to be modified when the interactions are further compartmentalized by gender and only crossed infections are allowed .we finally stress that the present approach is generalizable to other models for disease spreading ( _ e.g. _ the `` _ susceptible - infected - removed _ '' model ) and other processes where crossed infection in bipartite networks is the mechanism at work .synthetic bipartite networks construction starts by fixing the number of males , and the two exponents and of the power - law degree distributions corresponding to males and females respectively .the first stage consists of assigning the connectivity ( , ... , ) to each member of the male population by generating random numbers with probability distribution ( , with ) .the sum of these random numbers fixes the number of links of the network .the next step is to construct the female population by means of an iterative process . for this purposewe progressively add female individuals with a randomly assigned degree following the distribution ( , with ) .female nodes are incorporated until the total female connectivity reaches the number of male edges , . in this way onesets the total number of females .once the two sets of males and females with their corresponding connectivities are constructed each one of the male edges is randomly linked to one of the available female edges avoiding multiple connections .finally those few female edges that did not receive a male link in the last stage are removed and the connectedness of the resulting network is checked .synthetic unipartite networks has been constructed in two ways .the simplest one consists of taking the two sets of males and females constructed for the bipartite network and apply a rewiring process to the entire population , _ i.e. _ allowing links between individuals of the same sex . in the second method , a set of individuals whose connectivitiesare randomly assigned following the degree distribution is generated before applying a wiring process between all pairs of edges . 
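The bipartite construction just described can be sketched as follows: male degrees are drawn from P(k) ~ k^(-gamma_M), females are added one at a time with degrees drawn from k^(-gamma_F) until the female stub count reaches the number of male edges, and the stubs are then randomly paired. The minimum and maximum degrees are assumptions, and duplicate male-female pairings are simply collapsed rather than rewired (a simplification of the "avoid multiple connections" step), so the final connectedness check mentioned in the text would still be needed.

```python
import numpy as np
rng = np.random.default_rng(0)

def sample_degrees(n, gamma, kmin=1, kmax=1000):
    k = np.arange(kmin, kmax + 1)
    p = k.astype(float)**-gamma
    p /= p.sum()
    return rng.choice(k, size=n, p=p)

def bipartite_graph(n_males, gamma_m=2.6, gamma_f=3.3):
    deg_m = sample_degrees(n_males, gamma_m)
    target = int(deg_m.sum())                  # male stubs to be absorbed
    deg_f, total = [], 0
    while total < target:                      # grow female set until stubs match
        d = int(sample_degrees(1, gamma_f)[0])
        deg_f.append(d)
        total += d
    deg_f = np.array(deg_f)

    male_stubs = np.repeat(np.arange(n_males), deg_m)
    female_stubs = np.repeat(np.arange(len(deg_f)), deg_f)
    rng.shuffle(female_stubs)
    edges = set()
    for m, f in zip(male_stubs, female_stubs[:len(male_stubs)]):
        edges.add((int(m), int(f)))            # duplicate pairings collapse (rare in sparse graphs)
    # leftover female stubs are dropped, as in the construction described above
    return edges, len(deg_f)

edges, n_females = bipartite_graph(10000)
print(len(edges), "edges,", n_females, "females")
```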
in both methods the wiring process avoids multiple and self connections and those isolated edges that remain at the end of the network construction are removed .the connectedness of the networks is also checked .montecarlo simulations of sis dynamics are performed using networks of sizes ranging from to .the initial fraction of infected nodes is set to of the network size .the sis dynamics is initially evolved for a time typically of time - steps and after this transient the system is further evolved over consecutive time windows of steps . in these time windowswe monitor the mean value of the number of infected individuals , .the steady state is reached if the absolute difference between the average number of infected individuals of two consecutive time windows is less than .we thank k.t.d .eames and j.m .read for their useful suggestions .is supported by mec through the ramn y cajal program .this work has been partially supported by the spanish dgicyt projects fis2006 - 12781-c02 - 01 and fis2005 - 00337 , and by the italian to61 infn project .fenton , k. a. , korovessis , c. , johnson , a. m. , mccadden , a. , mcmanus , s. , wellings , k. , mercer , c. h. , carder , c. , copas , a. j. , nanchahal , k. , macdowall , w. , ridgway , g. , field , j. & erens , b. ( 2001 ) _ the lancet _ * 358 * , 1851 - 1854 .
the spread of sexually transmitted diseases ( _ e.g . chlamydia , syphilis , gonorrhea , hiv _ ) across populations is a major concern for scientists and health agencies . in this context , both data collection on sexual contact networks and the modeling of disease spreading , are intensively contributing to the search for effective immunization policies . here , the spreading of sexually transmitted diseases on bipartite scale - free graphs , representing heterosexual contact networks , is considered . we analytically derive the expression for the epidemic threshold and its dependence with the system size in finite populations . we show that the epidemic outbreak in bipartite populations , with number of sexual partners distributed as in empirical observations from national sex surveys , takes place for larger spreading rates than for the case in which the bipartite nature of the network is not taken into account . numerical simulations confirm the validity of the theoretical results . our findings indicate that the restriction to crossed infections between the two classes of individuals ( males and females ) has to be taken into account in the design of efficient immunization strategies for sexually transmitted diseases . disease spreading has been the subject of intense research since long time ago . on the one hand , epidemiologists have developed mathematical models that can be used as a guide to understanding how an epidemic spreads and to design immunization and vaccination policies . on the other hand , data collections have provided information on the local patterns of relationships in a population . in particular , persons who may have come into contact with an infectious individual are identified and diagnosed , making it possible to contact - trace the way the epidemic spreads , and to validate the mathematical models . however , up to a few years ago , some of the assumptions at the basis of the theoretical models were difficult to test . this is the case , for instance , of the complete network of contacts -the backbone through which the diseases are transmitted . with the advent of modern society , fast transportation systems have changed human habits , and some diseases that just a few years ago would have produced local outbreaks , are nowadays a global threat for public health systems . a recent example is given by the severe acute respiratory syndrome ( sars ) , that spread very fast from asia to north america a few years ago . therefore , it is of utmost importance to carefully take into account as much details as possible of the structural properties of the network on which the infection dynamics occurs . strikingly , a large number of statistical properties have been found to be common in the topology of real - world social , biological and technological networks . of particular relevance because of its ubiquity in nature , is the class of complex networks referred to as scale - free ( sf ) networks . in sf networks , the number of contacts or connections of a node with other nodes in the system , the degree ( or connectivity ) , follows a power law distribution , . recent studies have shown the importance of the sf topology on the dynamics and function of the system under study . for instance , sf networks are very robust to random failures , but at the same time extremely fragile to targeted attacks of the highly connected nodes . 
in the context of disease spreading , sf contact networks lead to a vanishing epidemic threshold in the limit of infinite population when . this is because the exponent is directly related to the first and second moment of the degree distribution , and , and the ratio determines the epidemic threshold above which the outbreak occurs . when , is finite while goes to infinity , that is , the transmission probability required for the infection to spread goes to zero . conversely , when , there is a finite threshold and the epidemic survives only when the spreading rate is above a certain critical value . the concept of a critical epidemic threshold is central in epidemiology . its absence in sf networks with has a number of important implications in terms of prevention policies : if diseases can spread and persist even in the case of vanishingly small transmission probabilities , then prevention campaigns where individuals are randomly chosen for vaccination are not much effective . our knowledge of the mechanisms involved in disease spreading as well as on the relation between the network structure and the dynamical patterns of the spreading process has improved in the last several years . current approaches are either individual - based simulations or metapopulation models where network simulations are carried out through a detailed stratification of the population and infection dynamics . in the particular case of sexually transmitted diseases ( stds ) , infections occur within the unique context of sexual encounters , and the network of contacts is a critical ingredient of any theoretical framework . unfortunately , ascertaining complete sexual contact networks in significatively large populations is extremely difficult . however , here we show that it is indeed possible to make use of known global statistical features to generate more accurate predictions of the critical epidemic threshold for stds .
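a small numerical illustration of the threshold argument above: for a power-law degree distribution with a finite cutoff, the mean-field epidemic threshold is the ratio of the first to the second moment of the degree distribution, and for exponents between 2 and 3 it keeps shrinking as the cutoff (and hence the accessible population) grows. the cutoff values below are arbitrary illustrations.

```python
def epidemic_threshold(gamma, k_min=1, k_max=10**4):
    """Mean-field threshold <k>/<k^2> for a power-law degree distribution
    P(k) ~ k**(-gamma) truncated at k_max."""
    ks = range(k_min, k_max + 1)
    w = [k ** (-gamma) for k in ks]
    norm = sum(w)
    k1 = sum(k * wk for k, wk in zip(ks, w)) / norm
    k2 = sum(k * k * wk for k, wk in zip(ks, w)) / norm
    return k1 / k2

# with gamma = 2.5 the threshold falls steadily as the degree cutoff grows
for k_max in (10**2, 10**3, 10**4):
    print(k_max, round(epidemic_threshold(2.5, k_max=k_max), 4))
```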
the feasibility of processing graphs in the data stream model was one of the early questions investigated in the streaming model .however the results were not encouraging , even to decide simple properties such as the connectivity of a graph , when the edges are streaming in an arbitrary order required space . in comparison to the other results in the streaming model , which required polylogarithmic space , graph alogithms appeared to difficult in the streaming context and did not receive much attention subsequently .however in recent years , with the remergence of social and other interaction networks , questions of processing massive graphs have once again become prominent .technologically , since the publication of , it had become feasible to store larger quantities of data in memory and the semi - streaming model was proposed in . in this modelwe assume that the space is ( near ) linear in the number of vertices ( but not necessarily the edges ) .since its formulation , the model has become more appealing from the contexts of theory as well as practice . from a theoretical viewpoint, the model still offers a rich potential trade - off between space and accuracy of algorithm , albeit at a different threshold than polylogarithmic space . from a practical standpoint , in a variety of contexts involving large graphs , such as image segmentation using graph cuts ,the ability of the algorithm to retain the most relevant information in main memory has been deemed critical .in essence , an algorithm that runs out of main memory space would become unattractive and infeasible .in such a setting , it may be feasible to represent the vertex set in the memory whereas the edge set may be significantly larger . in the semi - streaming model ,the first results were provided by on the construction of graph spanners .subsequently , beyond explorations of connectivity , and ( multipass ) matching , there has been little development of algorithms in this model . in this paperwe focus on the problem of graph sparsification in a single pass , that is , constructing a small space representation of the graph such that we can estimate the size of any cut .graph sparsification remains one of the major building blocks for a variety of graph algorithms , such as flows and disjoint paths , etc . at the same time, sparsification immediately provides a way of finding an approximate min - cut in a graph .the problem of finding a min - cut in a graph has been one of the more celebrated problems and there is a vast literature on this problem , including both deterministic as well as randomized algorithms see for a comprehensive discussion of various algorithms .we believe that a result on sparsification will enable the investigation of a richer class of problems in graphs in the semi - streaming model .in this paper we will focus exclusively on the model that the stream is adversarially ordered and a single pass is allowed . 
from the standpoint of techniques , our algorithm is similar in spirit to the algorithm of alon - matias - szegedy , where we simultaneously sample and estimate from the stream .in fact we show that in the semi - streaming model we can perform a similar , but non - trivial , simultaneous sampling and estimation .this is pertinent because sampling algorithms for sparsification exist , which use edges .however these algorithms sample edges in an iterative fashion that requires the edges to be present in memory and random access to them .[ [ our - results ] ] our results : + + + + + + + + + + + + our approach is to recursively maintain a summary of the graph seen so far and use that summary itself to decide on the action to be taken on seeing a new edge . to this end , we modify the sparsification algorithm of benczur and karger for the semi streaming model .the final algorithm uses a single pass over the edges and provides approximation for cut values with high probability and uses edges for node and edge graph .let denote the input graph and and respectively denote the number of nodes and edges . denotes the value of cut in . indicates the weight of in graph . a graph is * k - strong connected * if and only if every cut in the graph has value at least .* k - strong connected component * is a maximal node - induced subgraph which is k - strong connected .the * strong connectivity * of an edge is the maximum such that there exists a -strong connected component that contains . in ,they compute the strong connectivity of each edge and use it to decide the sampling probability .algorithm [ alg : sparsify ] is their algorithm .we will modify this in section [ sec : algdesc ] .[ alg : sparsify ] * benczur - karger*( ) + compute the strong connectivity of edge for all here is a parameter that depends on the size of and the error bound .they proved the following two theorems in their paper .[ thm : benczur_errorbound ] given and a corresponding , every cut in has value between and times its value in with probability .[ thm : benczur_spacebound ] with high probability has edges . throughout this paper , denotes the input sequence . is a graph that consists of ,,, . is the strong connectivity of in and is weight of an edge in .each edge has weight 1 in . where scalar multiplication of a graph and addition of a graph is defined as scalar multiplication and addition of edge weights .in addition , if and only if . is a sparsification of a graph , i.e. , a sparsified graph after considering in the streaming model .we can not use algorithm [ alg : sparsify ] in the streaming model since it is not possible to compute the strong connectivity of an edge in without storing all the data .the overall idea would be to use a strongly recursive process , where we use an estimation of the connectivity based on the current sparsification and show that subsequent addition of edges does not impact the process .the modification is not difficult to state , which makes us believe that such a modification is likely to find use in practice .the nontrivial part of the algorithm is in the analysis , ensuring that the various dependencies being built into the process does not create a problem . 
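the sampling rule of the offline algorithm quoted above reduces to the following sketch; it presumes the strong connectivity of every edge has already been computed (the step that requires random access to all edges and is therefore unavailable in a single pass), and `rho` stands in for the compression parameter whose exact dependence on the graph size and the error bound is not reproduced here.

```python
import random

def benczur_karger_sample(edges, strong_connectivity, rho):
    """Keep edge e with probability p_e = min(1, rho / k_e), where k_e is
    its strong connectivity, and reweight a kept edge by 1/p_e so that
    every cut keeps its value in expectation."""
    sparsifier = {}
    for e in edges:
        p_e = min(1.0, rho / strong_connectivity[e])
        if random.random() < p_e:
            sparsifier[e] = 1.0 / p_e
    return sparsifier
```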
for completenessthe modifications are presented in algorithm [ alg : streamsparsify ] .[ alg : streamsparsify ] * stream - sparsification * + we use given ; once again is a constant which determines the probability of success .we prove two theorems for algorithm [ alg : streamsparsify ] .the first theorem is about the approximation ratio and the second theorem is about its space requirement . for the simplicity of proof, we only consider sufficiently small .[ thm : correctness ] given , is a sparsification , that is , with probability .[ thm : space ] if , has edges .we use a sequence of ideas similar to that in benczur and karger .let us first discuss the proof in .in that paper , theorem [ thm : benczur_errorbound ] is proved on three steps .first , the result of karger , on uniform sampling is used .this presents two problems .the first is that they need to know the value of minimum cut to get a constant error bound .the other is that the number of edges sampled is too large .in worst case , uniform sampling gains only constant factor reduction in number of edges . to solve this problem , benczur and karger decompose a graph into -strong connected components . in a -strong connected component ,minimum - cut is at least while the maximum number of edges in -strong connected component(without -strong connected component as its subgraph ) is at most .they used the uniform sampling for each component and different sampling rate for different components . in this way, they guarantee the error bound for every cut .we can not use karger s result directly to prove our sparsification algorithm because the probability of sampling an edge depends on the sampling results of previous edges .we show that the error bound of a single cut by a suitable bound on the martingale process . using that we prove that if we do not make an error until edge , we guarantee the same error bound for every cut after sampling edge with high probability . using union bound , we prove that our sparsification is good with high probability .we prove theorem [ thm : correctness ] first .first , we prove the error bound of a single cut in lemma [ thm : errorbound_singlecut ] . the proof will be similar to that of chernoff bound . in lemma [ thm : errorbound_singlecomponent ] is a parameter and we use different for different strong connected components in the later proof .[ thm : errorbound_singlecut ] let with be a cut in a graph such that and .the index of the edges corresponds to the arrival order of the edges in the data stream .let be an event such that for all .let be a sparsification of .then , < 2\exp(-\beta^2pc/4) ] .then , if and only if .as already mentioned , we can not apply chernoff bound because there are two problems : 1 . are not independent from each other and 2 .values of are not bounded .the second problem is easy to solve because we have .let be random variables defined as follows : if happens , .thus , & = & { \mathbb{p}}[a_{c}\wedge(|\sum_j x_j-\sum_j \mu_j|>\beta pc ) ] \nonumber \\ & = & { \mathbb{p}}[a_{c}\wedge(|\sum_j y_j-\sum_j \mu_j|>\beta pc ) ] \nonumber \\ & \leq & { \mathbb{p}}[|\sum_j y_j-\sum_j \mu_j|>\beta pc ] \label{eqn : conclusion}\end{aligned}\ ] ] the proof of ( [ eqn : conclusion ] ) is similar to chernoff bound . 
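schematically, the single-pass modification presented above replaces the exact strong connectivities by estimates computed in the sparsifier maintained so far; the sketch below makes that recursion explicit, with `estimate_strong_connectivity` left as a placeholder name for the estimation routine applied to the current summary (the placeholder and its signature are assumptions, not the paper's notation).

```python
import random

def stream_sparsify(edge_stream, rho, estimate_strong_connectivity):
    """Single pass over the edges: each arriving edge is sampled with a
    probability based on its strong connectivity as estimated in the
    running sparsifier H, and a kept edge is stored with weight 1/p."""
    H = {}                                    # edge -> weight
    for e in edge_stream:
        k_e = max(1.0, estimate_strong_connectivity(H, e))
        p_e = min(1.0, rho / k_e)
        if random.random() < p_e:
            H[e] = 1.0 / p_e
    return H
```

the analysis sketched in the surrounding proofs has to cope with the fact that the sampling probability of each edge now depends on the outcomes for earlier edges, which is why a martingale-style bound is used in place of a plain chernoff bound.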
however , since we do not have independent bernoulli random variables , we need to prove the upperbound of ] .[ thm : yibound ] \leq \exp(\mu_j(e^t-1)) ] .let .observe that for .so is decreasing function .also we have since .hence , therefore , & \leq & \mu_j(\exp(t)-1)+1 \nonumber \\ & \leq & \exp(\mu_j(e^t-1 ) ) .\nonumber\end{aligned}\ ] ] from case 1 and 2 , \leq\exp(\mu_j(e^t-1)) ] .[ thm : sibound ] let . for any and , \leq\exp(\sum_{k = j}^{l}\mu_j(e^t-1)) ] by lemma [ thm : yibound ] .assume that \leq\exp(\sum_{k = j+1}^{l}\mu_k(e^t-1)) ] for any and .now we prove lemma [ thm : errorbound_singlecut ] . remember that we only need to prove < 2\exp(-\beta^2pc/4) ] and \leq \exp(-\beta^2\mu/4) ] first . by applying markov s inequality to for any , we obtain }{\exp(t(1+\beta)\mu ) } \nonumber \\ & \leq & \frac{\exp(\mu(e^t-1))}{\exp(t(1+\beta)\mu)}. \nonumber\end{aligned}\ ] ] the second line is from lemma [ thm : sibound ] . from this point , we have identical proof as chernoff bound that gives us bound for . to prove that <\exp(-\beta^2pc/4) ] .consider a cut whose value is in .if holds , every edge in is also sampled with probability at least . by lemma [ thm : errorbound_singlecut ] , \leq 2\exp(-\beta^2p\alpha k/4)=2(n^{4+d}m)^{-\alpha} ] . from lemma [ thm : errorbound_singlestep ] , theorem [ thm : correctness ] is obvious .j < i.h_j \in ( 1\pm\epsilon)g_j)\wedge(h_i\notin ( 1\pm\epsilon)g_i)]={\mathcal{o}}(1/n^d)$ ] . for the proof of theorem [ thm : space ] , we use the following property of strong connectivity .[ thm : connectivity_bound ] if the total edge weight of graph is or higher , there exists a -strong connected components . , total edge weight of is at most .let be a cut .since , .total edge weight of is since each edge is counted for two such cuts .similarly , has edges . therefore , if , total edge weight of is at most .let . is a set of edges that sampled with .we want to bound the total weight of edges in . .let be a subgraph of that consists of edges in . does not have -strong connected component .suppose that it has .then there exists the first edge that creates a -strong connected component in . in that case, must be in the -strong connected component .however , since weight is at most , that component is at least -strong connected without .this contradicts that .therefore , does not have any -strong connected component . by lemma [ thm : connectivity_bound ] , .now we prove theorem [ thm : space ] . if the total edge weight is the same , the number of edges is maximized when we sample edges with smallest strong connectivity .so in the worst case , in that case , is at most .let this value be . then, total number of edges in is , we prove a simple space lowerbound for weighted graphs , where the lowerbound depends on .[ thm : lowerbound ] for , bits are required in order to sparsify every cut of a weighted graph within factor where is maximum edge weight and is minimum edge weight .let be a set of graphs such that there is a center node and other nodes are connected to by an edge whose weight is one of .then , .for , they must have different sparsifications .so we need bits for sparsfication .it is easy to show that .now we use the same proof idea for unweighted simple graphs .since we can not assign weight as we want , we use nodes as a center instead of having one center node . 
in this way, we can assign degree of a node from to .[ thm : unweighted_lowerbound ] for , bits are required in order to sparsify every cut of a graph within .consider bipartite graphs where each side has exactly nodes and each node in one side has a degree or .for each degree assignment , there exists a graph that satisfies it .let be a set of graphs that has different degree assignments .then , . can not have the same sparsification .so we need at least bits . another way of viewing the above claim is a direct sum construction , where we need to use bits to count upto a precision of .we presented a one pass semi - streaming algorithm for the adversarially ordered data stream model which uses edges to provide error bound for cut values with probability .if the graph does not have parallel edges , the space requirement reduces to . we can solve the minimum cut problem or other problems related to cuts with this sparsification . for the minimum cut problem , this provides one - pass -approximation algorithm .andrs a. benczr and david r. karger . approximating s - t minimum cuts in o(n2 )time . in _stoc 96 : proceedings of the twenty - eighth annual acm symposium on theory of computing _ , pages 4755 , new york , ny , usa , 1996 .chandra s. chekuri , andrew v. goldberg , david r. karger , matthew s. levine , and cliff stein .experimental study of minimum cut algorithms . in _soda 97 : proceedings of the eighth annual acm - siam symposium on discrete algorithms _ , pages 324333 , philadelphia , pa , usa , 1997 .society for industrial and applied mathematics .jianxiu hao and james b. orlin . a faster algorithm for finding the minimum cut in a graph . in _ soda 92 : proceedings of the third annual acm - siam symposium on discrete algorithms _ , pages 165174 , philadelphia , pa , usa , 1992 . society for industrial and applied mathematics .david r. karger .global min - cuts in rnc , and other ramifications of a simple min - out algorithm . in _ soda 93 : proceedings of the fourth annual acm - siam symposium on discrete algorithms _ , pages 2130 , philadelphia , pa , usa , 1993 .society for industrial and applied mathematics .david r. karger .random sampling in cut , flow , and network design problems . in _ stoc 94 : proceedings of the twenty - sixth annual acm symposium on theory of computing _ , pages 648657 , new york , ny , usa , 1994 .acm .daniel a. spielman and nikhil srivastava .graph sparsification by effective resistances . in _stoc 08 : proceedings of the 40th annual acm symposium on theory of computing _ , pages 563568 , new york , ny , usa , 2008 .
analyzing massive data sets has been one of the key motivations for studying streaming algorithms . in recent years , there has been significant progress in analysing distributions in a streaming setting , but the progress on graph problems has been limited . a main reason for this has been the existence of linear space lower bounds for even simple problems such as determining the connectedness of a graph . however , in many new scenarios that arise from social and other interaction networks , the number of vertices is significantly less than the number of edges . this has led to the formulation of the semi - streaming model where we assume that the space is ( near ) linear in the number of vertices ( but not necessarily the edges ) , and the edges appear in an arbitrary ( and possibly adversarial ) order . however there has been limited progress in analysing graph algorithms in this model . in this paper we focus on graph sparsification , which is one of the major building blocks in a variety of graph algorithms . further , there has been a long history of ( non - streaming ) sampling algorithms that provide sparse graph approximations and it a natural question to ask : since the end result of the sparse approximation is a small ( linear ) space structure , can we achieve that using a small space , and in addition using a single pass over the data ? the question is interesting from the standpoint of both theory and practice and we answer the question in the affirmative , by providing a one pass space algorithm that produces a sparsification that approximates each cut to a factor . we also show that space is necessary for a one pass streaming algorithm to approximate the min - cut , improving upon the lower bound that arises from lower bounds for testing connectivity .
let us consider a regression model in the continuous time with the levy noise where is an unknown function , is some unobserved noise and is the noise intensity .the problem is to estimate the function on the observations when . in this paperwe consider the estimation problem in the adaptive setting , i.e. when the regularity of is unknown .note that if is the brownian motion , then we obtain the well known `` signal+white noise '' model ( see , for example , , , and etc ) .it should be noted also that the model is very popular in the statistical radio - physics .this is the estimation problem of the signal , observed under the white noise , when the signal noise ratio goes to infinity . in this paperwe assume that the noise is the levy process with unknown distribution on the skorokhod space ] and by we denote all these distributions for which the parameters and satisfy the conditions where the bounds and are such that for any first of all , we need to eliminate the large jumps in the observations , i.e. we transform this model as the parameter will be chosen later .so , we obtain that where the functions and with the truncated threshold .let be an orthonormal basis in ] and \scriptstylej\scriptscriptstylej ] .moreover , note that for any \to\bbr ] , the integrals are well defined with , , where , and . in the sequelwe denote by . to estimate the function we use the following fourier serie where coefficients can be estimated by the following way .the first we estimate as and for taking into account here that for such the integral we obtain from that these fourier coefficients can be represented as setting we obtain that for any now , according to the selection model approach developed in - we need to define for any the following functions where and .[ pr.sec : mapr.0 ] the following upper bound holds .^{n}}{u\in[0,1]^{n}}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\left\vert b_{{\mathchoice{1,\varepsilon}{1,\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}(u ) \right\vert \le \varkappa_{{\mathchoice{q}{q}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{ } } } } \,.\ ] ] taking into account that and for we immediately the upper bound . before the formulation we recall the novikov inequalities , , also referred to as the bichteler jacod inequalities , see , providing bounds moments of supremum of purely discontinuous local martingales for where is some positive constant .now , for any we set [ pr.sec : mapr.1 ] for any fixed truncated model parameter and for any vector with where .we estimate the function for ] , the weights belong to some finite set from ^n\scriptstyle{\varepsilon}\to 0 \scriptscriptstyle{\varepsilon}\to 0\scriptstyle{\varepsilon}\scriptscriptstyle{\varepsilon}\scriptstyle{\varepsilon}\scriptscriptstyle{\varepsilon}\scriptstyle{\varepsilon}\to 0 \scriptscriptstyle{\varepsilon}\to 0\scriptstyle{\varepsilon}\scriptscriptstyle{\varepsilon} ] , now we define the set as note , that these weight coefficients are used in for continuous time regression models to show the asymptotic efficiency . in the sequelwe need to estimate the variance parameter from . to this end we set for any + 1}{j=[1/\varepsilon]+1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,{\widehat}{t}^2_{{\mathchoice{j,\varepsilon}{j,\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\ , , \quad n=1/\varepsilon^{2}\,,\ ] ] where are the estimators for the fourrier coefficients with respect to the trigonometric basis , i.e. 
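presumably the integrals of the basis functions against the observed process. as a hedged, discretised illustration of these estimators, the sketch below approximates them by riemann sums over the observed increments on a uniform grid of [0,1]; the indexing convention of the trigonometric basis is an assumption here, not taken from the text.

```python
import math

def trig_basis(j, t):
    """Trigonometric basis on [0, 1]: phi_1 = 1, then sqrt(2)*cos and
    sqrt(2)*sin pairs (indexing convention assumed)."""
    if j == 1:
        return 1.0
    freq = 2.0 * math.pi * (j // 2)
    if j % 2 == 0:
        return math.sqrt(2.0) * math.cos(freq * t)
    return math.sqrt(2.0) * math.sin(freq * t)

def fourier_estimates(increments, n_coeffs):
    """Approximate theta_j ~ integral of phi_j(t) dY_t by a Riemann sum
    over the observed increments dY on a uniform grid of [0, 1]."""
    n = len(increments)
    grid = [(i + 0.5) / n for i in range(n)]
    return [sum(trig_basis(j, t) * dy for t, dy in zip(grid, increments))
            for j in range(1, n_coeffs + 1)]
```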
[ re.sec:mo.1 ]note that the similar sharp oracle inequalities was obtained before in the papers and for the nonparametric regression models in the discrete and continuous times respectively . in this paperwe obtain these inequalities for the model selection procedures based on any arbitrary orthogonal basic function .we use the trigonometric function only to estimate the noise intensity .first we set the following constant which will be used to describe the rest term in the oracle inequalities .we set where we start with the sharp oracle inequalities .[ th.sec:mrs.1 ] assume that for the model the condition holds .then for any , the estimator of given in satisfies the following oracle inequality [ co.sec:oi.1 ] assume that for the model the condition holds .if the variance parameter is known , then for any , the estimator of given in with the truncate parameter satisfies the following oracle inequality we need to study the estimate .[ pr.sec : mapr.3 ] assume that in the model the unknown function is continuously differentiable .then , for any where . the proof of this proposition is given in section [ sec : pr ] .it is clear that in the case when were obtain that now using this proposition we can obtain the following inequality .[ th.sec:mrs.20 ] assume that for the model the condition holds and the unknown function is continuously differentiable .then the procedure with , for any , satisfies the following oracle inequality \label{sec : mrs.1 + } & + \varepsilon^2 \frac{\psi_{{\mathchoice{q,\varepsilon}{q,\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+(\vert\dot{s}\vert+1)^{2 } g_{{\mathchoice{1,q}{1,q}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+g_{{\mathchoice{2,q}{2,q}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}}{\delta}\,,\end{aligned}\ ] ] where now we study the robust risks defined in for the procedure . moreover , we assume also that the upper bound for the basis functions in may be dependent on , i.e. , such that for any [ th.sec:mrs.2 ] assume that for the model the condition holds and the unknown function is continuously differentiable. then robust risks of the procedure with , for any , satisfy the following oracle inequality where the term is such that under the conditions and for any and we study the asymptotically efficience properties for the procedure , with respect to the robust risks defined by the distribution family . to this endwe assume that the unknown function belongs to the following ellipsoid in \,:\ , \sum_{{\mathchoice{j=1}{j=1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}^{\infty}\,a_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,\theta^2_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,\le \r\}\ ] ] where \right)^{2i}\scriptstylej=0 \scriptscriptstylej=0 ] is the set of times continuously differentiable functions \to\bbr ] onto , i.e. 
since is a convex and closed set in ] and , moreover , so , setting , we obtain that taking into account now that we obtain and therefore , in view of as to the last term in this inequality , in appendix we show that for any now it is easy to see that where so , in view of lemma [ le.sec:app.3 ] and reminding that we obtain & = \frac{1}{v_{{\mathchoice{\varepsilon}{\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}}\ , \sum_{j=1}^{d}\,\frac{s^{*}_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{ } } } } } { s^{*}_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}+\,1 } = \frac{1}{v_{{\mathchoice{\varepsilon}{\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}}\ , \sum_{j=1}^{d}\ , \left ( 1 - \frac{j^k}{d^k_{{\mathchoice{\varepsilon}{\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{ } } } } } \right ) \,.\end{aligned}\ ] ] therefore , using now the definition , the inequality and the limit , we obtain that taking here limit as implies theorem [ th.sec:ef.1 ] .first we suppose that the parameters , in and in are known .let the family of admissible weighted least squares estimates given by . consider the pair \ ] ] where and satisfy the conditions in .denote the corresponding setimate as note that for sufficiently small the pair belongs to the set .[ th.sec:up.1 ] the estimator admets the following asymptotic upper bound * proof . * substituting and taking into account the definition one gets where .note now that for any the expectation and , in view of the upper bound , therefore , where .setting we obtain that for each one can check directly here that where the coefficient is given in .moreover , due to the condition therefore , to estimate the last term in the right hand of , we set it is easy to check that therefore , taking into account that by the definition of the pinsker constant in , we arrive at the inequality hence theorem [ th.sec:up.1 ] . combining theorem [ th.sec:up.1 ] and theorem [ th.sec:mrs.2 ] yields theorem [ th.sec:ef.2 ] .in this section we apply the model selection procedure the following problem from the statistical signals theory .assume that the observed through some noise unknown signal in the model has the following form where are orthonormal known basis functions in , but the signals number and the coefficients are unknown .the problem consists to estimation of on the observation . in the statistical radio - physicsthis mens that we need to detect the number of received signals in multipath connection channels . in this casethe coefficients are the signal amplitudes . for this problem we use the lse family defined as this estimate can be obtained from with the weights .the number of estimators is a some function of , i.e. , such that for any . as a risk for the signals numberwe use where the risk is defined in and is some integer number ( maybe random ) from the set . in this casethe cost function has the following form . so , for this problem the lse model selection procedure is defined as note that theorem [ th.sec:mrs.2 ] implies that the robust risks of the procedure with , for any , satisfy the following oracle inequality where the last term satisfies the property .in this section we report the results of a monte carlo experiment to assess the performance of the proposed model selection procedure .in we chose with , ] .we calculated the empirical quadratic risk defined as and the relative quadratic risk the expectations was taken as an average over replications , i.e. 
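as a hedged sketch of the signal-number selector just defined, one can score each candidate number of components with an unbiased-risk-style cost and keep the minimiser; the penalty used below (one noise-variance charge per retained coefficient, inflated by a small factor) is an illustrative stand-in for the paper's penalty term, not its exact form.

```python
def select_signal_number(theta_hat, sigma2, eps, delta=0.1):
    """Pick the number of signals d that minimises a penalised projection
    cost: squared coefficient estimates are rewarded, and every retained
    coefficient pays an (assumed) noise charge of (1+delta)*sigma2*eps**2."""
    best_d, best_cost = 0, float("inf")
    for d in range(1, len(theta_hat) + 1):
        fit = sum(t * t for t in theta_hat[:d])
        penalty = (1.0 + delta) * d * sigma2 * eps ** 2
        cost = -fit + penalty
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```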
we used the cost function with : empirical risks [ cols="<,^,>",options="header " , ] [ re.sec:siml.1 ] it should be noted that the lse procedure is more appropriate than shrinkage method for such number detection problem .first note that where . it should be noted that to study the last term in the right hand side of the inequality we set for any function from ] where .so , taking into account that we obtain that so , setting we obtain that so , we obtain that where .moreover , taking into account that for any , from \scriptstyleq\scriptscriptstyleq\scriptstylei , j=2 \scriptscriptstylei , j=2\scriptstylei\scriptscriptstylei\scriptstylej\scriptscriptstylej\scriptstyle0 \scriptscriptstyle0\scriptstylei\scriptscriptstylei\scriptstylej\scriptscriptstylej\scriptstyleq\scriptscriptstyleq\scriptstyle1 \scriptscriptstyle1\scriptstyle0 \scriptscriptstyle0\scriptstyle4 \scriptscriptstyle4\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle0 \scriptscriptstyle0\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle0 \scriptscriptstyle0\scriptstyle2 \scriptscriptstyle2\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle0 \scriptscriptstyle0\scriptstylet\scriptscriptstylet\scriptstyle1,\varepsilon\scriptscriptstyle1,\varepsilon\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle2,\varepsilon\scriptscriptstyle2,\varepsilon\scriptstyleq\scriptscriptstyleq\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle2,\varepsilon\scriptscriptstyle2,\varepsilon\scriptstyle0 \scriptscriptstyle0\scriptstyleq\scriptscriptstyleq\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyleq\scriptscriptstyleq\scriptstyle*\scriptscriptstyle*\scriptstyle*\scriptscriptstyle*\scriptstyleq\scriptscriptstyleq\scriptstyleq\scriptscriptstyleq\scriptstyle*\scriptscriptstyle*\scriptstyleq\scriptscriptstyleq\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyleq\scriptscriptstyleq\scriptstyle*\scriptscriptstyle*\scriptstyleq\scriptscriptstyleq\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstyleq\scriptscriptstyleq\scriptstyle*\scriptscriptstyle*\scriptstyleq\scriptscriptstyleq\scriptstyleq\scriptscriptstyleq\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstylej=[1/\varepsilon]+1 \scriptscriptstylej=[1/\varepsilon]+1\scriptstylej\scriptscriptstylej\scriptstyle\varepsilon\scriptscriptstyle\varepsilon\scriptstylej=[1/\varepsilon]+1 \scriptscriptstylej=[1/\varepsilon]+1\scriptstylej\scriptscriptstylej\scriptstylej=[1/\varepsilon]+1 \scriptscriptstylej=[1/\varepsilon]+1\scriptstylej\scriptscriptstylej\scriptstylej\scriptscriptstylej ] .note that for the continiously differentiable functions ( see , for example , lemma a.6 in ) the fourrier coefficients for any satisfy the following inequality + 1}{j=[1/\varepsilon]+1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,t^2_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{ } } } } \le 4\varepsilon\left(\int^{1}_{{\mathchoice{0}{0}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\vert\dot{s}(t)\vert \d t\right)^{2 } \le 4\varepsilon \vert\dot{s}\vert^{2 } \,.\ ] ] the term can be estimated by the same way as in , i.e. 
+ 1}{j=[1/\varepsilon]+1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,t^{2}_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{ } } } } \le 4\varepsilon^{3}\varkappa_{{\mathchoice{q}{q}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\vert\dot{s}\vert^{2}\,.\ ] ] moreover , taking into account that for the expectation we can represent the last term in as + 1}{j=[1/\varepsilon]+1}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\,(\eta^{a}_{{\mathchoice{j}{j}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}})^{2 } = \varepsilon^{2}\,\check{\varkappa}_{{\mathchoice{q}{q}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}(n-[1/\varepsilon ] ) + \varepsilon \ , b_{{\mathchoice{2,\varepsilon}{2,\varepsilon}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}(x ' ) \,,\ ] ] where the function is defined in and .we remind that \scriptstylej=[\sqrt{1/\varepsilon}]+1 \scriptscriptstylej=[\sqrt{1/\varepsilon}]+1\scriptstylej\scriptscriptstylej\scriptstyleq\scriptscriptstyleq\scriptstyleq\scriptscriptstyleq\scriptstyleq\scriptscriptstyleq\scriptstyleq\scriptscriptstyleq\scriptstyle*\scriptscriptstyle* ] and is the levy process of the form with nonzero constants and .we denote by and the distributions of the processes and on the skorokhod space ] we set where is the continuous part of the process in \scriptstyle0\le s\le t\scriptscriptstyle0\le s\le t\scriptstyles\scriptscriptstyles\scriptstyle2 \scriptscriptstyle2\scriptstyles\scriptscriptstyles\scriptstyle2 \scriptscriptstyle2\scriptstyle\{\delta x_{{\mathchoice{s}{s}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\in\varrho_{{\mathchoice{2}{2}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\gamma\}\scriptscriptstyle\{\delta x_{{\mathchoice{s}{s}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\in\varrho_{{\mathchoice{2}{2}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\gamma\} ] .* proof . *note that to show this proposition it suffices to check that for any any for taking into account that the processes and have the independent homogeneous increments , to this end one needs to check only that for any and to check this equality note that the process is the gaussian martingale .from here we directly obtain the squation . hence proposition [ pr.sec : app.1++ ] . in this sectionwe consider the following continuous time parametric regression model where with the unknown parameters and the process is defined in .note now that according to proposition [ pr.sec : app.1++ ] the distribution of the process is absolutely continuous with respect to the on ] .let be a prior density on having the following form : where is some continuously differentiable density in .moreover , let be a continuously differentiable function such that , for each , where for any measurable integrable function we denote & = \int_{\bbr^d}\,\int_{{\mathchoice{{{\cal x}}}{{{\cal x}}}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}\ , h(x,\theta)\,f(x,\theta)\,\phi(\theta)\d \p_{{\mathchoice{\xi}{\xi}{\lower.25ex\hbox{ } } { \lower0.25ex\hbox{}}}}(x)\ , \d \theta\,,\end{aligned}\ ] ] where \scriptstylej\scriptscriptstylej\scriptstylej\scriptscriptstylej\scriptstyle\xi\scriptscriptstyle\xi\scriptstylej\scriptscriptstylej$}}}}\,.\end{aligned}\ ] ] now by the bouniakovskii - cauchy - schwarz inequality we obtain the following lower bound for the quadratic risk to study the denominator in the left hand of this inequality note that in view of the reprentation therefore , for each , and taking into account that we get hence lemma [ le.sec:app.3 ] . bichteler k. , jacod j. 
_ calcul de malliavin pour les diffusions avec sauts : existence dune densit dans le cas unidimensionnel_. sminaire de probabilit , xvii , lecture notes in math . , * 986 * , springer , berlin , 1983 , 132157 . el - behery i.n . and macphie r.h .( 1978 ) maximum likelihood estimation of the number , directions and strengths of point radio sources from variable baseline interferometer data ._ ieee transactions on antennas and propagation _ , * 26 * ( 2 ) , 294 301 .flaksman , a.g .( 2002 ) adaptive signal processing in antenna arrays with allowance for the rank of the impule - response matrix of a multipath channel _ izvestiya vysshikh uchebnykh zavedenij .radiofizika _ , * 45 * ( 12 ) , 1064 1077 .galtchouk , l. and pergamenshchikov , s. ( 2013 ) uniform concentration inequality for ergodic diffusion processes observed at discrete times . _ stochastic processes and their applications _ , * 123 * , 91109 galtchouk , l. and pergamenshchikov , s. ( 2009 ) adaptive asymptotically efficient estimation in heteroscedastic nonparametric regression via model selection , _/ hal-00326910/fr/_. konev v. v. and pergamenshchikov s. m. sequential estimation of the parameters in a trigonometric regression model with the gaussian coloured noise ._ statistical inference for stochastic processes _ * 6 * ( 2003 ) 215235 .konev v. v. and pergamenshchikov s. m. nonparametric estimation in a semimartingale regression model .. _ journal of mathematics and mechanics of tomsk state university _ * 3 * ( 2009 ) 2341 .konev v. v. and pergamenshchikov s. m. nonparametric estimation in a semimartingale regression model .part 2 . robust asymptotic efficiency ._ journal of mathematics and mechanics of tomsk state university _ * 4 * ( 2009 ) 3145 .manelis , b.v .( 2007 ) algorithms for tracking and demodulation of the mobile communication signal in the conditions of the indiscriminating multipath propagation , _ radiotekhnika _ , 4 , 1621 ( in russian ) .trifonov a.p ., kharin a.v ., chernoyarov o.v ., kalashnikov k. s. ( 2015 ) determining the number of radio signals with unknown phases . _ international journal on communications antenna and propagation _ , * 5 * ( 6 ) , 367374 . trifonov a.p . , kharin a.v ., chernoyarov o.v .( 2016 ) estimation of the number of orthogonal signals with unknown parameters ._ journal of communications technology and electronics _ , * 61 * ( 9 ) , 1026 1033 .
we consider a nonparametric robust estimation problem in continuous time for functions observed on a fixed time interval , with noise given by a levy process with jumps . an adaptive model selection procedure is proposed . sharp non-asymptotic oracle inequalities for the robust risks are obtained and the robust efficiency is shown . we apply this procedure to the signal number detection problem in the multipath connection channel . _ msc : _ primary 62g08 , secondary 62g05 . _ keywords _ : non-asymptotic estimation ; robust risk ; model selection ; sharp oracle inequality ; asymptotic efficiency .
we utilize an elastic rod model described by that has been parameterized to represent dna .the equations of motion are as follows : in this system of equations , equation ( [ intro:1 ] ) represents the balance of force and linear momentum and equation ( [ intro:2 ] ) represents the balance of torque and angular momentum according to newton s laws. equations ( [ intro:3 ] ) and ( [ intro:4 ] ) are continuity relations expressed in a non - inertial reference frame . here , and are independent variables representing time and fiducial arclength , respectively .the functions that we wish to study are the four three - vector functions , , , and .the matrices , and and the scalar are as follows : matrix is the linear density of the moment of inertia tensor. matrices and represent the elastic properties of the rod ( : young s modulus , and : shear modulus , : torsional rigidity , and : bend stiffness ) . is the linear mass density and is equal to for an isotropic model of dna . the matrices have the following values for an isotropic model of dna : \label{intro:6}\\ c_{ik}&=&\left(\matrix{c_1 & 0 & 0 \cr 0 & c_2 & 0 \cr 0 & 0 & c_3 \cr}\right)\equiv \left(\matrix{8.16 & 0 & 0 \cr 0 & 8.16 & 0 \cr 0 & 0 & 21.6 \cr}\right)\times10^{-10}\left[\frac { km}{s^2}\right ] \label{intro:7}\\ d_{ik}&=&\left(\matrix{d_1 & 0 & 0 \cr 0 & d_2 & 0 \cr 0 & 0 & d_3 \cr}\right)\equiv \left(\matrix{2.7 & 0 & 0 \cr 0 & 2.7 & 0\cr 0 & 0 & 2.06 \cr}\right)\times10^{-28}\left[\frac { km^3}{s^2}\right ] \label{intro:8}\end{aligned}\ ] ] for isotropic bending and shear we will denote and .these definitions will be used below .the above values have been adapted from , . in biological terminology the components of to the three dna helical parameters describing translation and the components of correspond to the three dna helical parameters describing rotation .if we attach a local coordinate frame , denoted by , to each base - pair the axes will point in the direction of the major groove , minor groove and along the dna helical axis , respectively .the shape of the dna can be described by a three - dimensional vector function which is related to and by a suitable mathematical integration . in elastic rod terminologythe three - dimensional vector function gives the centerline curve of the rod as a function of and , showing only how the rod bends . to show the twist , shear and extension of the rod we must attach `` director '' frames made of the orthogonal triples at regular intervals along the rod .the director frames are evenly spaced along the rod when the rod is not extended and are all parallel to each other ( with pointing along the rod ) when the rod is not bent or twisted .any deformation of the rod will be indicated by a corresponding change in the orientation of the director frames .the vector ( ) is the translational velocity of in time ( space ) : the directors are of constant unit length , so the velocity of each director ( ) is always perpendicular to itself , and it must also be perpendicular to the axis of rotation of the local frame . we will denote of the unstrained state by . 
in general an unstrained rod can have any shape .the most simple case is when the rod is straight with unit extension , , because in the unstrained state : similarly , an unstressed elastic rod may have an intrinsic bend and/or twist , denoted by , which must be subtracted from the total to give the dynamically active part .the simplest case is a rod with no intrinsic bend or twist and .for short wave lengths ( section [ section:2 ] ) we will suppose that and . in section [ section:3 ] we will consider a straight rod with an intrinsic twist and .the results from this section are valid for any shape in which the curvature of the rod is much greater than the wavelength of the disturbance being propagated regardless of the intrinsic shape of the rod .let us search as usual for the equilibrium point of the system ( [ intro:1]-[intro:4 ] ) by setting , , and to constants .it is easy to see that an equilibrium point of equations ( [ intro:1]-[intro:4 ] ) is , and we shall suppose that each variable differs slightly from the equilibrium value so we retain only the linear terms in the equations .the linearized system ( [ intro:1]-[intro:4 ] ) is : in the system ( [ linear:1]-[linear:4 ] ) and further in this paper , , and denote the small deviations from the equilibrium values , e.g. . in the usual mannerwe search for a solution to ( [ linear:1]-[linear:4 ] ) using harmonic analysis .we assume that each variable depends on time and space as follows : where denotes one of the four vector variables from the system of equations ( [ linear:1]-[linear:4 ] ) and denotes its amplitude .is a scalar quantity corresponding to a frequency of oscillation and that is a vector quantity that describes a rotation , so there should be no confusion between the two variables . ]let us substitute ( [ linear:4a ] ) to the system ( [ linear:1]-[linear:4 ] ) : for short wavelength we search for solutions with , so in the limit we neglect all terms which contain cross products and the system ( [ linear:4b]-[linear:4e ] ) becomes : here division into longitudinal and transversal components has been obtained by defining and .equations ( [ linear:4k]-[linear:4n ] ) represent four types of waves that can propagate in the elastic rod with velocity , ( according to the order of equations above ) : * shear waves ( ) : \over 3.22 \cdot 10^{-15 } [ k / m]}\approx 5.03 \r a / ps\ ] ] * extension waves ( ) : \over 3.22 \cdot 10^{-15 } [ k / m]}\approx 8.2 \r a / ps\ ] ] * bend waves ( ) : \over 4.03 \cdot 10^{-34 } [ km]}\approx 8.2 \r a / ps\ ] ] * twist waves ( ) : \over 2\cdot 4.03 \cdot 10^{-34 } [ km]}\approx 5.0 \r a / ps\ ] ] these results were obtained for a rod with arbitrary shape because all terms that define the rod shape in equations ( [ linear:4b]-[linear:4e ] ) were omitted .so , if wavelength tends to zero ( is the least space parameter of the problem ) four wave types can propagate along the rod .it is well known that the measurement of wave velocity is a usual method for determining elastic properties of solids .so , wave velocity ( transversal and longitudinal ) and young s modulus ( shear modulus ) are uniquely determined by formulae similar to the ( [ linear:4p]-[linear:4s ] ) .hakim et al . determined the velocity of sound in dna . based on elastic constants obtained from , and used in this work ,the velocity is approximately two times less than that obtained by hakim et al . 
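the four velocities quoted above follow directly from the elastic constants listed earlier; the short script below reproduces them, with si-like units assumed throughout because the unit labels in the extracted text are garbled (the mass density and moment-of-inertia density are the values appearing in the velocity expressions above).

```python
import math

# constants as quoted in the text (units assumed: C in N, D in N*m^2,
# rho in kg/m, I in kg*m); the unit labels in the source are garbled
C1, C3 = 8.16e-10, 21.6e-10     # shear and stretch stiffness
D1, D3 = 2.7e-28, 2.06e-28      # bend and twist stiffness
rho = 3.22e-15                  # linear mass density
I1 = 4.03e-34                   # principal moment of inertia density

M_PER_S_TO_ANGSTROM_PER_PS = 1.0e-2   # 1 m/s = 0.01 Angstrom/ps

velocities = {
    "shear":     math.sqrt(C1 / rho),
    "extension": math.sqrt(C3 / rho),
    "bend":      math.sqrt(D1 / I1),
    "twist":     math.sqrt(D3 / (2.0 * I1)),
}
for name, v in velocities.items():
    print(f"{name:10s} {v * M_PER_S_TO_ANGSTROM_PER_PS:5.2f} Angstrom/ps")
```

running this reproduces the values quoted above, roughly 5.0 angstrom/ps for shear and twist and 8.2 angstrom/ps for extension and bend.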
.experiments to measure elastic properties of single dna molecules have been reported using scanning tunnelling microscopy , fluorescence microscopy , fluorescence correlation spectroscopy , optical tweezers , bead techniques in magnetic fields , optical microfibers , low energy electron point sources ( electron holography ) and atomic force microscopy ( afm ) .each method differs in the molecular properties probed , spatial and temporal resolution , molecular sensitivity and working environment , so each method gives close but not the same value of elastic constants .so if we evaluate extension / compression velocity using constants obtained from the data reported in the result will be two times slower than the value reported here and based on data from and .here we concentrate our attention on the case of the straight , twisted rod with no intrinsic bend or shear . in this sectionwe suppose that and . as we will see this case allows for a complete analytical analysis of extension / compression ( section [ sound ] ) , twist ( section [ twist ] ) , and bend / shear ( section [ coupled ] ) waves of arbitrary wave length. equations ( [ linear:1]-[linear:4 ] ) in component form are : it is easy to see that equations ( [ linear:11 ] ) and ( [ linear:17 ] ) ( also ( [ linear:14 ] ) and ( [ linear:20 ] ) ) are independent from all other equations and describe extension ( twist ) waves .these two wave types will be discussed in the next two sections . from equations ( [ linear:11 ] ) and ( [ linear:17 ] )it is easy to obtain two wave equations for the extension / compression waves : these small amplitude waves have velocity ( [ linear:4q ] ) and these equations have an harmonic solution[multiblock footnote omitted ] : where are arbitrary constants ( wave amplitudes ) and .using elastic constants obtained from data in and the velocity of sound in dna is equal to ( see equation ( [ linear:4q ] ) ) and dispersion is linear .this is the velocity of propagation of harmonic waves in dna according to the linear approximation . in the linear approximationthe amplitude of the wave must be small , but it can have any wavelength . in the same manner equations ( [ linear:14 ] ) and ( [ linear:20 ] ) yield two wave equations which describe twist waves : linear solutions of these equations can be written as : where are the arbitrary constants ( wave amplitudes ) and .these solutions describe twist waves with velocity ( see equation ( [ linear:4s ] ) ) . in the general case, the solutions of these equations will be dalembertian waves described by ( [ sec1:11]-[sec1:12 ] ) .again the dispersion law is linear . 
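since the displayed equations in this subsection are garbled in extraction, the following is a hedged reconstruction of the generic form they take for the extension mode (symbol names are assumptions; the twist equations have the same structure, with the longitudinal strain replaced by the twist density and the velocity by the twist-wave speed):

```latex
\[
  \frac{\partial^{2}\Gamma_{3}}{\partial t^{2}}
    \;=\; c_{\parallel}^{2}\,\frac{\partial^{2}\Gamma_{3}}{\partial s^{2}},
  \qquad c_{\parallel}=\sqrt{C_{3}/\rho},
\]
\[
  \Gamma_{3}(s,t)\;=\;A_{+}\,e^{\,i(ks-\omega t)}+A_{-}\,e^{\,i(ks+\omega t)},
  \qquad \omega = c_{\parallel}\,k ,
\]
```

so that, as stated above, the dispersion law for both wave types is linear and the general solution is a superposition of d'alembert waves travelling in the two directions.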
substituting ( [ linear:4a ] ) into the remaining equations of system ( [ linear:9]-[linear:20 ] ) yield : \label{linear:42}\\ -i\omega i\vec\omega_\bot & = & ikd\vec\omega_\bot+ d\left[\vec\omega_0\times\vec\omega_\bot\right]+ c\left[\vec\gamma_0\times\vec\gamma_\bot\right]\label{linear:43}\\ -i\omega\vec\gamma_\bot + \left[\vec\omega_\bot\times\vec\gamma_0\right ] & = & ik\vec\gamma_\bot + \left[\vec\omega_0\times\vec\gamma_\bot\right]\label{linear:44}\\ -i\omega\vec\omega_\bot + \left[\vec\omega_\bot\times\vec\omega_0\right]&= & ik\vec\omega_\bot \label{linear:45}\end{aligned}\ ] ] here , the definitions , , and are used .this is an homogeneous system of linear equations with unknowns , , and .solutions other than the trivial solution exist only if the determinant of the coefficients is zero .this condition is satisfied if : let us discuss the untwisted rod ( )-[linear:20 ] ) consists of two independent subsystems .equations ( [ linear:9 ] , [ linear:13 ] , [ linear:15 ] , [ linear:19 ] ) and ( [ linear:10 ] , [ linear:12 ] , [ linear:16 ] , [ linear:18 ] ) present two different polarizations of bend / shear wave . ] . equation ( [ linear:46 ] ) yields : it will be convenient to use dimensionless variables ( marked by asterisk ) , in this section and define .equation ( [ linear:47 ] ) becomes(asterisks are omitted ) : with solutions : ^ 2k^2+\gamma^2_0 } \pm \sqrt{[g-1]^2k^2+\gamma^2_0 } \right\}\ ] ] there are a total of four solutions .the sign before the braces merely changes the direction of propagation and will not be discussed further .the sign between the roots produces two different branches of solutions as presented in fig.[figure:1 ] . the `` upper '' branch .] corresponds to bend waves and the `` lower '' branch to shear waves . for a rod of infinite length, any value of can be chosen that satisfies equation ( [ linear:48 ] ) , but for a rod of finite length there will be an additional constraint that must be satisfied .if the rod is fixed at both ends then the usual restriction of having a node at each end applies .this introduces the constraint between the length of the rod , , and the possible wavelengths , , so that only discrete points along either sub - branch in figure 1 will be observed .the primary results are that , in the linear approximation , four different types of waves can propagate through a uniform elastic rod .these waves correspond to extension(compression ) , twist , bend and shear .an extension or twist wave will propagate without exciting other modes or changing shape , and it has a linear dispersion relation .bend and shear waves behave rather differently and have a nonlinear dispersion relation .each type obeys a dispersion law that describes two additional sub - branches . 
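to make the finite-length restriction concrete, with a node imposed at each end only wavelengths of the form 2L/n (equivalently wavenumbers n pi / L) are admissible, so each dispersion branch is sampled at discrete points. the short worked example below combines this with the extension-wave speed from the linear theory, taking an assumed 64 base-pair fragment with the canonical 3.4 angstrom rise per base pair as an illustration (both of these numbers are assumptions, not from the text).

```python
v_extension = 8.2            # Angstrom/ps, from the linear theory above
n_bp, rise = 64, 3.4         # assumed fragment: 64 bp, 3.4 Angstrom per bp
L = n_bp * rise              # rod length in Angstrom

for n in (1, 2, 3):          # standing-wave modes with nodes at both ends
    wavelength = 2.0 * L / n              # lambda_n = 2L / n
    frequency = v_extension / wavelength  # nu_n = v / lambda_n, in 1/ps
    print(f"mode {n}: lambda = {wavelength:6.1f} A, "
          f"nu = {frequency:.4f} 1/ps ({frequency * 1e3:.1f} GHz)")
```

for this assumed fragment the fundamental mode comes out close to twenty gigahertz, which sets the scale of the driving frequency for the molecular dynamics test proposed below.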
utilizing constants suitable for dna we find that , in the limit of small wavelengths , extension and bend propagate with a velocity of approximately 8 / ps and twist and shear propagate with a velocity of approximately 5 / ps .is is also significant that the dispersion relation for bend and shear waves is coupled with the inherent twist of dna .we propose that these physical phenomena enable proteins interacting with dna to accomplish highly sophisticated tasks .for example the difference in extension and twist velocities can be utilized to measure the distance between two points on the dna .since the dispersion law for bend and shear waves depends on intrinsic twist , a mechanism for measuring dna topology exists because of the relation between twist , linking number and writhe .other protein - protein communications can certainly be established to assist cellular mechanisms .we suggest a molecular dynamics simulation experiment to check the correspondence between the linear theory of the propagation of waves in elastic rods and an all atom simulation of dna . in all - atommolecular dynamics simulations of dna the number of base pairs that can be simulated for a significant length of time is from tens to less than hundreds of base pairs . for a simulation of dna with fixed endsit is sufficient to apply a sharp impulse ( function excitation ) to one end of the dna and measure the time of propagation of this disturbance to the other end as a measure of the velocity of extension / compression waves in dna according to equation ( [ linear:4q ] ) .the propagation of twist can be similarly tested by applying a torque . to measure dispersionone must make some simple calculations before the simulation and then excite one end of the dna with a driving force of the appropriate frequency . in this mannera standing wave can be established in the dna .the wave frequency and wave number ( ) are related by the well known relation : where is the wave frequency , is the wave velocity and is the wavelength . in this casethe wave length ( wave number ) is easy to evaluate : and is the frequency of the driving oscillation .the values obtained from simulation can be compared with the number obtained from formula ( [ linear:48 ] or [ linear:48 ] ) as a means of correcting our choice of constants .we express our thanks to dr .jan - ake gustafsson , dr .iosif vaisman and dr .yaoming shi for stimulating and helpful discussions .we would also like to thank yaoming shi for a critical reading of the manuscript .this work was conducted in the theoretical molecular biology laboratory which was established under nsf cooperative agreement number osr-9550481/leqsf ( 1996 - 1998)-si - jfap-04 and supported by the office of naval research ( no-0014 - 99 - 1 - 0763 ) .30 john j. tyson , bela novak , garrett m. odell , kathy chen and dennis thron . chemical kinetic theory : understanding the cell - cycle regulation .trends biochem .21 , 89 - 96 , ( 1996 ) .landau l.d ., lifshitz e.m .fluid mechanics .pergamon press .landau l.d ., lifshitz e.m .electrodynamics of continous media .pergamon press .j.c.simo , j.e.marsden , p.s.krishnaprasad , archive for rational mechanics and analysis , * 104 * , 125 - 184,(1988 ) .a ) yaoming shi , andrey e. borovik , and john e. hearst .elastic rod model incorporating shear extension , generalized nonlinear schrdinger equations , and novel closed - form solutions for supercoiled dna . j.phys . 
* 103*,(8),3166 - 3183 , ( 1995 ) .b ) martin mcclain , yaoming shi , t.c.bishop , j.e.hearst .visualization of exact hamiltonian dynamics solutions .( to be published ) .j.d.moroz , p.nelson , entropic elasticity of twist - storing polymers , macromolecules , 31 , 6333 - 6347 , ( 1998 ) . c. bouchiat , m. m .elasticity model of a supercoiled dna molecule , phys .* 80*,(7),1556 - 1559 , ( 1998 ) .m.b.hakim , s.m.lindsay , j.powell .the speed of sound in dna .biopolymers , 23 , 1185 - 1192 ( 1984 ) guckenberger r. , heim m. , cevc g. , knapp h.f . , wiegr w. , hillebrand a. science * 266 * , 1994 , 1538 - 1540 .yanagida m. , hiraoka y. , katsura i. , cold spring harbor symp .. biol . * 47 * , 1983 , 177 .wennmalm s. , edman l. , rigler r. proc .sci . * 94 * , 1997 , 10641 - 10646 .smith s.b ., cui y. , bustamante c. science . * 271 * , 1996 , 759 - 799 .wang d.w . , yin h. , landick r. , gelles j. , block s.m .j. * 71 * , 1997 , 1335 - 1346 .smith s.b ., finzi l. , bustamante c. science . * 258 * , 1992 , 1122 - 1126 .strick t.r . ,allemand j.f ., bensimon d. , bensimon a. , croquette v. science . *271 * , 1996 , 1835 - 1837 ., lebrun a. , heller ch. , lavery r. , viovy j. , chatenay d. , caron f. science . * 271 * , 1996 , 792 - 794 .fink h. w. , sch ch .science . * 398 * , 1999 , 407 - 410 .hansma h.g . ,sinsheimer r.l . , li m.q ., hansma p.k .nucleic acids res . *20 * , 1991 , 3585 - 3590 .bishop t. , zhmudsky o. currents in computational molecular biology 2001 .les publications crm , montreal , 2001 .information transmission along dna .p. 105 - 106 .isbn 2 - 921120 - 35 - 6 .
it is shown that information transmission inside a cell can occur by means of mechanical waves transmitted through dna . the propagation of the waves is strongly dependent on the shape of the dna thus proteins that change the shape of dna can alter signal transmission . the overall effect is a method of signal processing by dna binding proteins that creates a `` cellular communications network '' . the propagation of small amplitude disturbances through dna is treated according to the mechanical theory of elastic rods . according to the theory four types of mechanical waves affecting extension(compression ) , twist , bend or shear can propagate through dna . each type of wave has unique characteristic properties . disturbances affecting all wave types can propagate independently of each other . using a linear approximation to the theory of motion of elastic rods , the dispersion of these waves is investigated . the phase velocities of the waves lies in the range using constants suitable for a description of dna . the dispersion of all wave types of arbitrary wave length is investigated for a straight , twisted rod . based on these findings , we propose all - atom numerical simulations of dna to investigate the propagation of these waves as an alternative measure of the wave velocity and dispersion analysis . * introduction * our treatment of dna is based on the hypothesis that such a well organized system as a cell must have a highly sophisticated system of communications . for instance , the cell cycle is the result of a series of complex self organized processes that must be well orchestrated . a model has been developed to demonstrate how these events may be initiated by critical concentrations of specific proteins which shift equilibrium to favor the advance of the cell cycle . however , it is also known that the cell - cycle can be disrupted if conditions are not suitable . similar arguments apply to virtually all cellular processes and in order to achieve the required checks and balances a method of communication is necessary . is there the possibility that information is transmitted through dna electromagnetically instead of mechanically ? in such case dna will function as a transmission line which requires total internal reflection ( tir ) of the radiation within the dna . to achieve tir , the wavelength of the radiation must be 5 - 10 times less than the diameter of the transmission line . since the diameter of the dna is close to 20 the radiation must have a wavelength close to 2 . this wavelength is close to atomic dimensions so diffraction should dominate . furthermore , the energy associated with this wavelength radiation is on the order of which is sufficient to destroy chemical bonds and therefore not easily managed biologically . for these reasons , we believe that the communication will be achieved mechanically . the remainder of the paper will demonstrate that the mechanical properties of dna have the necessary time and spatial dimension to support the propagation of information while interactions between proteins and dna provide a mechanism by which this information is processed . this manuscript is based on a continuous medium model of dna . such an approach is well known and widely used to describe solids , liquids and gases ( see , for example ) . thus , when we speak of infinitely small elements of volume , we shall always mean those which are `` physically '' infinitely small , i.e. 
very small compared with the wavelength or the radius of curvature under consideration , but large compared with the distance between atoms in dna .
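to make the molecular dynamics test proposed above concrete , the expected time scales can be estimated with a short back - of - the - envelope calculation . in the sketch below the rise per base pair of b - dna ( 3.4 angstrom ) is a standard value , while the wave speed is an assumed order - of - magnitude figure , since the unit attached to the velocities quoted in the results was lost in extraction ; the standing - wave relation f = c / lambda with lambda = 2l / n is the one stated in the text .

```python
# Back-of-the-envelope estimate of pulse flight time and driving frequency for
# the proposed all-atom MD test.  The wave speed c_assumed is illustrative only.
RISE_PER_BP = 3.4  # Angstrom per base pair, canonical B-DNA value

def flight_time_ps(n_bp, c):
    """Time (ps) for a pulse to traverse an n_bp segment at speed c (Angstrom/ps)."""
    return n_bp * RISE_PER_BP / c

def standing_wave_frequency_THz(n_bp, c, mode=1):
    """f = c / lambda with lambda = 2*L/mode for a rod fixed at both ends (1/ps == THz)."""
    wavelength = 2.0 * n_bp * RISE_PER_BP / mode
    return c / wavelength

if __name__ == "__main__":
    c_assumed = 10.0  # Angstrom/ps -- order-of-magnitude assumption, not the paper's value
    for n_bp in (24, 50, 100):
        print(f"{n_bp:4d} bp: flight time {flight_time_ps(n_bp, c_assumed):6.1f} ps, "
              f"fundamental standing-wave frequency "
              f"{standing_wave_frequency_THz(n_bp, c_assumed):6.3f} THz")
```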
in ( * ? ? ? * th .1 ) , the authors derived the coverage probability for the downlink transmission of a typical user in the single - tier multi - cell network by assuming that the desired channel undergoes rayleigh fading and that the base stations ( bss ) are spatially distributed according to homogeneous ppp as : where and are real non - negative quantities - tier systems as well . ] . here , is the path - loss exponent , is the intensity of the ppp , is the signal - to - interference - plus - noise ratio ( sinr ) threshold , is the constant transmit power , and is the noise variance .the value of is given by the formula : , ] is expectation taken with respect to interferers channel distribution .the integral does not have closed - form expression for arbitrary values of .for special cases : where is the q - function . in practice , the value of can range anywhere from 1.6 to 6.5 , depending on the environment .the path - loss exponent for outdoor urban area is around 3.7 to 6.5 ; while for indoor single floor office buildings , it is around 1.6 to 3.5 ; for home environment , it is 3 ; whereas for stores , it can be from 1.8 to 2.2 .( [ eqn : coverage ] ) can be directly evaluated by numerical integration techniques , but qualitative insight is lost in this process .this can instead be replaced by approximations , which allows sufficiently accurate quantitative predictions , while at the same time offers qualitative insight into the relationship between various parameters .thus , the motivation of this letter is to find simple expressions that allow us to approximate the value of coverage probability for these diverse path - loss exponents . in this letter, we investigate a few approximations related to the integral ( [ eqn : coverage ] ) and discuss their convergence properties . while we focus on cases when , much of the analysis carries over to cases when as well .there can be two limiting cases for the integral ( [ eqn : coverage ] ) : when and when .although the solutions for these two cases are trivial , they have important physical significance .physically , can occur when the bs intensity and hence the term in the integral can be neglected .this is referred to as the noise - limited case .similarly , can occur when , so that the term can be neglected .this is referred to as the interference - limited case .these cases can also be used as initial approximations when or .for these two cases , the integral ( [ eqn : coverage ] ) assumes simple closed form solutions : to evaluate the integral for the case when , we have used the fact that , where , , ( * ? ? ?. 3.326 , eqn .2 , p. 337 ) .we will use this formula frequently in the rest of the letter .the simplicity of these expressions inspires the following simple , closed form of limiting approximation for the cases when both and are non - zero : ^{-1}. \label{eqn : limiting - approx}\ ] ] we see that ( [ eqn : limiting - approx ] ) reduces to one of the cases given in ( [ eqn : limit - a - zero ] ) or ( [ eqn : limit - b - zero ] ) as or , respectively , and is exact for .the formula can be used when and are comparable to each other , but at the expense of accuracy . for the case when both and are positive, one possible strategy of arriving at an integral approximation is to expand one of the exponential terms as where the lagrange form of the remainder is such that . 
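before turning to the series expansions , the two limiting closed forms derived above can be checked numerically . in the sketch below the coverage integral is abstracted as i(a , b , alpha ) = int_0^inf exp(-a x - b x^{alpha/2 } ) dx , with a standing in for the interference / geometry term and b for the noise term ; the physical constants and prefactors of the paper are deliberately omitted , so this is only a check of the mathematical identities , not of the full coverage expression .

```python
# Numerical check of the two limiting closed forms of the abstracted integral
#   I(a, b, alpha) = int_0^inf exp(-a*x - b*x**(alpha/2)) dx
#   b -> 0 (interference-limited):  I = 1/a
#   a -> 0 (noise-limited):         I = (2/alpha) * Gamma(2/alpha) * b**(-2/alpha)
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def coverage_integral(a, b, alpha):
    val, _ = quad(lambda x: np.exp(-a * x - b * x ** (alpha / 2.0)), 0.0, np.inf)
    return val

def interference_limited(a):
    return 1.0 / a

def noise_limited(b, alpha):
    return (2.0 / alpha) * gamma(2.0 / alpha) * b ** (-2.0 / alpha)

if __name__ == "__main__":
    alpha = 3.7  # illustrative outdoor-urban path-loss exponent
    print("b -> 0:", coverage_integral(1.3, 1e-9, alpha), "vs", interference_limited(1.3))
    print("a -> 0:", coverage_integral(1e-9, 0.8, alpha), "vs", noise_limited(0.8, alpha))
```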
since , absolute value of can be upper bounded by if we expand the term appearing in ( [ eqn : coverage ] ) and integrate term wise , we obtain where the remainder term .neglecting the remainder term gives us our first approximation eq . ( [ eqn : first - approx - series ] ) reduces to ( [ eqn : limit - b - zero ] ) when , as such it is a refinement of ( [ eqn : limit - b - zero ] ) .the authors in arrived at the first two terms of the series ( [ eqn : first - approx - series ] ) using integration by parts ; but they did not elaborate on its validity . if we take infinite number of terms , the series ( [ eqn : first - approx - series ] ) is not convergent for for arbitrary values of and ( see * appendix a * ) .nevertheless , this form of approximation is appropriate when . to quantify the regionin which this approximation is valid , we first upper bound the remainder term in ( [ eqn : first - approx ] ) as the second inequality is from ( [ eqn : lagrange - bound ] ) where .if we take terms of the approximating series , then for any given error tolerance , we require that the integral error be . using the upper bound for in this expression , we obtain the bound for in terms of as where . substituting the expressions for and in ( [ eqn : bound-1 ] ) , we get thus , we obtain the largest noise variance above which the error of approximation becomes unacceptable for given number of terms and error tolerance . however , it is not obvious as to what happens as is increased to infinity . for this, we have the limit hence , for , we have the limit therefore , we have the following largest value of as below which the approximation will be valid for any given . in the integral ( [ eqn : coverage ] ) ,if we consider expanding the term instead and perform term wise integration , we get where the remainder term . as before , if we neglect the remainder term , we obtain our second approximation as eq . ( [ eqn : second - approx - series ] ) reduces to ( [ eqn : limit - a - zero ] ) when , as such it is a refinement of ( [ eqn : limit - a - zero ] ) . the remainder term in ( [ eqn : second - approx ] ) can be bounded as the second inequality is from ( [ eqn : lagrange - bound ] ) where .this form of approximation is suitable for cases when . to find the precise region for which this approximation is valid , consider again an error tolerance of and terms of the approximating series . since we require that the integral error be bounded by the error tolerance , , this leads us to a bound for in terms of as where .substituting the expressions for and gives us the smallest value of noise variance below which the error becomes unacceptably large .as in previous case , as tends to infinity , we have hence for , we have .thus , the smallest value of above which the approximation will be valid for any is this result implies that the infinite series for ( [ eqn : second - approx - series ] ) is convergent when .this is indeed the case ( see * appendix b * ) . 
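the two truncated series just described can also be written down explicitly and compared against direct quadrature . the sketch below uses the same abstracted integral i(a , b , alpha ) as before , with the paper 's constants and prefactors omitted ; the truncation bounds derived in the text are not reproduced , and the number of retained terms is simply fixed by hand .

```python
# Term-wise-integrated series approximations of
#   I(a, b, alpha) = int_0^inf exp(-a*x - b*x**(alpha/2)) dx
# Expanding exp(-b*x**(alpha/2)) refines the interference-limited closed form
# (useful for small noise); expanding exp(-a*x) refines the noise-limited one
# (useful for large noise).
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma

def coverage_integral(a, b, alpha):
    val, _ = quad(lambda x: np.exp(-a * x - b * x ** (alpha / 2.0)), 0.0, np.inf)
    return val

def series_small_noise(a, b, alpha, n_terms=4):
    return sum((-b) ** n / factorial(n) * gamma(n * alpha / 2.0 + 1.0)
               / a ** (n * alpha / 2.0 + 1.0) for n in range(n_terms))

def series_large_noise(a, b, alpha, n_terms=4):
    return sum((-a) ** n / factorial(n) * (2.0 / alpha)
               * gamma(2.0 * (n + 1) / alpha) * b ** (-2.0 * (n + 1) / alpha)
               for n in range(n_terms))

if __name__ == "__main__":
    alpha = 4.0
    print("small b:", coverage_integral(1.0, 0.05, alpha), series_small_noise(1.0, 0.05, alpha))
    print("large b:", coverage_integral(0.05, 1.0, alpha), series_large_noise(0.05, 1.0, alpha))
```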
here, we would like to find an approximation for the case when and are comparable to each other , that is , when the system is neither noise limited nor interference limited .we will try to obtain an approximation using laplace s method .let .the unique global minima of is at , which is where this method is usually applied .if we take the first order taylor expansion of about , then it will merely result in the approximation .thus , it can be advantageous to consider a point other than the global minima for the taylor expansion .the taylor expansion of about is where .the remainder term in lagrange form is where for some .changing the variable in ( [ eqn : coverage ] ) from to , we obtain let , , and ; completing the square , we get now , let , and expanding its exponent , we have , where such that . its absolute value can be upper bounded as . thus the right integralcan be split as where and is the q - function of .the remainder term is .if we neglect this remainder term , then we obtain our required approximation as the expression is exact when .unlike the previous two approximations , here the accuracy of approximation in general depends on the value of chosen .we can bound the remainder term as the last step follows as such : since the function to be integrated is non - negative , .when and , both and ; hence , and therefore , the maxima of occurs in negative axis .so , where the last equality follows by changing the variable from to .we have . when , is monotonically decreasing and its maximum value is attained at .putting , we have putting , we obtain where and is the lower incomplete gamma function .empirical considerations have led us to choose .note that the form of this value is similar to the limiting approximation .. , width=264 ] changes.,width=264 ] here we compare the four approximations to integral ( [ eqn : coverage ] ) with its value obtained via numerical integration . since both parameters and depend on and ,we maintain them both at db .we assume , i.e. , one bs on average per circular area of radius 500 meters .we only vary .the interferer s channel distribution is assumed to be exponential with mean .1 plots the change in absolute error and coverage probability as snr = changes for .for both the interference - limited and noise - limited cases , we take the summation up to four terms from ( [ eqn : first - approx - series ] ) and ( [ eqn : second - approx - series ] ) , respectively . for the interference - limited case ( [ eqn : second - approx - series ] ) , the error is near zero only for snr greater than certain value .likewise , for the noise - limited case ( [ eqn : first - approx - series ] ) , the error is near zero only for snr smaller than some value . for intermediate case, the laplace approximation gives an error less than 0.005 .fig . 2 plotsthe maximum absolute error for the limiting and laplace approximations . in general , the error amplitude tends to increase with , except for where the error is zero for laplace .the error amplitude decreases as increases , while it remain unchanged when or is changed .this is because when is increased , the system becomes interference limited at higher noise variance ( i.e. 
, at lower snr ) .thus the coverage probability curve in fig .[ fig : alpha3 ] shifts to the left .this shift is quantitatively given by ( 10 ) and ( 11 ) .however , the maximum coverage probability , given by , which occurs when ( and thus at high snr ) , remains unchanged .similarly , since the transmit powers of all the bss are assumed to be equal in , changing the value of transmit power is equivalent to scaling the noise variance by in the sinr expression .thus , the plot of the coverage probability versus the in db results in shifting of the curves towards the left as is increased , without changing its shape or amplitude .since the coverage probability curve has only shifted and not changed its shape , the maximum error which occurs around the edge of the plateau of the coverage probability will also remain unchanged .we have examined four different ways of approximating the coverage integral .the limiting approximation is useful as an initial rough approximation .we can use the approximation for interference - limited case or the noise - limited case so long as the parameters and satisfy some inequality relationship . for intermediate cases ,we recommend the use of laplace approximation . to check the convergence of the series ( [ eqn : first - approx - series ] )we will apply the ratio test : converges if . for our case , using the identity , we have using the identity : , we obtain for large and fixed , , so we have therefore , we finally have h.s .dhillon , r.k .ganti , f. baccelli , j.g .andrews , `` modeling and analysis of -tier downlink heterogeneous cellular networks , '' _ ieee j. sel .areas commun . _ ,3 , pp . 550560 , april 2012 .
this letter gives approximations to an integral that appears in the formula for the downlink coverage probability of a typical user in poisson point process ( ppp ) based stochastic geometry frameworks . four different approximations are studied . for systems that are interference - limited or noise - limited , conditions are identified under which the approximations are valid . for intermediate cases , we recommend the use of the laplace approximation . numerical results validate the accuracy of the approximations . integral approximations , coverage probability , poisson point process
noise is usually considered as having a corrupting effect on meaningful signals .there is however one well known counter example to this widespread belief ; the _ stochastic resonance _ ( sr ) phenomenon . in this case ,addition of a random interference signal ( noise ) to a weak , subthreshold stimuli , may enhance its detection .originally introduced in the framework of physics , stochastic resonance is now known to take place in several sensory systems from the cricket cercal sensory system , to crayfish mechanoreceptors , the somatosensory cortex of the cat and the human visual cortex in order to facilitate detection of weak signals ( for a review see ) .in addition there is evidence that stochastic resonance may be used not only at the level of sensory processing , but also in central cognitive processes . in a psychophysical study by usher and feingold ,the memory retrieval of arithmetical multiplication was found to be enhanced by the addition of noise . in this article, we feed a simplified model of a cortical column with two time - varying input signals .we first show that for a broad set of computation based on those input signals , addition of a small amount of noise enhances the computational power of the system .the location where the noise strength is optimal lies approximately where the network reacts the strongest to a change in the mean input ( maximum susceptibility ) .we then set the connectivity to zero ( with appropriate scaling of the statistics of the input to the neurons ) .although the simplest task ( addition of both input signals ) can be achieved with a similar accuracy to that obtained with a connected network , a multiplicative task can only be solved if the population of neurons is connected .the stochastic resonance effect is thus seen to take place at the _ system level _ rather than at the single cell level .it is an emergent property of the neuronal assemblies .we consider in our simulations networks of cells ( leaky integrate - and - fire neurons ) . the connectivity matrix is fixed and every neuron receives input from excitatory and inhibitory presynaptic neurons randomly chosen among the neurons in the network .the system is made out of excitatory and inhibitory neurons , reflecting the ratio of pyramidal cells to interneurons in cortical tissue .this excess of excitatory neurons is approximately balanced by the greater efficacy of synaptic transmission for inhibition ; in our model six times bigger than for excitation : and .this approximate balance between excitation and inhibition is thought to take place at a functional level in cortical areas .sparsely - connected networks of spiking neurons of this type have been fully described in terms of their dynamical behaviour .the dynamics of the leaky integrate - and - fire neurons is described by the following equation : where describes the membrane potential of neuron with respect to its equilibrium value , corresponds to its effective membrane time constant and is the effective input resistance of the neuron stimulated by a total input .we add to this equation a threshold condition ; if the membrane potential of the neuron exceeds the critical value , a spike is emitted and after a refractory period , the integration start again from the reset potential . every time the neuron receives an action potential from a presynaptic pyramidal cell ( resp .interneuron ) , its membrane potential is depolarized according to the value of the synaptic efficacy for excitation ( resp . 
for inhibition, the neuron is hyperpolarized by an amount ) .the input a neuron will get from within the network can thus be written as : where is the ensemble of presynaptic neurons , the time neuron fires its spike and is a short transmission delay .we can now decompose the external stimulation into two contributions .first , we model all noise sources , ranging from synaptic bombardment from neurons outside the network to different sources of noise diversely located at the level of synaptic transmission , channel gating , ion concentrations , membrane conductance to name but a few .all these noise sources are grouped into a term with a mean depolarisation and a white noise component defined by its standard deviation .we can define the _ susceptibility _ of the network as the sensitivity of the population spiking rate upon a change in the mean depolarisation : .we constrained our simulation space to small mean depolarisation so that in absence of the noisy component , the network exhibits no spiking activity ( subthreshold regime ) . and .in addition two randomly generated subpopulations , each composed of of the total number of neurons , receive test inputs and .the readout _ sees _ all neurons in the network and is trained to match a function of the test input and ( see the methods section for details ) ., width=309 ] then , we inject two test signals and , each to randomly chosen neurons in the network .both inputs share the same statistical properties ; they are constant over a time interval and then they switch to new randomly chosen values and remain constant for the next time interval . at each transition time , the new values are chosen uniformly over the interval ] . in our simulations we fixed so that the transient period after a transition has vanished .the set of functions that are under consideration in the present study are ; the addition , the multiplication which plays a crucial role in the transformation of object locations from retinal to body - centered coordinates and two polynomials of degree two and , which can be seen as the nonlinear xor paradigm .parameters were optimized using a first simulation ( learning set ) lasting 100 seconds ( 100000 time steps of simulation ) and were kept fixed afterwards .the performance measurements reported in this paper are then evaluated on a second simulation of 100 seconds ( test set ) .we compare our results to a simple two parameter readout .such a readout only adjusts to the mean of the time series .the reconstruction error for such a readout equals the variance of the time series .the performance with the full readout are therefore expressed as a gain ( in percent ) over the trivial prediction : , where is the error introduced above . in the simulation of fig.[control ] , all connections in the network were removed . in order to compensate the loss of input , we changed the background input so that the neurons receive an input with the same statistical properties ( mean and variance ) .the adapted input in the network with no connection ( nc ) is thus : for the mean and for the variance term . 
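the core single - neuron ingredient of the model , a leaky integrate - and - fire neuron driven by a subthreshold mean input plus white noise , can be sketched in a few lines . the code below simulates one unconnected neuron only ( the recurrent network is omitted ) , and all parameter values ( membrane time constant , threshold , reset , refractory period , input amplitudes ) are illustrative assumptions rather than the values used in the paper ; the printed rate difference between two subthreshold input levels is a crude probe of the susceptibility defined above , and its non - monotonic dependence on the noise amplitude is the stochastic - resonance signature discussed in the text .

```python
# Single leaky integrate-and-fire neuron with noisy drive (Euler-Maruyama).
# All parameters are illustrative assumptions.
import numpy as np

def lif_spike_count(mu, sigma, tau=20.0, theta=20.0, v_reset=0.0,
                    t_ref=2.0, dt=0.1, t_total=5000.0, seed=0):
    """Simulate dV = ((mu - V)/tau) dt + sigma*sqrt(dt/tau)*xi with threshold/reset."""
    rng = np.random.default_rng(seed)
    v, refrac, spikes = 0.0, 0.0, 0
    for _ in range(int(t_total / dt)):
        if refrac > 0.0:
            refrac -= dt
            continue
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= theta:
            spikes += 1
            v = v_reset
            refrac = t_ref
    return spikes

if __name__ == "__main__":
    mu_off, mu_on = 10.0, 14.0          # both subthreshold (theta = 20 mV)
    print(" sigma   rate_off   rate_on   difference (Hz)")
    for sigma in (1.0, 3.0, 6.0, 10.0, 20.0):
        r_off = lif_spike_count(mu_off, sigma) / 5.0       # 5 s of simulated time
        r_on = lif_spike_count(mu_on, sigma, seed=1) / 5.0
        print(f"{sigma:6.1f} {r_off:9.2f} {r_on:9.2f} {r_on - r_off:9.2f}")
```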
is the mean population rate in the connected network ( see ) .simulation results were obtained using the simulation software nest .we want to know how noise influences the ability of a recurrent network of spiking neurons to process information and to perform a series of computational tasks .in particular we are interested in effects related to stochastic resonance .we therefore relate the optimal noise level ( where the system has a maximum performance ) to dynamical properties of the network . from that perspective , we analyze how performance in a series of computational tasks based on both test signals and are affected by the noise level . in a first series of simulations , we measure the gain over the trivial prediction for three different functions of the test signals ; , and ( see figure [ gradient ] , respectively top right , bottom left and bottom right graphs ) .note that the function can be seen as an implementation of the xor task , which can not be solved by a single layer neural network ( perceptron ) . in all three tasks ,the network exhibits stochastic resonance ; for a given mean depolarisation , there is a non - monotonic dependence upon the noise level .the maximum gain compared to trivial prediction reaches for the additive task and for polynomials of degree two .a comparison to the map of the susceptibility of the network ( see top left graph of figure [ gradient ] ) indicated that the tasks are solved best when the sensitivity of the network upon changes in the mean input is highest .addition of noise both increases the susceptibility of the network and the capacity of performing complex computation based on sparse inputs . in ( hz / pa ) of the network as a function of the statistical properties of the external drive ( mean and variance ) .a high _ susceptibility _ ( light colors ) defines a network that is highly sensitive to a change in the mean of the drive .top right and bottom : gain ( in percent ) over the trivial prediction as a function of the mean and variance of the external drive ; for the simple additive task ( top right ) , for a first polynomial of degree two ( bottom left ) and a second polynomial of degree two ( bottom right ) .level curves are shown for the sake of clarity at and for the polynomials and at and for the addition .the system exhibits stochastic resonance for all three different tasks .in addition , the location where the performance peak approximately corresponds to the zone of high _ susceptibility_.,width=415 ] in order to know whether stochastic resonance displayed in the network is an emergent property of the population of neuron or a single cell effect , we removed all connections within the network . in this second series of simulations ,we compare the performance achieved in networks with connectivity to networks with no connectivity .the latter networks receive adapted version of the mean input and of the noisy input so that every neurons is stimulated with a mean and variance equivalent to that of the connected network ( see methods ) . in the first task ; the addition , the performance with and without connectivity are similar ( see figure [ control ] left ) .since the readout unit performs a weighted _ sum _ that runs over all neurons , it can capture the essence of this simple computation by summing the averaged response of the groups of neurons receiving input with the averaged response of those receiving .recurrent loops play here no significative role . 
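the linear readout and the " gain over the trivial prediction " measure can also be sketched in code . in the example below the network activity is replaced by a synthetic surrogate ( rectified random mixtures of the two test signals ) , and the gain formula 100 ( 1 - e / var ) , with e the mean squared readout error on held - out data , is a natural reading of the definition in the text rather than a verbatim reproduction of it ; the function and variable names are mine .

```python
# Least-squares linear readout on surrogate population activity, with the
# gain-over-trivial-prediction performance measure.
import numpy as np

rng = np.random.default_rng(42)
n_neurons, n_samples = 200, 4000

# the two test signals (piecewise constant in time in the paper, i.i.d. here)
s1 = rng.uniform(-1.0, 1.0, n_samples)
s2 = rng.uniform(-1.0, 1.0, n_samples)

# surrogate population activity: rectified random mixtures of the two signals
w1 = rng.normal(size=n_neurons)
w2 = rng.normal(size=n_neurons)
thresholds = rng.uniform(-0.5, 0.5, n_neurons)
drive = np.outer(s1, w1) + np.outer(s2, w2) - thresholds
rates = np.maximum(drive, 0.0) + 0.1 * rng.normal(size=drive.shape)

def readout_gain(features, target, train_frac=0.5):
    """Least-squares linear readout; gain (%) over predicting the mean."""
    n_train = int(train_frac * len(target))
    X = np.column_stack([features, np.ones(len(target))])   # bias term
    coef, *_ = np.linalg.lstsq(X[:n_train], target[:n_train], rcond=None)
    pred = X[n_train:] @ coef
    err = np.mean((pred - target[n_train:]) ** 2)
    return 100.0 * (1.0 - err / np.var(target[n_train:]))

for name, target in [("s1 + s2", s1 + s2),
                     ("s1 * s2", s1 * s2),
                     ("(s1 + s2)**2", (s1 + s2) ** 2)]:
    print(f"{name:12s} gain over trivial prediction: {readout_gain(rates, target):6.1f} %")
```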
in the second taskwe train the network to perform the multiplication of the two test signals .this arithmetical operation is thought to be essential to the brain in order to do coordinate transformation .the computation of a multiplication by a recurrent neural network was shown to be achievable in a model of the parietal cortex . in this multiplicative task , the complex recurrent network outperforms the network with no connectivity ( see figure [ control ] right ) .in fact , in absence of connections within the population of neurons , multiplication can not be solved by the simple addition of noise .stochastic resonance displayed in the multiplicative task therefore takes place at a _ system _level rather than at the level of single neurons . ; for the connected network ( solid ) and for the control network , when the connectivity is set to zero ( dashed ) .the unconnected collection of neurons is capable of solving this simple task with a similar accuracy as the randomly connected neural network .right : gain ( in percent ) over the trivial prediction for the multiplicative task ; for the reference network ( solid ) and when the connectivity is set to zero ( dashed ) . in absence of recurrence, the network is no longer able to sustain complex computations .stochastic resonance is thus a _ population - based _effect rather than a _ single - cell _ phenomenon.,width=309 ]complex networks of neurons fall in the class of non linear systems with a threshold ; systems that are known to exhibit stochastic resonance . from the experimental side , evidences have shown that the phenomenon helps in detecting sensory signals of small amplitude , and furthermore to favor high level cognitive processes such as arithmetical calculations .our model has revealed the presence of stochastic resonance in a series of neural - based computation . whereas simple additive transformations of input signals can be solved by a collection of independent neurons ,more complex computations need the massive recurrence typically observed in cortical tissue .such complex tasks include the xor problem , a nonlinear benchmark test , and the arithmetical multiplication the brain is likely to use in order to achieve coordinate transformation .stochastic resonance displayed is then an emergent property of the brain microcircuitry .
various sensory systems use noise in order to enhance the detection of weak signals . it has been conjectured in the literature that this effect , known as _ stochastic resonance _ , may also take place in central cognitive processes such as the memory retrieval of arithmetical multiplication . we show , in a simplified model of cortical tissue , that complex arithmetical calculations can be carried out and are enhanced in the presence of a stochastic background . the performance is shown to be positively correlated with the susceptibility of the network , defined as its sensitivity to a variation of the mean of its inputs . for nontrivial arithmetic tasks such as multiplication , stochastic resonance is an emergent property of the microcircuitry of the model network . _ keywords : _ stochastic resonance , information processing , recurrent neural network + + _ contact author : wulfram.gerstner.ch_
the most popular method for differential gene expression detection in two - sample microarray studies is to compute the t - statistics .the differentially expressed genes are those whose t - statistics exceed a certain threshold .recently , due to the realization that in many cancer studies , many genes show increased expressions in disease samples , but only for a small number of those samples .the study of shows that t - statistics has low power in this case , and they introduced the so - called cancer outlier profile analysis " ( copa ) .their study shows clearly that copa can perform better than the traditional t - statistics for cancer microarray data sets .more recently , several progresses have been made in this direction with the aim to design better statistics to account for the heterogeneous activation pattern of the cancer genes . in , the authors introduced a new statistics , which they called outlier sum .later , proposed outlier robust t - statistics ( ort ) and showed it usually outperformed the previously proposed ones in both simulation study and application to real data set . in this paper, we propose another statistics for the detection of cancer differential gene expression which have similar power to ort when the number of activated samples are very small , but perform betters when more samples are differentially expressed .we call our new method the maximum ordered subset t - statistics ( most ) . through simulation studies we found the new statistics outperformed the previously proposed ones under some circumstances and never significantly worse in all situations .thus we think it is a valuable addition to the dictionary of cancer outlier expression detection .we consider the simple 2-class microarray data for detecting cancer genes .we assume there are normal samples and cancer samples .the gene expressions for normal samples are denoted by for genes and samples , while denote the expressions for cancer samples with and . in this paper , we are only interested in one - sided test where the activated genes from cancer samples have a higher expression level .the extension to two - sided test is straightforward .the usual t - statistics ( up to a multiplication factor independent of genes ) for two - sample test of differences in means is defined for each gene by where is the average expression of gene in normal samples , is the average expression of gene in cancer samples , and is the usual pooled standard deviation estimate the t - statistics is powerful when the alternative distribution is such that all come from a distribution with a higher mean . that for most cancer types , heterogeneous activation patterns make t - statistics inefficient for detecting those expression profiles .they defined the copa statistics where is the percentile of the data , is the median of the pooled samples for gene , and is the median absolute deviation of the pooled samples .the choice of in ( [ copa ] ) depends on the subjective judgement of the user .the use of and to replace the mean and the standard deviation in ( [ t ] ) is due to robustness considerations since it is already known that some of the genes are differentially expressed . 
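the two statistics introduced so far are straightforward to implement . the sketch below follows the descriptions given above : the t - statistic omits the gene - independent multiplicative factor mentioned in the text , and the copa statistic centres the chosen percentile of the cancer samples by the pooled median and scales by the pooled median absolute deviation ; the 90th - percentile default and the sample sizes in the demonstration are assumptions , and the function names are mine .

```python
# Minimal implementations of the two-sample t-statistic (up to a constant) and
# the COPA statistic for a single gene.
import numpy as np

def t_stat(x_normal, x_cancer):
    """Two-sample t-statistic, up to a factor independent of the gene."""
    m, n = len(x_normal), len(x_cancer)
    pooled_var = (np.sum((x_normal - x_normal.mean()) ** 2) +
                  np.sum((x_cancer - x_cancer.mean()) ** 2)) / (m + n - 2)
    return (x_cancer.mean() - x_normal.mean()) / np.sqrt(pooled_var)

def copa_stat(x_normal, x_cancer, r=90):
    """COPA: r-th percentile of the cancer samples, centred by the pooled
    median and scaled by the pooled median absolute deviation."""
    pooled = np.concatenate([x_normal, x_cancer])
    med = np.median(pooled)
    mad = np.median(np.abs(pooled - med))
    return (np.percentile(x_cancer, r) - med) / mad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_n = rng.normal(0.0, 1.0, 25)     # normal samples
    x_c = rng.normal(0.0, 1.0, 25)     # cancer samples
    x_c[:5] += 2.0                     # only 5 "activated" cancer samples
    print("t    =", round(t_stat(x_n, x_c), 3))
    print("copa =", round(copa_stat(x_n, x_c), 3))
```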
in ( [ copa ] ), only one value of is used in the computation .a more efficient strategy would be to use additional expression values .let be the outliers from the cancer samples for gene , where is the interquartile range of the data .the os statistics from is then defined as more recently , studied ort statistics , which is similar to os statistics .the important difference that makes ort superior is that outliers are defined relative to the normal sample instead of the pooled sample .so in their definition , by similar reasoning in os is replaced by and by , where and are the medians of normal and cancer samples respectively . in both os and ort statistics ,the outliers are defined somewhat arbitrarily with no convincing reasons . to address this question ,we propose the following statistics that implicitly considers all possible values for outlier thresholds .suppose for notational simplicity that are ordered for each : if the number of samples where oncogenes are activated were known , we would naturally define the statistics as when is not known to us , one would be tempted to define but this does not quite work since obviously for different values of are not directly comparable under the null distribution that . for example , when , we have >0 ] .this observation motivates us to normalize such that each approximately has mean and variance .this can be achieved by defining $ ] and where is the order statistics of samples generated from the standard normal distribution. then we can define as : so that has mean and variance approximately equal to and respectively .finally we can define our new statistics ( called most ) as with most , we practically consider every possible threshold above which are taken to be outliers . in this formulation , the number of outliers is implicitly defined as [ cols="^,^ " , ]some simulations are carried out to study most , and compare its performance to os , ort , copa , and t - statistics .for copa , we choose to use the percentile in its definition as in .we generate the expression data from standard normal with . for various values , which is the number of differentially expressed cancer samples, a constant is added for differentially expressed genes .we simulated differentially and non - differentially expressed genes , and calculated the roc curves from them by choosing different thresholds for gene calls .figure [ roc1 ] and [ roc2 ] plots the roc curves for some combinations of and . for and small ,all five statistics behave similarly with t - statistics performing the worst .as increases , t becomes better and os and copa begin to lose power . for and medium to large ,the performance of most is only worse than t and better than other statistics .smaller in this case basically leads to roc curve that is close to a line for all statistics since the signal is too weak in this case , so we do not show these results . for and small ,most is better than ort , copa and t , and in this situation only os is competitive with most .larger in this case will produce nearly perfect roc curves for all statistics , and thus those results are also omitted . besides roc curves , we have also tried examining the possibility of using ( [ k ] ) for estimating the number of differentially expressed samples , but so far have been unable to get a reasonable estimate out of it . 
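a sketch implementation of the most statistic defined above is given below . the cancer expressions are centred by the normal - sample median and scaled by its median absolute deviation ( the ort - style standardisation discussed in the text ; the exact centring convention of the paper may differ slightly ) , the partial sums over the k largest standardised cancer values are normalised by monte carlo estimates of the mean and standard deviation of the corresponding sums of standard - normal order statistics , and the maximum over k is returned ; the function names and the demonstration sample sizes are mine .

```python
# Sketch of the MOST statistic: maximum over k of the normalised partial sums
# of the k largest standardised cancer expressions.
import numpy as np

def order_stat_moments(n_cancer, n_sim=20000, seed=0):
    """mu_k, sigma_k of the sum of the k largest of n_cancer standard normals."""
    rng = np.random.default_rng(seed)
    z = np.sort(rng.standard_normal((n_sim, n_cancer)), axis=1)[:, ::-1]
    partial = np.cumsum(z, axis=1)          # shape (n_sim, n_cancer)
    return partial.mean(axis=0), partial.std(axis=0)

def most_stat(x_normal, x_cancer, moments=None):
    if moments is None:
        moments = order_stat_moments(len(x_cancer))
    mu_k, sigma_k = moments
    med = np.median(x_normal)
    mad = np.median(np.abs(x_normal - med))
    y = np.sort((x_cancer - med) / mad)[::-1]   # standardised, descending
    partial = np.cumsum(y)
    return np.max((partial - mu_k) / sigma_k)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    moments = order_stat_moments(25)            # reusable across genes
    x_n = rng.normal(0.0, 1.0, 25)
    x_c = rng.normal(0.0, 1.0, 25)
    x_c[:8] += 1.5                              # 8 activated cancer samples
    print("MOST =", round(most_stat(x_n, x_c, moments), 3))
```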
from the above simulations, we judge that our new estimate most is at least as good as other previously proposed statistics , sometimes much better .thus it is a valuable tool for detecting activated genes in many situations . as an example of real data application ,the data from is publicly available from http://data.cgt.duke.edu/west.php .the microarray used in the breast cancer study contains 7129 genes and 49 tumor samples , 25 of which with no positive lymph nodes identified and the other 24 with positive nodes . similar to , we take the log transformation of the expressions after normalizing the data .we apply most to the data and compare it to the t - statistics by computing the fdr using the sam approach ( ) .figure [ fdr ] plots the fdr versus the number of genes called significant . for this example , most seems to perform a little better than t - statistics , although the difference is too small to be of any significance .significance analysis of microarrays applied to transcriptional responses to ionizing radiation ._ proceedings of the national academy of sciences of the united states of america _ * 98 * , 511621 .
we propose a new statistics for the detection of differentially expressed genes , when the genes are activated only in a subset of the samples . statistics designed for this unconventional circumstance have proved to be valuable for most cancer studies , where oncogenes are activated in only a small number of disease samples . previous efforts made in this direction include copa ( ) , os ( ) and ort ( ) . we propose a new statistics called maximum ordered subset t - statistics ( most ) which seems to be natural when the number of activated samples is unknown . we compare most to other statistics and find that the proposed method often has more power than its competitors . cancer ; copa ; differential gene expression ; microarray .
constraining the galactic potential optimally has been a very difficult yet important aspect of studies of the milky way and the local group . understanding the various components of the galactic potential , for instance separating the potential contributions from the baryonic disc and that from the dark matter halo , is fundamental to understanding the history and formation of our own galaxy .furthermore , understanding the dark halo potential near the sun is a crucial step to pin down the density and thus the scattering cross - section of the dark matter , which in turn is crucial input for interpreting any dark matter annihilation signal at the centre of milky way ( e.g. * ? ? ?* ) . in the past few decades , constraining the galactic potentialhas mostly relied on the jeans equation ( for a summary , see * ? ? ?* ) despite the known problems of this approach .for instance , one only keeps the velocity dispersion moments up to second order in the jeans equation . this predicts a gaussian ellipsoid velocity distribution that does not capture the observed and expected asymmetries in the angular velocity ( e.g. * ? ? ? * ) .one of the ways to alleviate these problems is through explicitly modelling the stellar population distribution function ( df ) , either via integrals of motions or actions instead of just taking position velocity moments , as is the case of the jeans equation .for such modelling , it is crucial that relatively simple analytic dfs , e.g. based on the actions , are reasonably good approximations to the actual orbit distributions for at least subpopulations of stars in the galactic disc .how well such df approximations work in practice is not yet clear . to explore this question, we consider in particular stellar subpopulations of a particular age or abundance pattern .recently , and ( hereafter refer all of these references as bo12 ) showed , using g - dwarfs from the sloan extension for galactic understanding and exploration ( segue ) survey , that mono - abundance , stars that have nearly the same [ fe / h ] and [ /fe ] , populations have very simple phase - space distribution properties : spatial density distributions that vary exponentially , both in the radial and vertical directions ; furthermore , each mono - abundance population shows isothermal velocity dispersion in the vertical direction and exponentially in the radial direction .this simplicity in space gives hope that one could describe mono - abundance populations with simple action - based dfs which is one of the purposes of this study .since stars are the most abundant tracers of the galactic potential , simple action - based dfs will naturally provide a statistical and rigorous method to constrain the galactic potential because the conversion of position velocity variable ( configuration space ) to the action angle variable depends on the parameters of the potential ( see for generic ground work on the topic ) .the main idea is to determine the likelihood of the observational data , given a joint set of parameters for both the df and the gravitational potential ; subsequent marginalization over the df parameters provides then a rigorously derived constraint on the potential .such an approach appears to be a precondition to fully exploit the dynamical information content of large galactic surveys that will provide us spatial and velocity distribution , along with elemental abundances , including the apache point observatory galactic evolution experiment , galactic archaeology survey with hermes ( galah ; * 
? ? ?* ; * ? ? ?* ) and european southern observatory-_gaia _ . in the context of dynamically modelling the galactic disc with action - based dfs ,the goals of the paper are three - fold : explore how well ` mono - abundance populations ' can be approximated by simple action - based analytic dfs ; lay out a formalism that provides constraints from a set of discrete stellar positions and velocities on the gravitational potential , after marginalizing over the df ; forecast what constraints on the _ shape _ of the potential we can expect given existing sample sizes .this paper is organized as follows : in section[sec : df ] , we discuss the choice of parametrization of the df and summarize the adiabatic approximation that we assume to calculate the action variable efficiently . in section[sec : df - match ] , we will show that our choice of df exhibits all the basic phase - space distribution properties of mono - abundance stars , with the only caveat that disc potential has to be included in order for the vertical spatial profile to fit . in section[sec : constrain_potential ] , we show that by studying the likelihood of the df potential parameters , one can recover the potential parameters , even under a restrictive and realistic spatial selection function . throughout this study, we denote the cylindrical coordinate to be .we assume that velocities are measured in the inertial galactocentric frame with axisymmetric potential , . in practice, the conversion from the heliocentric frame to the galactocentric frame requires the knowledge of the solar motion . from the study of sgra * , the solar motion is now accurate to a few ( e.g. * ? ? ? * ) and the solar radius is accurate to ( e.g. * ? ? ?* ; * ? ? ?we will also discuss the uncertainty in estimating the potential parameters due to the uncertainty of this conversion in section[sec : constrain_potential ] .we follow the df advocated by and closely , but note that we rearrange some of the terms in the df to facilitate physical interpretation of the df . for circular orbits in the equatorial galactic plane, the df has to be of the form where determines the angular momentum distribution for an infinitely cold planar disc , and are the radial and vertical action , respectively . in the cold disc limit , the circular radius coincides with the observed radius , where and is the circular velocity , thus , we have for an exponential disc , up to a multiplicative normalization constant , the df parameter determines the radial spatial distribution . in a more general setting ,we relax the constraint and propose the df to be \notag\\ \bigg[\frac{\nu(l_z)}{d_r(l_z)}\exp\bigg(-\frac{\nu ( l_z ) j_z}{d_r(l_z)}\bigg)\bigg ] , \label{eq : df}\end{aligned}\ ] ] where with the circular radius given angular momentum , assuming equatorial in - plane movement ; and are the radial and vertical frequency , respectively , under epicycle approximation .the df parameter determines the radial exponential decay of the velocity dispersion , whereas and control the total radial and vertical dispersion , respectively . and are _ different _ from the velocity dispersions obtained as moments of the df , which correspond to the observable dispersion .these parameters are named as such because they govern the velocity dispersions .they differ in their value quite significantly from the velocity dispersions in the space . ]the and terms are necessary ( e.g. * ? ? ?* ) for two reasons . 
1 .a finite increment in energy can lead to an infinite increment of actions in the unbound case .one would love to have a term that will tend to zero at large and couple this term with .one of the possible choices is the radial frequency . at large ,the effective potential is less concave at the minimum point and therefore tends to zero .similarly for the vertical frequency .2 . qualitatively , in region where changes more drastically , the increment of is also more drastic , and therefore the scalelength of has to decrease in proportion to compensate for this effect .similarly for the vertical oscillation .importantly , we only measure but not the actions . in order to study an action - based df, we employ the adiabatic approximation in order to calculate the radial and vertical actions efficiently ., , , the vertical dashed line marks approximately the solar galactocentric radius , the horizontal dashed line indicates the common accepted circular velocity at the solar radius and the shaded region shows the region where the circular velocity is approximately constant in this choice of potential ; the middle and bottom panels show the variation of actions calculated using the adiabatic approximation in the course of trajectory .we plot the case of in table[table : adiabatic ] .the results show that actions calculated with the adiabatic approximation are almost conserved along the orbit with less than cent variation .the orbit was integrated about 10 gyr.,title="fig:",width=307 ] , , , the vertical dashed line marks approximately the solar galactocentric radius , the horizontal dashed line indicates the common accepted circular velocity at the solar radius and the shaded region shows the region where the circular velocity is approximately constant in this choice of potential ; the middle and bottom panels show the variation of actions calculated using the adiabatic approximation in the course of trajectory .we plot the case of in table[table : adiabatic ] .the results show that actions calculated with the adiabatic approximation are almost conserved along the orbit with less than cent variation .the orbit was integrated about 10 gyr.,title="fig:",width=316 ] it has been shown that the vertical action calculated in a fixed radius approximation is almost conserved in the axisymmetric system , and in discs including bar / spiral structure perturbation and radial migration . more precisely , this reduces the calculation to one - dimensional integration that can be numerically calculated easily . furthermore , since , to calculate the approximated vertical action , we have where the subscript ` init .' stands for the observed value in practice .similarly , for the radial action , we can consider the radial component at the equatorial plane , ignoring the vertical motion . 
under this assumption , ccccccc + & & & & & & + ( kpc ) & ( kpc ) & & & & ( per cent ) & ( per cent ) + + 3 & 0 & 450 & 90 & 60 & 4 & 3 + 6 & 0 &1250 & 60 & 40 & 5 & 3 + 9 & 0 & 1970 & 30 & 20 & 4 & 2 + we deduce where and are the apo- and pericentric radius , respectively .fig.[fig : adiabatic ] and table[table : adiabatic ] illustrate the variation of the approximated actions in the course of a trajectory .we assume a miyamoto nagai potential , with parameters , , .we choose these parameters because we have almost a flat rotation curve at the galactocentric radius and circular velocity at the solar radius as shown in the top panel of fig.[fig : adiabatic ] .we apply slight perturbation on circular orbits that mimic the radial velocity dispersion of at the solar galactocentric radius , radial - to - vertical velocity dispersion ratio of and velocity scalelength of ( bo12 ) .we define the variation of action to be over the mean value of the action for each time step .the results of are listed in table[table : adiabatic ] .the results show that the approximated actions are almost conserved in all cases .we also examine the potential due to a double - exponential disc ( i.e. with density decaying both radially and vertically ) ( see appendix a , equation a10 , * ? ? ?* ) , and also the flattened isothermal potential : the results are qualitatively similar , with the variation of actions less than per cent in all cases . studies a more precise method of calculating the actions , but it is computationally more demanding . as we will discuss in section[sec : constrain_potential ] , we find that the error in using the adiabatic approximation will introduce a systematic error of about cent in estimating the potential parameters .the adiabatic approximation is sufficient for our current study since the purpose of this study is to introduce a new statistical method to constrain the galactic potential .we only focus on the miyamoto nagai potential and the flattened isothermal potential in this study .we do not consider the potential due to a double - exponential disc because the analytic form of this potential ( by solving the poisson equation ) involves an improper integration of a bessel function .this is computationally very expensive .moreover , the miyamoto nagai potential is good enough as a first approximation for a realistic disc potential .in this section , we will show that the df in equation([eq : df ] ) is a good representation of the observed position velocity distribution of mono - abundance subpopulations of the milky way disc , namely that they exhibit an exponential spatial density , both radially and vertically , and an isothermal velocity dispersion in the vertical direction and exponentially decaying in the radial direction .{fig2a.ps } & \includegraphics[scale=0.27]{fig2b.ps } \end{array } ] in order to have the conversion of df between canonical position velocity variable and the action variable , one has to take care of the jacobian term carefully .note that where is the canonical momentum associated with .since , are canonical variables and measure is invariance under canonical transformation , we have furthermore , in a cylindrical coordinate , , we deduce in summary , for an axisymmetric potential , in order to convert an action - based df to space distribution , it suffices to multiply the jacobian ( ) term .we create mock data stellar tracers with attributes , such that it has a density scalelength of about , dispersion scalelength of about with a gaussian velocity 
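the kind of conservation check summarised in the table above can be reproduced with a short script . the sketch below uses the fixed - radius ( adiabatic ) approximation for the vertical action and the planar effective potential for the radial action , integrates an orbit in a miyamoto - nagai potential , and reports the spread of each action along the orbit ; the potential parameters gm , a , b and the initial conditions are illustrative assumptions ( the paper 's values were lost in extraction ) , and all function names are mine .

```python
# Adiabatic-approximation actions in a Miyamoto-Nagai potential, checked for
# approximate conservation along a numerically integrated orbit.
# Units: R, z in kpc; velocities in km/s; GM in kpc (km/s)^2; dt in kpc/(km/s)
# (1 kpc/(km/s) ~ 0.98 Gyr).  All numbers below are assumed, not the paper's.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

GM, A, B = 8.7e5, 6.5, 0.26

def phi(R, z):
    s = A + np.sqrt(z * z + B * B)
    return -GM / np.sqrt(R * R + s * s)

def dphi_dR(R, z):
    s = A + np.sqrt(z * z + B * B)
    return GM * R / (R * R + s * s) ** 1.5

def dphi_dz(R, z):
    t = np.sqrt(z * z + B * B)
    s = A + t
    return GM * z * s / (t * (R * R + s * s) ** 1.5)

def J_z(R, z, vz):
    """Vertical action at fixed R (adiabatic approximation)."""
    Ez = 0.5 * vz * vz + phi(R, z) - phi(R, 0.0)
    zmax = brentq(lambda zz: phi(R, zz) - phi(R, 0.0) - Ez, 1e-8, 50.0)
    f = lambda zz: np.sqrt(max(2.0 * (Ez - (phi(R, zz) - phi(R, 0.0))), 0.0))
    return (2.0 / np.pi) * quad(f, 0.0, zmax)[0]

def J_R(R, vR, Lz):
    """Radial action from the planar effective potential (assumes vR != 0 here)."""
    phi_eff = lambda r: phi(r, 0.0) + 0.5 * Lz * Lz / (r * r)
    ER = 0.5 * vR * vR + phi_eff(R)
    peri = brentq(lambda r: phi_eff(r) - ER, 1e-3, R)
    apo = brentq(lambda r: phi_eff(r) - ER, R, 200.0)
    f = lambda r: np.sqrt(max(2.0 * (ER - phi_eff(r)), 0.0))
    return quad(f, peri, apo)[0] / np.pi

def leapfrog(R, z, vR, vz, Lz, dt=0.001, n_steps=10000, sample_every=1000):
    out = []
    aR, az = -dphi_dR(R, z) + Lz ** 2 / R ** 3, -dphi_dz(R, z)
    for i in range(n_steps):
        vR += 0.5 * dt * aR; vz += 0.5 * dt * az
        R += dt * vR; z += dt * vz
        aR, az = -dphi_dR(R, z) + Lz ** 2 / R ** 3, -dphi_dz(R, z)
        vR += 0.5 * dt * aR; vz += 0.5 * dt * az
        if i % sample_every == 0:
            out.append((R, z, vR, vz))
    return out

if __name__ == "__main__":
    R0 = 8.0
    Lz = 0.95 * R0 * np.sqrt(R0 * dphi_dR(R0, 0.0))   # slightly sub-circular
    samples = leapfrog(R0, 0.0, 20.0, 15.0, Lz)
    jr = [J_R(R, vR, Lz) for R, z, vR, vz in samples]
    jz = [J_z(R, z, vz) for R, z, vR, vz in samples]
    print("J_R spread along orbit: %.1f %%" % (100 * (max(jr) - min(jr)) / np.mean(jr)))
    print("J_z spread along orbit: %.1f %%" % (100 * (max(jz) - min(jz)) / np.mean(jz)))
```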
dispersion of at the solar galactocentric radius , , ( * ? ? ?we assume that the mean velocity of the gaussian for each spatial point is where ( see for details ) assume that all of the mixed moments ( , and ) vanish .thus , we assume that both the vertex deviation and the tilt of the velocity ellipsoid are zero , even though these were not measured by bo12 .the tilt of the velocity ellipsoid in particular is not expected to be zero at heights above the plane ( e.g. * ? ? ?* ) . however , as discussed by , the adiabatic approximation is unable to capture a non - zero tilt as it assumes that the radial and vertical motions are independent .thus , even though the quasi - isothermal df of equation([eq : df ] ) has a non - zero tilt when used with correctly calculated actions ( for example , using the torus machinery ; * ? ? ?* ) , it does not when using the adiabatic approximation . for this reason , we assume that the tilt of the velocity ellipsoid is zero in what follows . , but for the isothermal potential with the axis ratio and circular velocity .we also tried various choices of , but the fits are equally unsatisfactory : the df family in equation([eq : df ] ) can not produce a vertically exponential tracer density profile , if stars orbit in a flattened isothermal sphere . ]we search for the best - fitting df parameters by maximum likelihood .for this section , we fix a particular choice of miyamoto nagai potential parameters : , , .we examine various choices of the potential parameters .it does not change the results qualitatively .we emphasize that the mock tracer population mimics a mono - abundance population .note that the probability that we observe , according to our df model with df parameters , is given this probability , we can define the log likelihood to be the sum over mock data , and search for the best df parameters by maximizing the log likelihood , .\end{aligned}\ ] ] we obtain the best - fitting parameters by performing a nested - grid search on the multidimensional space and finding the set of parameters that maximizes the likelihood .after obtaining the best - fitting parameters , we calculate from the stellar profile in configuration space and compare it to the one of the mock data to check whether our assumption on the functional form of the df represents well the mock data .we consider a grid of position velocity configuration space and calculate the df in configuration space using equation([eq : configuration_space ] ) .we then calculate the stellar profile of each position velocity component by marginalizing over the other spatial - velocity components .note that , given the functional form of the df in equation([eq : df ] ) , the predicted vertical profile is fixed by the vertical velocity dispersion and the potential , but independent of the scaleheight of the mock data .we find that the vertical profile is predicted satisfactorily with the miyamoto nagai potential ( as shown in fig.[fig : nm_fit ] ) but not with the flattened isothermal potential ( as shown in fig.[fig : log_fit ] ) .this can be explained by the following . in the case where , the vertical restoring force of the miyamoto nagai potential and thus is about constant .this implies that the vertical action is proportional to , and the vertical profile in ansatz equation([eq : df ] ) goes down exponentially . 
on the other hand ,the vertical restoring force of the flattened isothermal potential and thus is proportional to .this implies that the vertical action is proportional to , and the vertical profile will go down too drastically [ proportional to .also note that , in the upper panel of fig.[fig : nm_fit ] , the df predicts a shallower vertical density profile near the galactic plane . as segue does not observe the galactic plane , we can not decide whether this prediction is accurate or one should revise the df to better fit the vertical density profile at this point .nonetheless , the density distribution is not expected to be exponential all the way to and .our aim here is to match the exponential profiles in the range of and where segue has data .we conclude that the quasi - isothermal df of equation([eq : df ] ) provides a good representation of the df of mono - abundance subpopulations of the milky way disc , namely that the df predicts ( 1 ) an exponential spatial density , both radially and vertically ; ( 2 ) an isothermal velocity dispersion in the vertical direction and ( 3 ) exponentially decaying in the radial direction .more importantly , the action - based dfs predict a low- tail . as discussed in section[sec : introduction ] , this coincides better with the observation ( e.g. * ? ? ?* ) . in the following section, we will generate mock samples by rejection sampling _ directly from the df_.in this section , we will show that , given a functional form of the potential and the df , we can recover the optimal potential parameters by marginalizing over the df parameters .note that in practice , we only observe a very small part of the galaxy . in this study , we assume that the observed volume is a cylinder around the sun with a finite width and height .we assume that the location of the sun in the galactocentric cartesian coordinate is and include the spatial selection function with this selection function in place , given any parameters of the df and the potential , , we carefully normalize the df through monte carlo integration , more precisely where to our best knowledge , this is the first attempt that has never before been implemented to study the joint distribution of df and potential parameters in the context of constraining the galactic potential .first , we consider the two - dimensional in - plane miyamoto nagai potential which contains two potential parameters : and .we consider the case where , and in equation([eq : df ] ) , which roughly mimics the observed velocity distributions from our calculation in section[sec : df - match ] .we define the likelihood of the potential parameters , , by marginalizing over .we estimate the one - sigma significance of the best - fitting potential parameter by finding the demarcation line at and two - sigma significance at .we create data with , , in the observed volume radially from the sun and .we reiterate the process for various sample sizes for the mock data , for the same observed volume . 
as shown in fig.[fig : nm_2d_opti_potential ] , the uncertainty goes down as .we use the results with the sample size to be our reference , because the uncertainty of the potential parameters is cent in this case .we show that this reference value coincides with the true value , indicating that , assuming a potential model , we can recover the true potential parameter with observation restricted to a small volume .we show that we can find the best - fitting potential parameters within one- to two - sigma significance from the true value regardless of the sample size .more importantly , the data constrain a highly degenerate combination of and : since , we find that as seen in fig.[fig : nm_2d_opti_potential ] . on top of that , we show that , with the sample size of , we recover both parameters with a precision of about cent for a cent significance ( two - sigma ) .the uncertainties are larger than the one in the study of a flattened isothermal potential below because the parameters are highly correlated .we extend this result to the three - dimensional case by considering the flattened isothermal potential with two parameters ( equation[eq : isothermal_potential ] ) : and .we choose to constrain only two potential parameters because ( 1 ) the main purpose of this study is to show that we can pin down the two upmost important parameters of the galactic potential around the solar neighbourhood , namely the total mass and the vertical density scaleheight . in this regard , two parameters are sufficient .( 2 ) while our method is general and could be used to constrain any number of potential parameters , it is computationally expensive to add another dimension in the nested - grid optimization method that we use .we could have implemented markov chain monte carlo ( mcmc ) and have increased the efficiency of higher dimensional search , but the purpose of this paper is to illustrate the new method .we shall leave the mcmc implementation to a later study .we consider the warm - disc case where , , and for df in equation([eq : df ] ) .we create the mock data from the potential with and . as shown in fig.[fig : log_3d_opti_potential ] , one can recover both the circular velocity and the axis ratio successfully with our proposed method . note that , in most cases , without a galactic dust map , observations in the equatorial galactic disc are less reliable. therefore , we also tested the more realistic scenario , where the observed volume is restricted to the cylindrical height of without the mid - plane .the result as shown in fig.[fig : log_3d_opti_potential_without_midplane ] shows that we can about equally well constrain the axis ratio and the circular velocity , given the same sample size. this could be probably due to the fact that our three - dimensional flattened isothermal potential is too restrictive ; therefore , the axis ratio is not too sensitive to the vertical observed volume restriction . in both cases , we show that , with the sample size of , we recover the circular velocity with a precision of about cent for a cent significance ( two - sigma ) and of about cent for the axis ratio .it is important to note that , in this study , we perform the potential parameter estimation by marginalizing over the df parameters instead of maximizing over the df parameters .it has been argued that this is the best way to constrain the potential in this type of study . 
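A minimal sketch of the marginalisation just mentioned, under the assumption of a flat prior on a grid of DF parameters: each trial DF is renormalised over the observed volume by Monte Carlo integration (as described earlier), the data likelihood is accumulated, and the DF dimension is then averaged out. The helper names `log_df_unnorm` and `sample_selection_volume` are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def log_norm_mc(log_df_unnorm, pot, df_params, sample_selection_volume, n_mc=20000):
    """ln of the integral of the DF over the observed cylinder, by Monte Carlo.
    sample_selection_volume(n) must return (points, volume): n phase-space points
    drawn uniformly in the selection volume, and that volume itself."""
    pts, volume = sample_selection_volume(n_mc)
    vals = np.array([log_df_unnorm(p, pot, df_params) for p in pts])
    return logsumexp(vals) - np.log(n_mc) + np.log(volume)

def log_like_potential(pot, stars, df_grid, log_df_unnorm, sample_selection_volume):
    """ln L of the potential parameters, marginalised over a grid of DF
    parameters with a flat prior on the grid."""
    per_df = []
    for df_params in df_grid:
        ln_norm = log_norm_mc(log_df_unnorm, pot, df_params, sample_selection_volume)
        per_df.append(sum(log_df_unnorm(s, pot, df_params) - ln_norm for s in stars))
    return logsumexp(per_df) - np.log(len(per_df))

# confidence regions then follow from Delta ln L with respect to the maximum,
# e.g. about 0.5 (one sigma) and 2 (two sigma) for a single parameter of
# interest; the exact demarcation values used in the text are not repeated here.
```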
to the best of our knowledge ,our study is also the first successful study in implementing this marginalization .interestingly , we find that by performing marginalization , the results we obtain are almost exactly the same as maximizing over the df parameters .since marginalization will take at least an order of magnitude more in terms of computational time , it might be worthwhile to explain these results . to illustrate these results, we consider a two - dimensional parameter space and we marginalize over one of the dimensions .it is trivial to generalize the arguments to higher dimensions .if the posterior joint distribution is a two - dimensional joint gaussian probability distribution , assuming flat prior , the posterior probability distribution is given by since this is a monotonic function , the contour of the posterior probability distribution will resemble the contour of .note that the general formula for a two - dimensional posterior gaussian probability distribution can be written as .\end{aligned}\ ] ] one can show that the maximum of is obtained at .therefore , differing only by a multiplicative constant , this is the same as when considering the difference in with respect to the minimum point , the multiplicative constant will drop out .hence , with all the arguments above , we show that in the marginalized , to obtain the boundary point at which , it is the same as maximizing over parameter , with the assumption that the posterior is a multidimensional gaussian distribution .to better address the concern of systematic error arising from the use of the adiabatic approximation in the potential parameter estimation , we also perform an identical study with mock data generated using the torus machinery ( mcmillan & binney , private communication ) .the torus machinery , compared to the adiabatic approximation , is an more accurate method of calculating actions .we find that the best - fitting parameters ( recovered assuming the adiabatic approximation ) , in this case , are slightly shifted ( about cent ) compared to the input parameters ( generated using the torus machinery ) .this is probably due to the error in calculating actions with the adiabatic approximation .we conclude that more accurate and efficient ways of calculating actions are very valuable in improving our analysis ; however , our analysis with the adiabatic approximation only introduces a non - significant error . , but with the vertical range of available tracer constraints restricted to the mid - plane is cutout : this is a more realistic selection function since in most cases , without a galactic dust map , observations in the equatorial galactic disc are less reliable . ] we also test our method by shifting the galactocentric frame velocities by , mimicking the uncertainty due to the solar motion .we find that all the results presented in this paper remain to be qualitatively similar , with the only difference that the estimated circular velocity of the galactic potential will be shifted by the same amount .more importantly , when the galactocentric frame is shifted by a larger amount , the maximum likelihood decreases more , accordingly .therefore , if we consider the solar motion to be another free parameter in the fit , we could study the most probable solar motion with our proposed method and put an independent constraint on the true solar motion in the galactocentric frame .bo12 show that the galactic disc , when decomposed into mono - abundance populations , contains cold populations ( i.e. 
smaller velocity dispersion ) as well as warmer populations .to compare the constraints which we can obtain from different populations , we study the three - dimensional miyamoto nagai potential .we consider a warm population with , as before , and a cold population with , .we fit for ( the scaleheight ) and ( the total mass ) , but fix ( the scalelength ) to keep the computational cost down ( see above ) .we find that cold populations constrain the total mass better and also constrain slightly better the scaleheight .the result is expected for because it is related to the circular velocity .one should be able to pin down the circular velocity better if the dispersion of the velocity is smaller .we interpret the result for the scaleheight as follows : there are two competing effects for the scaleheight estimation : ( 1 ) the warm populations probe a wider range in height , given the same selection function ; ( 2 ) the cold populations can pin down the trajectories / orbits better .we find that in our case , the latter effect slightly dominates the former .however , the results should be viewed with caution .the results could be biased because ( 1 ) our model might be too restrictive , ( 2 ) the selection function that we choose ( ) is larger than the galactic scaleheight ; therefore , there is not much gain in probing a wider range in height . with these caveats in mind ,we find that colder populations are more valuable in the study of the galactic potential with our proposed method .we demonstrate a way to constrain the galactic potential from mono - abundance stellar populations .we show that the phase - space distribution properties of mono - abundance stars can be fitted very well with a simple action - based df . assuming that the vertical and radial velocity dispersion scalelengths are the same , these properties are fully determined by only four df parameters .we further show that , assuming that the proposed df is the true representation of the mono - abundance populations and certain parametrization of the galactic potential , we devise a statistical and rigorous method to measure the galactic potential parameters .this is achieved by calculating the likelihood of observational data , given a joint set of parameters for both the df and the gravitational potential and subsequently marginalizing over the df parameters . perform a rigorous test on the method we propose in this paper and confirm that our method has significant advantages on both the uncertainty in the potential parameter estimation and the computational time .we create a mock data sample from the df with various potentials , including the miyamoto nagai in - plane potential and a flattened isothermal potential .we show that even with a mono - abundance sample of the size of a few thousands , given a two - parameter gravitational potential , we can pin down the potential parameters to a few per cent . in this study , we assume that the measurement is perfect without uncertainty in the position and velocity measurements. the inclusion of measurement uncertainty should be conceptually straightforward .although a more realistic potential , for instance a combination of halo , bulge and disc potential , will contain more than two parameters and hence post a challenge to the estimation , different mono - abundance stars are tracing the same galactic potential and therefore their joint constraints with the method we propose are still very promising . 
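Since every mono-abundance population orbits in the same potential, their individually marginalised likelihoods can simply be multiplied, i.e. the log-likelihoods summed, on a common grid of potential parameters. The schematic below assumes each population is already wrapped as a callable returning its DF-marginalised ln L; it is an illustration of the bookkeeping, not the code used here.

```python
import numpy as np

def joint_log_like(potential_grid, per_population_loglike):
    """Combine mono-abundance populations on a common grid of potential parameters.

    potential_grid         : list of candidate potential-parameter dicts;
    per_population_loglike : list of callables, each returning the DF-marginalised
                             ln L of one population for a given potential.
    """
    joint = np.zeros(len(potential_grid))
    for loglike in per_population_loglike:
        joint += np.array([loglike(pot) for pot in potential_grid])
    best = int(np.argmax(joint))
    return potential_grid[best], joint - joint[best]   # best fit and Delta ln L surface

# colder populations contribute narrower individual likelihoods, so they tend
# to dominate the joint Delta ln L surface, consistent with the discussion above.
```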
in the current available segue survey , considering an elemental abundance bin of [ fe / h] and =0.05 $ ] , we have about tracers per bin . in the future ,when galah is fully operational , we expect to have about tracers per bin . in practice ,when dealing with multiple mono - abundance populations , one can combine the constraints from different populations by multiplying the individual likelihoods marginalized over the df parameters .the assumptions of axisymmetry , of dfs that are smooth in the actions and no time variation remain important limitations in our current study . to tackle some of these aspects ,we have started analysing data from the large -body simulation described in , which has strong , time - dependent spiral structure . as a preliminary result, we find that , even with realistic levels of non - axisymmetry and time variation of the potential , the method still works , but we plan to describe this in a later paper for a more concrete and rigorous analysis of this problem .we thank the anonymous referee for helpful comments .we also thank elena donghia for providing data from the simulation in .yst and jb are grateful to the max - planck - institut fr astronomie for its hospitality and financial support during the period in which this research was performed .jb was supported by nasa through the hubble fellowship grant hst - hf-51285.01 from the space telescope science institute , which is operated by the association of universities for research in astronomy , incorporated , under the nasa contract nas5 - 26555 .jb was also partially supported by sfb 881 funded by the german research foundation dfg .99 binney j. , 2010 , mnras , 401 , 2318 binney j. , 2011 , pramana , 77 , 39 binney j. , 2012 , mnras , 426 , 1324 binney j. , mcmillan p. , 2011, mnras , 413 , 1889 binney j. , tremaine s. , 2008 , galactic dynamics .princeton univ . press ,princeton , nj bovy j. , tremaine s. , 2012 , apj , 756 , 89 bovy j. , rix h.w . ,, 2012a , apj , 751 , 131 bovy j. , rix h.w . , liu c. , hogg d.w . , beers t.c ., lee y.s . , 2012b , apj , 753 , 148 bovy j. , rix h.w . , hogg d.w . , beers t.c . , lee y.s . ,zhang l. , 2012c , apj , 755 , 115 donghia e. , vogelsberger m. , hernquist l. , 2013 , apj , 766 , 34 dehnen w. , 1999 , aj , 118 , 1201 eisenstein d. et al . , 2011 , aj , 142 , 72 freeman k.c . , 2010 , in block d.l ., freeman k.c . , puerari i. , eds , galaxies and their masks .springer - verlag , berlin , p. 319freeman k.c . , 2012 , in aoki w. , ishigaki m. , suda t. , tsujimoto t. , arimoto n. , eds , asp conf .galactic archaeology : near - field cosmology and the formation of the milky way .pac . , san francisco , p. 393freeman k.c . ,bland - hawthorn j. , 2002 , ara&a , 40 , 487 fuchs b. et al ., 2009 , aj , 137 , 4149 ghez a.m. et al . , 2008 , apj , 689 , 1044 gillessen s. , eisenhauer f. , trippe s. , alexander t. , genzel r. , martins f. , ott t. , 2009 , apj , 692 , 1075 gilmore g. et al ., 2012 , the messenger , 147 , 25 kuijken k. , gilmore g. , 1989 , mnras , 239 , 571 magorrian j. , 2006 , mnras , 373 , 425 magorrian j. , 2013 , arxiv:1303.6099 mcmillan p.j . , 2011 , mnras , 414 , 2446 mcmillan p.j . , binney j. , 2012 , mnras , 419 , 2251 mcmillan p.j . , binney j. , 2013 , mnras , arxiv:1303.5660 minchev i. , famaey b. , quille a.c . , dehnen w. , martig m. , siebert a. , 2012 , a&a , 548 , a127 miyamoto m. , nagai r. , 1975 , pasj , 27 , 533 reid m. j. , brunthaler a. , 2004 , apj , 616 , 872 siebert a. et al . , 2008 , mnras , 391 , 793 solway m. 
, sellwood j.a. , schönrich r. , 2012 , mnras , 422 , 1363 su m. , finkbeiner d.p. , 2012 , arxiv:1207.7060 ting y.-s. , freeman k.c. , kobayashi c. , de silva g.m. , bland - hawthorn j. , 2012 , mnras , 421 , 1231 yanny b. et al. , 2009 , aj , 137 , 4377
we present a rigorous and practical way of constraining the galactic potential based on the phase - space information for many individual stars . such an approach is needed to dynamically model the data from ongoing spectroscopic surveys of the galaxy and , in the future , from _ gaia _ . this approach describes the orbit distribution of stars by a family of parametrized distribution functions ( dfs ) proposed by mcmillan and binney , which are based on actions . we find that these parametrized dfs are flexible enough to capture well the observed phase - space distributions of individual abundance - selected galactic subpopulations of stars ( ` mono - abundance populations ' ) for a disc - like gravitational potential , which enables independent dynamical constraints from each of the galactic mono - abundance populations . we lay out a statistically rigorous way to constrain the galactic potential parameters by constructing the joint likelihood of potential and df parameters , and subsequently marginalizing over the df parameters . this approach explicitly incorporates the spatial selection function inherent to all galactic surveys , and can account for the uncertainties of the individual position velocity observations . on that basis , we study the precision of the parameters of the galactic potential that can be reached with various sample sizes and realistic spatial selection functions . by creating mock samples from the df , we show that , even under a restrictive and realistic spatial selection function , given a two - parameter gravitational potential , one can recover the true potential parameters to a few per cent with sample sizes of a few thousand . the assumptions of axisymmetry , of dfs that are smooth in the actions and of no time variation remain important limitations in our current study . [ firstpage ] galaxy : disc galaxy : fundamental parameters galaxy : halo galaxy : kinematics and dynamics galaxy : structure .
the formation of ripples and dunes at the surface of an erodible sand bed results from the interplay between the relief , the flow and the sediment transport .the aim of these two companion papers is to propose a coherent and detailed picture of this phenomenon in the generic and important case of a unidirectional turbulent stream .this first part is devoted to the study of the stationary flow over a wavy rough bottom . in the second partwe propose a common theoretical framework for the description of the different modes of sediment transport .hydrodynamics and transport issues at hand , we then revisit the linear instability of a flat sand bed submitted to a water shear flow and show that , in contrast to ripples , subaqueous dunes can not form by a primary linear instability .it has long been recognised that the mechanism responsible for the formation and growth of bedforms is related to the phase - lag between sediment transport and bed elevation ( ) .it has been shown in the context of aeolian dunes that this lag comes from two contributions , which can be considered as independent as the time scale involved in the bed evolution is much slower than the hydrodynamics relaxation ( ) .first there is a shift between the bed and the basal shear stress profiles .this shift purely results from the hydrodynamics and its sign is not obvious _ a priori _, i.e. the stress maximum can be either upstream or downstream the bed crest depending on the topography or the proximity of the free surface .the second contribution comes from the sediment transport : the sediment flux needs some time / length to adapt to some imposed shearing .this relaxation mechanism induces a downstream lag of the flux with respect to the shear . when the sum of these two contributions results in a maximum flux upstream the bed crest , sediment deposition occurs on the bump , leading to an unstable situation and thus to the amplification of the disturbance . in part 1, we shall focus on the first of these contributions , the second one being treated in part 2 . is perpendicular .the bottom profile is . in that experiment ,the wavelength and amplitude of the bumps are m and m , for a water depth m ) .the point of maximum shear on the bump ( where the lines are squeezed ) is located upstream the crest .[ wavyschematic ] ] we consider here the generic case of a flow over a fixed sinusoidal bottom of wavelength ( see figure [ wavyschematic ] for an illustration of the geometry and some notations ) . in order to obtain the basal shear stress and in particular its phase shift with respect to the topography, the equations of hydrodynamics must be solved in this geometry .the case of viscous flows has been investigated by .the first attempts to model the high reynolds number regime in the context of ripples and dunes in rivers have dealt with potential flows ( ) , for which the velocity field does not present any lag with respect to the bottom .the shallow - water approximation ( ) implies that the bedforms spread their influence on the whole depth of the flow .however , patterns only have a significant influence within a vertical distance on the order of their wavelength .it is then crucial to compute explicitly the vertical flow structure , taking into account the turbulent fluctuations . 
in order to overcome the flaws of the perfect flow, constant eddy viscosity closures have been tried to improve kennedy s original model ( ) .further progress has been made by , who used a more sophisticated modelling with an additional equation on the turbulent energy and a closure which involves a prandtl mixing length in the expression of the eddy viscosity . made use of the same turbulent modelling , but in the case of an infinite water depth .a mixing length approach was also used by to improve benjamin s laminar description . in the meteorological context of atmospheric flows over low hills ,a deep and fundamental understanding of the physics of turbulent flows over a relief has been developed from the 70 s ( see the review by ) .starting with the seminal work of , further refined by and , the gross emerging picture is that the flow can be thought of as composed of two ( or more ) layers , associated with different physical mechanisms and different length scales . have been able to compute analytically the basal shear stress for asymptotically large patterns , under an infinite flow depth assumption .their ideas have been discussed in a rather vast literature .the predictions of these calculations , and in particular this layered structure of the flow , has been compared with experiments ( see e.g. ) , or field measurements on large scale hills ( see e.g. the review paper by ) , with a good degree of success , especially on the upstream side of the bumps . moreover , they have been tested against the results of the numerical integration , in various configurations , of navier - stokes equations closed with different turbulent closures ( ) .the relevance of this approach for the description of the flow and the stresses around aeolian sand dunes has also been investigated ( see e.g. ) , and is amongst the current directions of research in that community ( ) . because the prediction of the stable or unstable character of a flat sand bed submitted to a turbulent shear flow is very sensitive to the way both hydrodynamics and transport issues are described and intermixed , we find it useful to discuss at length , in the two parts of this paper , the different mechanisms and scales involved at the different steps of the modelling .it is indeed particularly revealing that , despite the fact that the approach presented here is very close to those of and and has been motivated by these works , we basically disagree with their conclusions , especially that river dunes are initiated by the linear instability of a flat bed .the detail discussion we provide here gives also the opportunity to revisit the still debated question of the subaqueous ripple size selection ( see and references therein ) , as well as the important issue of the classification of bedforms ( ) .this article is structured as follows . in the next section, we briefly recall the equations for the base flow over a uniform bottom .we then study the linear solution in the case of wavelengths much smaller than the flow depth .importantly , the sensitivity of these linear results with respect to various changes in the modelling is tested in sections [ controlzzero ] and [ robust ] .section [ nl ] is devoted to the derivation of the first non - linear corrections . 
in section [ fs ], we investigate the effect of the free surface in the case of wavelengths comparable or larger than the flow depth and interpret it in terms of topography induced standing gravity waves .finally , we provide in section [ qualitativesummary ] a qualitative summary of the main results of the paper .the most technical considerations are gathered in appendices .we consider a turbulent flow over a relief . following reynolds decomposition between average and fluctuating ( denoted with a prime ) quantities , the equations governing the mean velocity field be written as : where is the reynolds stress tensor ( ) . for the sake of simplicity ,we omit the density factor in front of the pressure and the stress tensor .the aim of this paper is to describe quantitatively the average flow over a fixed corrugated boundary within this framework .the reference state is the homogeneous and steady flow over a flat bottom , submitted to an imposed constant shear stress .the turbulent regime is characterised by the absence of any intrinsic length and time scales . at a sufficiently large distance from the ground ,the only length - scale limiting the size of turbulent eddies the so - called mixing length is precisely ; the only mixing time - scale is given by the velocity gradient .as originally shown by , it results from this dimensional analysis that the only way to construct a diffusive flux is a turbulent closure of the form : where the mixing length is and isthe ( phenomenological ) von krmn constant . after integration, one obtains that the velocity has a single non zero component along the -axis , which increases logarithmically with ( ) : where is a constant of integration called the hydrodynamical roughness .this expression does not apply for .there should be layer of thickness close to the bottom , called the surface layer , matching the logarithmic profile to a null velocity on the ground .the hydrodynamical roughness should be distinguished from the geometrical ( or physical ) roughness of the ground , usually defined as the root mean square of the height profile variations . is defined as the height at which the velocity would vanish , when extrapolating the logarithmic profile to small .the physical mechanism controlling can be of different natures .if the ground is smooth enough , a viscous sub - layer of typical size must exist , whose matching with the logarithmic profile determines the value of . on the contrary, if the geometrical roughness is larger than the viscous sub - layer , turbulent mixing dominates at small with a mixing length controlled by the ground topography . in the case of a static granular bed composed of grains of size , reported values of the hydrodynamical roughness are reasonably consistent ( in , in and ( ) . in section [ nl ], we will justify the connection between geometrical and hydrodynamical roughness on a rigourous basis and show that they are not simply proportional .the situation is of course different in the presence of sediment transport , which may ( or not ) induce some negative feedback on the flow . in this case , the hydrodynamical roughness may directly be controlled by the transport characteristics ( e.g. mass flux and grain trajectories ) .nature presents many other physical processes controlling the roughness : for instance , the flexible stems of wetland plants in low marshes or , for the wind , the canopy or the waves over the ocean . 
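As a small numerical check of the closure recalled above: with the logarithmic profile u(z) = (u_*/kappa) ln(z/z_0), the Prandtl mixing-length stress (kappa z du/dz)^2 is constant with height and equal to u_*^2, which is what singles out the logarithmic law. The values of u_*, z_0 and kappa = 0.4 below are illustrative only.

```python
import numpy as np

kappa, u_star, z0 = 0.4, 0.05, 1e-4    # von Karman constant, shear velocity, roughness (illustrative)

z = np.logspace(-3, 0, 200)            # heights well above the surface layer
u = (u_star / kappa) * np.log(z / z0)  # law of the wall
dudz = np.gradient(u, z)

# Prandtl closure: tau_xz = (kappa*z)^2 |du/dz| du/dz (the density factor is omitted)
tau = (kappa * z) ** 2 * np.abs(dudz) * dudz

# away from the grid end points the stress is constant and equal to u_*^2
print(np.allclose(tau[1:-1], u_star ** 2, rtol=1e-3))
```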
in all these cases , it can be assumed that the logarithmic law is a good approximation of the velocity profile above the surface layer , with a single known parameter .we will first consider the asymptotic limit in which the typical relief length say , the dune wavelength is much larger than the surface layer thickness .the relief is locally flat at the scale , so that there must be a region close to the ground where the velocity profile shows a logarithmic vertical profile .we will then discuss the case of moderate values of the ratio , for which the flow becomes sensitive to the details of the mechanisms controlling the roughness . in the logarithmic boundary layer, the normal stresses can be written as : where is a second phenomenological constant estimated in the range .note that does not have any influence on the results as it describes the isotropic component of the reynolds stress tensor , which can be absorbed into the pressure terms .normal stress anisotropy is considered in section [ robust ] and appendix [ avi ] . introducing the strain rate tensor and its squared modulus , we can write both expressions ( [ tauxzprandtl ] ) and ( [ taullprandtl ] ) in a general tensorial form : in this paper, we focus on 2d steady situations , i.e. on geometries invariant along the transverse -direction , see figure [ wavyschematic ] .as they are of permanent use for the rest of the paper , we express the components of the velocity and stress equations in the - and -directions .the navier - stokes equations read : the stress expressions are the following : in these expressions , the strain tensor components are given by and the strain modulus by : now consider the turbulent flow over a wavy bottom constituting the floor of an unbounded boundary layer . in rivers, this corresponds to the limit of a flow depth much larger than the bed - form wavelength .the solution is computed as a first order linear correction to the flow over a uniform bottom , using the first order turbulent closure previously introduced . for small enough amplitudes, we can consider a bottom profile of the form without loss of generality . is the wavelength of the bottom and the amplitude of the corrugation , see figure [ wavyschematic ] .the case of an arbitrary relief can be deduced by a simple superposition of fourier modes .we introduce the dimensionless variable , the dimensionless roughness and the function : we also switch to the standard complex number notation : ( real parts of expressions are understood ) .we wish to perform the linear expansion of equations ( [ nscont])-([strainmod ] ) with respect to the small parameter .the mixing length is still defined as the geometrical distance to the bottom : .we introduce the following notations for the two first orders : , \label{defu}\\ u_z & = & u _ * k\zeta e^{ikx } w , \label{defw}\\ \tau_{xz } & = & \tau_{zx}= - u_*^2 \left[1+k\zeta e^{ikx } s_t\right ] , \label{defst}\\ p+\tau_{zz } & = & p_0+u_*^2 \left [ \frac{1}{3}\chi^2 + k\zeta e^{ikx } s_n\right ] , \label{defsn}\\ \tau_{zz } & = & u_*^2 \left [ \frac{1}{3}\chi^2 + k\zeta e^{ikx } s_{zz}\right ] , \label{defszz}\\ \tau_{xx } & = & u_*^2 \left [ \frac{1}{3}\chi^2 + k\zeta e^{ikx } s_{xx}\right ] . 
\label{defsxx}\end{aligned}\ ] ] the quantities , , etc , are implicitly considered as functions of .an alternative choice is to consider functions of the coordinate .such alternative functions are denoted with a tilde to make the distinction .this important but somehow technical issue of the choice of a representation is discussed in appendix [ rep ] .although the curvilinear and cartesian systems of coordinates are equivalent , the distinction between the two is of importance when it comes to the expression of the boundary conditions , and for the range of amplitudes for which the linear analysis is no more valid ( see section [ nl ] ) .in particular , vertical profiles in the forthcoming figures will be mostly plotted as a function of the shifted variable .the linearised strain rate tensor reads finally the navier - stokes equations lead to taking the difference of equations ( [ relaxsxx ] ) and ( [ relaxszz ] ) , one can compute to obtain four closed equations : introducing the vector , we finally get at the first order in the following compact form of the equation to integrate : the general solution of this equation is the linear superposition of all solutions of the homogeneous system ( i.e. with ) , and a particular solution .four boundary conditions must be specified to solve the above equation ( [ systlinplaque ] ) .the upper boundary corresponds to the limit , for which we ask that the vertical fluxes of matter and momentum vanish asymptotically .this means that the first order corrections to the shear stress and to the vertical velocity must tend to zero : and . in practice , a boundary at finite height ( at ) is introduced , at which we impose a null vertical velocity and a constant tangential stress so that .this corresponds to a physical situation where the fluid is entrained by a moving upper plate , for instance a stress - controlled couette annular cell .then , we consider the limit , i.e. when the results become independent of . the lower boundary condition must be specified on the floor ( ) .we consider here the limit in which the surface layer thickness is much smaller than the wavelength .this allows to perform an asymptotic matching between the solution and the surface layer , whatever the dynamical mechanisms responsible for the hydrodynamical roughness are . indeed , focusing on the surface layer , we know that in the limit , the asymptotic behaviour of the local tangential velocity should be a logarithmic profile controlled by the local shear stress and the roughness .the solution of ( [ systlinplaque ] ) should thus match this asymptotic behaviour as .thus , is the only parameter inherited from the surface layer in the limit .we will investigate the situation where this approximation is not valid anymore in section [ controlzzero ] . in the limit , the homogeneous solution of ( [ systlinplaque ] )can be expanded in powers of and and expressed as the sum over four modes . adding the asymptotic behaviour of the particular solution , the full solution writes : the next terms in this expansion are . . is the distance to the bottom , rescaled by the wavenumber . 
in all panels ,the solid lines represent the real parts of the functions , whereas the dotted lines represent the imaginary ones .dashed lines show the asymptotic behaviours ( [ uasymp]-[snasymp ] ) used as boundary conditions .they match the solutions in the inner layer , which extends up to here .we note and .close to the boundary , a plateau of constant shear stress can be observed , which corresponds to the logarithmic zone .it is embedded into a slightly larger zone of constant pressure in which the shear stress varies linearly . ] the values of the four coefficients , ... , are selected by the matching with the surface layer . would correspond to a non vanishing normal velocity through the surface layer and should thus be null . precisely corresponds to the logarithmic profile with a roughness and a basal shear stress modulation .this gives . would correspond to a modulation of the local roughness more precisely of its logarithm .we do not consider such a modulation so that . corresponds to a sub - dominant behaviour associated to the basal pressure modulation ( ) . in summary, the functions , , and should follow the following asymptotic behaviour : the region of thickness in which this asymptotic behaviour constitutes a good approximation of the flow field is called the inner layer .equation ( [ snasymp ] ) means that the total pressure is constant across this boundary layer : and equation ( [ stasymp ] ) that the shear stress decreases linearly with height according to : the tangential pressure gradient is balanced by the normal shear stress , which means that inertial terms are negligible or equivalently that the fluid is in local equilibrium . in terms of energy ,the space variation of the internal energy ( pressure ) is dissipated in turbulent `` friction '' .these two equations correspond to the standard lubrication approximation for quasi - parallel flows . in practice , we solve the equations using a fourth order runge - kutta scheme with a logarithmic step . the integration is started at an initial value of inside the inner layer i.e. which verifies ) .we write the solution as a linear superposition of the form , where the different terms verify : the boundary conditions on the bottom are then automatically satisfied , and the top ones give algebraic equations on the real and imaginary parts of and , which can be solved easily .we have checked that the result is independent of the initial value of , as long as it remains in the announced range .( aspect ratio ) , computed from the linearised equations ( ) .the flow direction is from left to right .note the left - right asymmetry of the streamlines around the bump in the inner layer ( grey lines ) .note also the onset of emergence of a recirculation bubble in the troughs .the thick line in the top right corner shows the positions that maximises the velocity along a streamline . 
]the velocity and stress profiles resulting from the integration of equation ( [ systlinplaque ] ) are displayed in figure [ modesverticaux ] .looking at panel ( c ) , one can clearly see the region close to the bottom where the shear stress is constant , while the horizontal velocity component ( panel a ) exhibits a logarithmic behaviour .this plateau almost coincides with the inner layer , which is the zone where the solution is well approximated by the asymptotic behaviour derived above .the inner layer is embedded in a wider region characterised by a constant pressure ( panel d ) .the estimate of the thickness is of crucial importance for the transport issue ( see next section and part 2 ) . is the scale at which inertial terms are of the same order as stress ones in the reynolds averaged navier - stokes equations .the original estimation of given by was further discussed in several later papers ( see e.g. ) .our data are in good agreement with the scaling proposed by consistently , this scaling relationship is precisely that of the first neglected terms in the asymptotic expansion ( [ assymptotexpansion ] ) .away from the bottom , all profiles tend to zero , so that one recovers the undisturbed flow field ( [ uzero ] ) at large .the shape of these profiles are very consistent with the work of , who have compared the influence of the closure scheme on the linear flow over a relief , which means that the precise choice of the turbulent closure does not affect significantly the results . .( a ) lin - lin plot in the shifted representation .( b ) lin - log plot in the non - shifted representation .the solid lines correspond to the real part and the dotted line to the imaginary one .the velocity disturbance decreases exponentially over one wavelength ( dashed line ) . in this outer region, the reynolds stress can be neglected . ] , , and as a function of .these plots show the dependence of the basal shear and normal stresses with the number of decades separating the wavelength from the soil roughness , for a given bump aspect ratio .the solid line corresponds to the results of the model , using the asymptotic matching with the surface layer .the dashed lines represent the analytical formula deduced from by .they agree well at very small . ] in order to visualise the effect of the bottom corrugation on the flow , the flow streamlines are displayed in figure [ streamlines ] ( see appendix [ streamlines ] for explanations about their computation ) .it can be observed that the velocity gradient is larger on the crest than in the troughs as the streamlines are closer to each other .the flow is disturbed over a vertical distance comparable to the wavelength .a subtler piece of information concerns the position along each streamline at which the velocity is maximum .these points are displayed in the right corner of figure [ streamlines ] .away from the bottom , they are aligned above the crest of the bump . very close to it , however , they are shifted upstream . in other words ,the fluid velocity is in phase with the topography in the upper part of the flow , but is phase advanced in the inner boundary layer where the shear stress tends to its basal value . 
in this inner layer , the profile is well approximated by its asymptotic expression ( [ uasymp ] ) .an inspection of the velocity profile evidences two distinct regions ( see figures [ modesverticaux ] and [ perfectfluidinner ] ) , in which we recognise those at the basic partitioning of the flow in jackson & hunt work ( 1975 ) , and subsequent papers .there is an outer region ( ) , where decreases exponentially with ( figure [ perfectfluidinner](b ) ) .seeking for asymptotic solutions decreasing as , one has to solve the eigenvalue problem for asymptotically large values of .at the two leading orders , the decrease rate is given by : the asymptotic behaviour is an oscillatory relaxation corresponding to .however , the observed decrease corresponds to the intermediate asymptotic regime for which the solution is .this behaviour is reminiscent from that of an inviscid potential flow . in other words ,the effect of the turbulent shear stress on the flow disturbance can be neglected .the intermediate region between the inner and the outer layers is responsible for the asymmetry of the flow as well as the upstream shift of the maximum velocity discussed above .let us emphasise again that this is the physical key point for the formation of bedforms .one can understand the reason of the phase shift with the following argument .the external layer can be described as a perfect irrotational flow .since the elevation profile is symmetric ,the streamlines are symmetric too , as the flow is solely controlled by the balance between inertia and the pressure gradient induced by the presence of the bump . as a consequence ,the velocity is maximum at the vertical of the crest .now , inside the inner layer , this flow is slowed down by turbulent diffusion of momentum .focusing on the region of matching between these outer and inner regions , the velocity needs some time to re - adapt to a change of shear stress , due to inertia .thus , the shear stress is always phase - advanced with respect to the velocity .one concludes that the basal shear stress is phase - advanced with respect to the bump . as mentioned in the introduction , we are especially interested in the shear stress and pressure distributions on the bottom .we note and .the ratio is the tangent of the phase shift between the shear stress and the topography .it is positive as the shear stress is phase advanced .the four coefficients , , and are displayed as a function of in figure [ abcdlin ] .their overall dependence with is weak , meaning that the turbulent flow around an obstacle is mostly scale invariant .more precisely , following jackson & hunt s work ( ) , it has been shown that , for asymptotically small , one expects logarithmic dependencies : where euler s constant is , is defined by the equation and with .note that tends to and to as , as expected when the inner layer thickness vanishes . in this limit ,the basal shear stress is directly proportional to the square of the velocity inherited from the outer layer , which is solution of the potential flow problem .these expressions agree well with our numerical results for very small .however , for realistic values of , e.g. , this approximation can not be accurately used as it leads to errors of order one note that jackson & hunt s expressions tend to diverge at larger . in comparison to and , we observe that the normal stress coefficients and are more robust with respect to the details of the model . in the limit of a perfect flow ,the pressure varies as the square of the velocity . 
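To make the phase advance explicit, write the basal shear-stress perturbation as Re[(A + iB) k zeta e^{ikx}] over the bottom zeta cos(kx); this particular sign convention is an assumption made here for illustration, the definitions in the text being elided. With A and B positive, the stress maximum then sits a distance arctan(B/A)/k upstream of the crest, as the short check below confirms.

```python
import numpy as np

A, B = 4.0, 1.0                            # illustrative in-phase / quadrature coefficients
x = np.linspace(-np.pi, np.pi, 200001)     # one wavelength of the bottom, with k = 1

tau_pert = A * np.cos(x) - B * np.sin(x)   # Re[(A + iB) e^{ikx}]
x_max = x[np.argmax(tau_pert)]

# both numbers are close to -0.245: the maximum lies upstream of the crest at x = 0
print(x_max, -np.arctan(B / A))
```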
here, one needs to consider the velocity at the scale of the perturbation , say , where the logarithmic factor should be evaluated for of order unity . from this argument, we predict that the pressure coefficient should scale as the square of ( a parabola in figure [ abcdlin ] ) , which is very accurately verified . more precisely , ^ 2 \\ \quad & \end{tabular } \\ \\ \begin{tabular}{ll } & \\ \quad & \\ \quad & \\ \quad & \end{tabular } \\ \end{tabular } \right ) .\end{aligned}\ ] ]several of the free surface effects can be recovered within a simple friction force model , for which analytical expressions of the linear solution of the flow can be derived .in particular , the resonance condition as well as the behaviour of the basal stress coefficients , , and for can be found and interpreted .we start from the navier - stokes equations for a perfect flow , with a crude additional turbulent friction term as an approximation of the stress derivatives : physically , the force applied to a fluid particle is directly related to the relative velocity with respect to the ground . at an angle ,the following plug flow is an homogeneous solution of the above equations : in order to estimate the value of the friction coefficient , one can make use of the fact that typical turbulent velocity vertical profiles are logarithmic .however , as the logarithm varies slowly when is much larger than , we write . identifying the shear stress on the bottom as , we finally get with the relation ( [ ux0_ffc ] ) for in the range - , we get a typical value for on the order of few .we now normalize quantities by and and get a single non - dimensional ( froude ) number : the starting equations can be linearised around the above reference state . looking a the flow over a corrugated bottom , it is easy to show that the solution is of the following form , \\ u_z & = & \overline{u}ik\zeta e^{ikx } \left [ a_+ e^{kz } + a_- e^{-kz } \right ] , \\ p & = & g \cos \theta ( h - z ) + \overline{u}^2 ( kh - i 2\omega ) \frac{\zeta e^{ikx}}{h } \left [ a_+ e^{kz } - a_- e^{-kz } \right ] , \end{aligned}\ ] ] where and must be determined by the boundary conditions .this exponential form is characteristic of potential flows .we require that the velocity normal to the bottom vanish . 
following the notations of the main part of the paper , we define such that the free surface is at the altitude .it is a material line where the pressure vanishes .the three boundary conditions are then : where , as before , is defined as .the constants and , as well as are thus solutions of from which we get : , \\a_- & = & \frac{1}{2 } \left [ 1 + \frac{(kh - i2\omega)\tanh kh - \frac{1}{{{\mathcal{f}}}^2 } } { ( kh - i2\omega ) - \frac{1}{{{\mathcal{f}}}^2 } \tanh kh } \right ] .\end{aligned}\ ] ] the shear stress is not part of the variables of this model , but we can consistently define it as .looking at the shear stress and normal stress on the bottom , in accordance with the notations of the previous sections of the paper , we introduce the coefficients , , and as , \\p_b & = & gh\cos\theta + \omega \overline{u}^2 ( c+id ) k\zeta e^{ikx},\end{aligned}\ ] ] which gives \tanh kh - \frac{1}{{{\mathcal{f}}}^2 } \ , kh [ \tanh^2 kh+1 ] } { \left ( kh-\frac{1}{{{\mathcal{f}}}^2}\tanh kh \right ) ^2 + 4\omega^2 } \ , , \\b & = & \frac{2\omega}{{{\mathcal{f}}}^2 } \ , \frac{[\tanh^2 kh-1 ] } { \left ( kh-\frac{1}{{{\mathcal{f}}}^2}\tanh kh \right ) ^2 + 4\omega^2 } \ , , \\ c & = & \frac{1}{2\omega } \left ( -a-\frac{2\omega b}{kh }\right ) , \\ d & = & \frac{1}{2\omega } \left ( -b+\frac{2\omega a}{kh } \right ) .\end{aligned}\ ] ] it is worth noting that the friction force model predicts negative values of for any .this means that there is always a phase delay of the shear stress with respect to the bottom , which is a clear disagreement with the full solution . in order to fix this flaw, one would need to empirically introduce an imaginary part to .finally , this discrepancy shows that a precise description of the phase between the basal friction and the relief is a subtle and difficult issue that fully justify the use of a rigorous but heavy formalism .ayotte , k.w ., xu , d. & taylor , p.a .1994 the impact of turbulence closure schemes on predictions of the mixed spectral finite - difference model for flow over topography ._ boundary - layer met . _ * 68 * , 1 - 33 .prandtl , l. 1925 bericht ber untersuchungen zur ausgebildeten turbulenz ._ * 3 * , 136 - 139 . after ,bradshaw , p. 1974 possible origin of prandt s mixing - length theory , _ nature _ * 249 * , 135 - 136 .
in the context of subaqueous ripple and dune formation , we present here a reynolds - averaged calculation of the turbulent flow over a topography . using a fourier decomposition of the bottom elevation profile , we perform a weakly non - linear expansion of the velocity field , sufficiently accurate to recover the separation of streamlines and the formation of a recirculation bubble beyond some critical aspect ratio . the normal and tangential basal stresses are investigated in detail ; in particular , we show that the phase shift of the shear stress with respect to the topography , responsible for the formation of bedforms , appears in an inner boundary layer where shear stress and pressure gradients balance . we study the sensitivity of the calculation with respect to ( i ) the choice of the turbulence closure , ( ii ) the motion of the bottom ( growth or propagation ) , ( iii ) the physics at work in the surface layer , responsible for the hydrodynamic roughness of the bottom , ( iv ) the aspect ratio of the bedform and ( v ) the effect of the free surface , which can be interpreted in terms of standing gravity waves excited by the topography . the most important effects are those of points ( iii ) to ( v ) , in relation to the intermixing of the different length scales of the problem . we show that the dynamical mechanisms controlling the hydrodynamical roughness ( mixing due to roughness elements , viscosity , sediment transport , etc . ) have an influence on the basal shear stress when the thickness of the surface layer is comparable to that of the inner layer . we find that non - linear effects tend to oppose linear ones and are of the same order for bedform aspect ratios of the order of . we show that the influence of the free surface on the basal shear stress is dominant in two ranges of wavelength : when the wavelength is large compared to the flow depth , so that the inner layer extends throughout the flow , and in the resonant conditions , when the downstream material velocity balances the upstream wave propagation .
the problem we consider here is to compute the exponential of an upper block - triangular , block - toeplitz matrix , that is , a matrix of the kind ,\ ] ] where , , are matrices .our interest stems from the analysis in dendievel and latouche of the erlangization method for markovian fluid models , but the story goes further back in time .the erlangian approximation method was introduced in asmussen _ in the context of risk processes ; it was picked up in stanford _ where a connection is established with fluid queues .other relevant references are stanford _ where the focus is on modelling the spread of forest fires , and ramaswami _ et al . _ where some basic algorithms are developed .markovian fluid models are two - dimensional processes where is a markov process with infinitesimal generator on the state space ; to each state is associated a rate of growth and is controlled by through the equation performance measures of interest include the distributions of and of various first passage times .usually , is called the phase of the process at time and its level , and the phase space is partitioned into three subsets , and such that , or if is in , or , respectively . to simplify our presentation without missing any important feature , we assume below that is empty . the first return probabilities of to its initial level play a central role in the analysis of fluid queues .it is customary to define two matrices and of first return probabilities : , \qquad \mbox{ , ,}\ ] ] and , \qquad \mbox{ , ,}\ ] ] where is the first passage time to level 0 .thus , the entries of and are the probability of returning to the initial level after having started in the upward , and the downward directions , respectively .if the process starts from some level , then = ( \begin{bmatrix } i\\ \psi \end{bmatrix } e^{hx})_{ij } \qquad \mbox{ , ;}\ ] ] here , is a square matrix on and is given by where and are submatrices of the generator , indexed by and , respectively , and is a diagonal matrix with on the diagonal .a similar equation holds for .the matrices and are solutions of algebraic riccati equations and their resolution has been the object of much attention .very efficient algorithms are available , and we refer to bini _ et al . _ and bean _ et al . _ .the erlangian approximation method is introduced in to determine the detailed distribution of .the idea is that , to compute the probability , \qquad \mbox{ , }\ ] ] for a fixed value , it is convenient to replace by a random variable with an erlang distribution , with parameters for some positive integer .the random variable has expectation and variance , so that is a good approximation of if is large enough . from a computational point of view, the advantage is that one replaces systems of integro - differential equations by linear equations .the long and the short of it is that the original system is replaced by the process with a two - dimensional phase on the state space and with the generator where .the physical interpretation is that the absorbing state 0 is entered at the random time , and the component of the phase marks the progress of time towards . some authors ( for instance ) report that good approximations may be obtained with small values of . because of the toeplitz - like structure of , the matrices and are both upper block - triangular block - toeplitz and it is interesting to use the toeplitz structure in order to reduce the cost when is large .this is done in for the matrix . 
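Before turning to the algorithms, the following small check illustrates the structural fact exploited throughout: upper block-triangular block-Toeplitz matrices are closed under sums and products, so the exponential of a matrix of the kind considered above is again upper block-triangular block-Toeplitz, with diagonal blocks equal to the exponential of the diagonal block. The blocks below are random and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

def block_triangular_toeplitz(blocks):
    """Assemble the upper block-triangular block-Toeplitz matrix whose first
    block row is (A_0, A_1, ..., A_{n-1})."""
    n, m = len(blocks), blocks[0].shape[0]
    T = np.zeros((n * m, n * m))
    for d, A in enumerate(blocks):               # d-th block superdiagonal
        for i in range(n - d):
            T[i*m:(i+1)*m, (i+d)*m:(i+d+1)*m] = A
    return T

rng = np.random.default_rng(0)
m, n = 2, 5
blocks = [rng.random((m, m)) for _ in range(n)]
blocks[0] -= np.diag(blocks[0] @ np.ones(m) + 0.1)   # make the diagonal block subgenerator-like

E = expm(block_triangular_toeplitz(blocks))

# the (i, j) block of the exponential depends only on j - i, and the
# diagonal blocks equal exp(A_0)
toeplitz_ok = all(np.allclose(E[0:m, d*m:(d+1)*m], E[m:2*m, (d+1)*m:(d+2)*m])
                  for d in range(n - 1))
print(toeplitz_ok, np.allclose(E[0:m, 0:m], expm(blocks[0])))   # True True
```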
herewe address the question of efficiently computing the exponential matrix for a given value of , where has the structure of .we shall assume without loss of generality that .we recall that the exponential function can be extended to a matrix variable by defining for more details on the matrix exponential and more generally on matrix functions we refer the reader to higham .the matrix defined in is of order and it may be huge , since a larger leads to a better erlangian approximation , while the size of the blocks is generally small . the matrix is a subgenerator , i.e. , it has negative diagonal entries , nonnegative off - diagonal entries , and the sum of the entries on each row is nonpositive . since block - triangular block - toeplitz matrices are closed under matrix multiplication , it follows from that the matrix exponential is also an upper block triangular , block - toeplitz matrix ; in particular , the diagonal blocks of coincide with .moreover , it is known that the matrix is nonnegative and substochastic .the problem of the computation of the exponential of a generator has been considered in xue and ye and by shao et al . , where the authors propose component - wise accurate algorithms for the computation .these algorithms are efficient for matrices of small size . for the erlangian approximation problem ,these algorithms are useless for the large size of the matrices involved .recently , some attention has been given to the computation of the exponential of general toeplitz matrices by using arnoldi method ( lee _ et al . _ , pang and sun ) . in our framework ,toeplitz matrices are block - triangular so that they form a matrix algebra .this property is particularly effective for the design of efficient algorithms and we propose some numerical methods that exploit the block - triangular block - toeplitz structure and the generator properties . unlike the general methods ,our algorithms allow one to deal with matrices of very large size .two methods rely on spectral and computational properties of block - circulant and block -circulant matrices ( bini , bini _ et al . _ ) and on the use of fast fourier transforms ( fft ) .recall that block -circulant matrices have the form ,\ ] ] and that a block - circulant matrix is a block -circulant matrix with . for simplicity ,we denote by the block -circulant matrix .since block -circulant matrices can be block - diagonalized by fft , the computation of the exponential of an block -circulant matrix with blocks can be reduced to the computation of exponentials of matrices .these latter exponentials are independent from each other and can be computed simultaneously with a multi - core architecture at the cost of a single exponential .the idea of the first method is to approximate by where and is sufficiently small .we analyse the error and are thereby able to choose the value of which gives a good balance between the roundoff error and the approximation error .in fact , the approximation error grows as while the roundoff error is , where is the machine precision .this leads to an overall error which is . 
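A scalar-block (m = 1) sketch of this first approach: the upper triangular Toeplitz matrix is replaced by the ε-circulant with the same first row, which becomes an ordinary circulant after a diagonal scaling by the powers of an n-th root of ε and is then diagonalised by the FFT. The test data are illustrative; with a real ε of the order of the square root of the machine precision the observed error is itself of that order, in line with the analysis above (a pure imaginary ε with real parts taken, discussed below, improves on this).

```python
import numpy as np
from scipy.linalg import expm, toeplitz

def expm_row_ecirculant(a, eps=1e-8):
    """First row of exp(C_eps(a)), the eps-circulant with first row a, via FFT.
    Used as an O(eps) approximation to the first row of exp(T(a)), where T(a)
    is the upper triangular Toeplitz matrix with first row a (scalar blocks)."""
    n = len(a)
    delta = complex(eps) ** (1.0 / n)        # an n-th root of eps
    scal = delta ** np.arange(n)
    lam = n * np.fft.ifft(scal * a)          # eigenvalues of the scaled circulant
    row = np.fft.fft(np.exp(lam)) / n        # first row of its exponential
    return row / scal                        # undo the diagonal scaling

# illustrative subgenerator-like first row: negative diagonal, decaying tail
n = 64
a = np.zeros(n)
a[0], a[1:] = -1.5, 0.5 ** np.arange(1, n)

T = toeplitz(np.r_[a[0], np.zeros(n - 1)], a)          # dense reference
err = np.abs(expm(T)[0] - expm_row_ecirculant(a).real).max()
print(err)   # roughly the square root of the machine precision for eps ~ 1e-8
```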
by using the fact that the solution is real , by choosing a pure imaginary number we get an approximation error which leads to an overall error .since the approximation error is a power series in , we devise a further technique which consists in averaging the solutions computed with different values of .this way , we are able to cancel out the components of the error of degree less than .this leads to a substantial improvement of the precision .moreover , since the different computations are independent from each other , the computational cost in a multicore architecture is independent of . in our second approach , the matrix is embedded into a block - circulant matrix , where is sufficiently large , and an approximation of is obtained from a suitable submatrix of .the computation of is reduced to the computation of exponentials of matrices , and our error analysis allows one to choose the value of so as to guarantee a given error bound in the computed approximation .the third numerical method consists in specializing the shifting and taylor series method of .the block - triangular toeplitz structure is exploited in the fft - based matrix multiplications involved in the algorithm , leading to a reduction of the computational cost .the algorithm obtained in this case does not seem well suited for an implementation in a multicore architecture .we compare the three numerical methods , from a theoretical as well as from a numerical point of view . from our analysis , we conclude that the method based on -circulant matrices is the fastest and provides a reasonable approximation to the solution .moreover , by applying the averaging technique we can dramatically improve the accuracy .the method based on embedding and the one based on power series perform an accurate computation but are slightly more expensive .it must be emphasised that the use of fft makes the algorithms norm - wise stable but that component - wise stability is not guaranteed . in consequence , the matrix elements with values of modulus below the machine precision may not be well approximated in terms of relative error .the paper is organised as follows . in sections[ sec : der ] and [ sec : fastcomp ] , we recall properties of the exponential of a subgenerator and of its derivatives , and some basic properties of block - toeplitz and block - circulant matrices which are used in our algorithms . in section [ sec : expcirc ] , we show how to compute the exponential of a block -circulant matrix by using fast arithmetic based on fft and we perform an error analysis .we present in section [ sec : exptt ] the algorithms to compute the exponential of : first we analyse the decay of off - diagonal entries of the matrix exponential , next we describe the new methods and perform an error analysis .we conclude with numerical experiments in section [ sec : exper ] .a subgenerator of a markov process is a matrix of real numbers such that the off diagonal entries of are nonnegative , the diagonal entries are negative , and the sum of the entries on each row is nonpositive .we denote by the column vector with all entries equal to 1 , with size according to the context .if is a subgenerator , then and is called a generator if the row sum on all rows is zero .let .the matrix is a nonnegative matrix , and we may write . from the latter equality it follows that the matrix exponential is nonnegative. 
moreover , since it follows that .therefore , in view of , .thus we may conclude that , that is , is a substochastic matrix .we recall the definition and some properties of the gteaux and frchet derivatives , and their expression for the matrix exponential function , together with some properties when the matrix is a subgenerator .we refer the reader to for more details .recall that if is a subgenerator , then is a substochastic matrix for any and in particular .therefore , by taking norms in , we obtain the upper bound which may be extended to the -th order gteaux derivative as in the next proposition .[ prop1 ] if is a subgenerator , then }(tx , e)\|_\infty \le t^j \| e\|_\infty^j\ ] ] for , for any . moreover , if is a nonnegative matrix , then }(tx , e) ]. the inductive step is immediately proved , since from we have }(tx , e)\|_\infty & \le j\int_0^t \|e^{(t - s)x}\|_\infty \|e\|_\infty \|g^{[j-1]}(sx , e)\|_\infty ds\\ & \le j \int_0^t \|e\|_\infty^j s^{j-1 } ds=\|e\|_\infty^j t^j , \end{split}\ ] ] where the last inequality follows from the inductive assumption . if the matrix is nonnegative , from the recurrence and from the fact that is nonnegative and substochastic for any , it follows by induction that }(tx , e) ] coincides with the first block - row of and the entries of any other block - row are obtained by the entries of the previous block - row by a cyclic permutation which moves the last block entry to the first position and shifts the remaining block - entries one place to the right . for instance , for one has .\ ] ] observe that a block - circulant matrix is a particular block - toeplitz matrix .block - circulant matrices can be simultaneously block - diagonalized by means of fft , that is , this property shows that block - circulant matrices are closed under matrix multiplication , i.e. , they form a matrix algebra , moreover the product of a circulant matrix and a vector can be computed by means of algorithm [ alg:1 ] .this algorithm performs the computation with ffts and matrix multiplications . since are sufficient to multiply two matrices , the overall cost of algorithm [ alg:1 ] is ops . , , if the input block - vectors are real then the vectors and have a special structure , that is , the components and are real while , , for . 
in this case , the number of matrix multiplications at step 3 of algorithm [ alg:1 ] is reduced to .[ rem : cir]observe that the product of two circulant matrices may be computed by means of a product of a circulant matrix and a vector by means of algorithm [ alg:1 ] .in fact , since the last column of the block - circulant matrix is the block - vector , if then we find that , where , .we denote by the block - vector and by the block - upper triangular block - toeplitz matrix whose first row is ] stands for inequality up to lower order terms .the symbol denotes the machine precision .we recall the following useful fact ( see ) for , and we use the following properties involving norms , where in order to perform the error analysis of algorithm [ alg : expepsc ] , we recall the following result concerning fft ( see ) .[ th : fft ] let be a vector of components , , integer , , where is the fourier matrix .let be the vector obtained in place of by applying the cooley - tukey fft algorithm in floating point arithmetic with precision where the roots of the unity are approximated by floating point numbers up to the error .then in particular , with and performing a first - order error analysis where we consider only the part of the error which is linear in we have observe that , since , we may replace with in the statement of theorem [ th : fft ] .we split algorithm [ alg : expepsc ] into three parts .the first part consists in computing the entries of the matrices by means of steps 1 and 2 , the second part consists in computing the entries of and the third part is formed by the remaining steps 4 and 5 .the first and third part can be viewed as the collection of independent computations applied to the entry of the generic block for .more specifically , given the pair , denote , , the vectors whose components are , , , , respectively .the computation of is obtained in the following way : , , for , . while , denoting the vectors whose components are , , , , respectively , the computation of is obtained in the following way : , for .define , , , where are the values obtained in place of by performing computations in floating point arithmetic .we denote also by and the -th component of and , respectively .in our analysis we assume that the constants have been precomputed and approximated with the numbers such that , , . since , from we find that .thus , denoting by the error introduced in computing the fft of in floating point arithmetic , we have and in view of , , and we obtain where the last inequality follows from the fact that since and .this implies that is such that which yields concerning the second part of the computation , for the matrix we have where is the error generated by computing the matrix exponential in floating point arithmetic .here we assume that for some positive constant which depends on the algorithm used to compute the matrix exponential . from the properties of the gteaux derivative onehas , and from theorem [ thm : new ] , applied with , , it follows that and . combining these results with leads to the bound \leq\end{array}}\|\delta_{v_k}\|_\infty+\mu\tau.\ ] ] finally , for the third part of the computation , consisting of steps 4 and 5, we have where is the error obtained by computing in floating point arithmetic .thus from we have \leq\end{array}}\frac 1n \|f\delta_w\|_\infty+ \mu \gamma \log_2n\|r\|_2\\ &\le \|\delta_w\|_\infty+\mu\gamma \sqrt n\log_2n\|y\|_\infty , \end{split}\ ] ] where the second inequality holds from and from since . 
moreover , we find that now we are ready to combine all the pieces and obtain the error bound on the computed value . fromwe get \leq\end{array}}|\theta|^{-k}(\|\delta_w\|_\infty + ( \zeta+\gamma \sqrt n\log_2n)\mu\|y\|_\infty).\ ] ] thus we have \leq\end{array}}m|\theta|^{-k}(\max_h\|\delta_{w_h}\|_\infty+ ( \zeta+\gamma \sqrt n\log_2n)\mu \max_h\|y_h\|_\infty).\ ] ] moreover , from and we conclude with the following bound \leq\end{array}}m|\theta|^{-k } ( \max_h\|\delta_{v_h}\|_\infty+\mu\tau + ( \zeta+\gamma \sqrt n\log_2n)\mu \max_h\|y_h\|_\infty).\ ] ] whence \leq\end{array}}\mu m|\theta|^{-k } ( m n(\zeta+\gamma \log_2n)\max_{r , s , h } \max_h\|y_h\|_\infty)\ ] ] and we may conclude with the following [ th : err ] let be the value of provided by algorithm [ alg : expepsc ] applied in floating point arithmetic with precision for computing , where , .denote .one has \leq\end{array}}\mu\epsilon^{-1 } m\varphi\ ] ] where , , and is the error bound in the computation of the matrix exponential , i.e. , such that for an matrix . in the case where , we apply algorithm [ alg : expcirc ] to compute the exponential of a block - circulant matrix and the above result leads to [ th : err2 ] let be the value of provided by algorithm [ alg : expcirc ] applied in floating point arithmetic with precision for computing , where , . denote .one has where latexmath:[ ] are defined by means of . from proposition [ prop1 ]we obtain }({\mathcal{t}}(u),l)\|_\infty \le \sum_{j=1}^\infty \frac{|\epsilon|^j}{j ! } \|l\|_\infty^j= e^{|\epsilon|\|l\|_\infty}-1.\ ] ] if is a pure imaginary number , since is a real matrix , the inequality is obtained by comparing the real parts in and by applying proposition [ prop1 ] . it is interesting to observe that the choice of an imaginary value for provides an approximation error of the order instead of .the idea of using an imaginary value for was used in in the framework of frchet derivative approximation of matrix functions .the error bound can be improved by performing the computation with several different values of and taking the mean of the real parts of the results obtained this way .for instance , choose , , where , and recall that , are power series in . taking the arithmetic mean of and , the components of odd degree in cancel out while the coefficient of is pure imaginary . therefore taking the real part of the arithmetic meanprovides an error .this technique can be generalized as follows .choose an integer and set , , where is a principal -th root of .then one can verify that the arithmetic mean of , is a power series in , moreover , is a pure imaginary number so that the real part of this mean provides an approximation with error .observe that computing the exponential for different values of might seem a substantial computational overload .however , in a parallel model of computation , the exponentials , , can be computed simultaneously by different processors at the same cost of computing a single exponential .algorithm [ alg : epscirc_int ] reports this averaging technique .set , compute the first block row of , , by means of algorithm [ alg : expepsc ] . 
set theorem [ thm1 ] provides us with a bound on the error generated by approximating the exponential of a block - upper triangular toeplitz matrix by means of the exponential of a block--circulant matrix .in fact , in practical computations in floating point arithmetic , the overall error is formed by two components : one component is given by the approximation error analyzed in theorem [ thm1 ] , the second component is due to the roundoff and is estimated by theorem [ th : err ] .more precisely , the effectively computed approximation in floating point arithmetic is the block - vector with components , , where is bounded in theorem [ th : err ] . on the other hand , where , by theorem [ thm1 ] , is such that \|_\infty\le \left\{\begin{array}{ll } \psi(|\epsilon|^2\| l\|_\infty^2 ) & \hbox{if~ } \epsilon \hbox{~is imaginary}\\[1ex ] \psi(|\epsilon|\| l\|_\infty ) & \hbox{otherwise } \end{array}\right.\ ] ] where .this way , for the overall error one has for , or .this shows the need to find a proper balance between the two errors : small values for provide a small approximation error but the roundoff errors diverge to infinity as .a good compromise is to choose so that the upper bounds to and have the same order of magnitude . equating these upper bounds in the case of non - imaginary yields \leq\end{array}}2\epsilon \|l\|_\infty\ ] ] and in the case of imaginary , {m\mu\varphi/\|l\|_\infty^2},\quad \|e_k\|_\infty{\begin{array}{c}^{\displaystyle \cdot}\\[-2ex]\leq\end{array}}2\epsilon^2\|l\|_\infty^2.\ ] ] the latter bound is an .this implies that asymptotically , as , we may loose of the digits provided by the floating point arithmetic .if we adopt the strategy of performing the computation with different values of , , so that the approximation error is , then the total error turns to , i.e. , only digits are lost .an interesting point is that the quantities and are involved in the expressions of the error bound .since is a generator , both these quantities are bounded from above by . however , by means of simple manipulations , we may scale the input so that it is bounded by 1 .this is performed by applying to the scaling and squaring technique of .let be an integer such that . then ,since , we first compute and then recover by performing repeated matrix squaring . in this way we have and .since is still a generator , the error analysis performed for applies as well , and we can approximate the first block - row of with the first block - row of for a suitable with . finally we recover an approximation to by computing by means of repeated squarings , by using the toeplitz structure and algorithm [ alg:2 ] , in view of remark [ rem : tt ] .the overall procedure is described in algorithm [ alg : ttepsc ] . 
, and compute , the first block - row of , by means of algorithm [ alg : expepsc ] if is imaginary , replace with the real part of the idea of this method is to embed the matrix into a block - circulant matrix .the first block - row of is approximated by the first blocks of the first block - row of .specifically , take and consider the block - vector defined in .the block - circulant matrix may be partitioned as ,\ ] ] where and are and block - matrices , respectively .denote by and by the matrices formed by the first and the last columns , respectively , of the identity matrix of size .the matrix can be also written as where the matrix is defined in .because of the triangular toeplitz structure , the desired matrix is identical to the block - leading submatrix of .our idea is to approximate the first block - row of with the first blocks of the first row of . as pointed out in section [ sec : expcirc ] , is a block - circulant matrix , and can be computed by means of algorithm [ alg : expcirc ] with ops , plus the cost of computing exponentials of matrices .denote by the first block - row of , so that . an approximation of the matrices , ,defining the first block - row of is provided by , ; as increases , the approximation improves , as shown by the following result .[ thm : erremb ] let , with .let and let , with .one has for , and \right\|_\infty \le f_k(\sigma)\ ] ] for any , where with and defined in . proof . by using and, we find that }(\mathcal t(u^{(k)}),h_k).\ ] ] equating the first blocks in the first block - row in the above equation yields = \sum_{j=1}^\infty \frac{1}{j ! }w^{[j]},\ ] ] where } ] .that is , }=\widehat{\mathcal e}_1^t g^{[j]}(\mathcal t(u^{(k)}),h_k)\mathcal e_1,\ ] ] where is the matrix formed by the first columns of the identity matrix .since , from and from proposition [ prop1 ] we deduce that }\ge 0 ] is the block - row vector formed by the last block - entries of the first block - row of , and }(s) ] .since the matrix is a subgenerator , it follows that and , from theorem [ thm : decexp ] , that for any , for , where the latter inequality follows from the fact that .this implies that moreover , since is a subgenerator , is nonnegative and , then , from proposition [ prop1 ] , we have }(s{\mathcal{t}}(u^{(k)}),h_k)\ge 0 ] .therefore , by taking norms in , we find that }\|_\infty & \le j \int_0 ^ 1 \| v(s)\|_\infty \| l \|_\infty \| z^{[j-1]}(s)\|_\infty ds \\ & \le j \| l\|_\infty^{j } e^{\alpha(\sigma^{n-1}-1 ) } \frac{\sigma^{-k+n}}{1-\sigma^{-1 } } \int_0 ^ 1 s^{j-1 } ds = \| l\|_\infty^{j } e^{\alpha(\sigma^{n-1}-1)}\frac{\sigma^{-k+n}}{1-\sigma^{-1}}. \end{split}\ ] ] hence , by taking norms in , we obtain . the matrices and have a probabilistic interpretation .namely , the matrix is the probability that the bmap is absorbed after time 1 , _ and _ at time 1 there have been arrivals ; the matrix is the probability that the bmap is absorbed after time 1 , _ and _ at time 1 there have been , or , or , or , arrivals . clearly , there are more trajectories favourable for than for and . similarly , there are more trajectories favourable for than for for a positive integer .this shows that , if we take a sequence of integers , , , and a sequence , , , , such that , then for . therefore , the sequence has some monotonicity property in its convergence to .the bound in shows that the error has an exponential decay as increases . moreover , such bound holds for any .therefore we can fix a tolerance and a , and find such that . 
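the choice of the embedding size is discussed next ; before that , the following hedged python sketch illustrates the embedding idea with scalar blocks : the triangular toeplitz matrix is embedded into a larger circulant matrix whose first row is the zero - padded first row , and the leading entries of the first row of the circulant exponential serve as the approximation . the dense ` expm ` calls are again for illustration only ; the actual method computes the circulant exponential via fft .

```python
import numpy as np
from scipy.linalg import expm, circulant

def upper_toeplitz(u):
    n = len(u)
    return sum(u[k] * np.eye(n, k=k) for k in range(n))

u = np.array([-1.0, 0.4, 0.3, 0.2, 0.05])   # first row of a subgenerator-like matrix
n = len(u)
exact_row = expm(upper_toeplitz(u))[0]

for K in (8, 16, 32):                        # embedding sizes (hypothetical values)
    # circulant matrix whose first ROW is [u, 0, ..., 0]; scipy.linalg.circulant
    # is built from the first COLUMN, hence the reversal below
    first_col = np.r_[u[:1], np.zeros(K - n), u[:0:-1]]
    approx_row = expm(circulant(first_col))[0, :n]
    print(K, np.abs(approx_row - exact_row).max())   # the error decays as K grows
```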
since we would like to keep as low as possible ,another way to proceed is to fix a tolerance and find such that the size for which is minimum .more specifically , after some manipulations , from the condition we obtain that where since is arbitrary , we choose such that has a minimum value .in fact , the function diverges to infinity as tends to 1 and to , therefore it has at least a local minimum and we can choose .when we perform the computation in floating point arithmetic , we have to consider also the error generated by roundoff in computing the exponential of a block - circulant matrix . in practical computations , we obtain a block - vector with components , , where is bounded in theorem [ th : err2 ] and where , by theorem [ thm : erremb ] , is such that \|_\infty\le f_k(\sigma).\ ] ] altogether , for the overall error , one has a similar analysis can be carried out for the relative error . in this casethe inequality is replaced by , for \|_\infty$ ] . so that the function is modified by replacing with . like at the end of section [ sec : epsc ] , in the overall estimate of the error ,the quantities and are bounded from above by , and we may scale the block - vector so that these quantities are bounded by 1 .the overall procedure is summarized in algorithm [ alg : ttemb ] , where the repeated squaring of the block - triangular block - toeplitz matrices can be performed by using algorithm [ alg:2 ] , as explained in remark [ rem : tt ] .set , and set with for , for apply algorithm [ alg : expcirc ] to compute the first block - row of set , for . in this section we use the taylor series method for computing the exponential of an essentially nonnegative matrix , where the block - triangular block - toeplitz structure is exploited to perform fast matrix - vector multiplications .the computation of the exponential of an essentially nonnegative matrix have been analyzed in and .following and , the taylor series method is applied to compute , since the matrix is nonnegative and can be obtained by means of the equation . in this way , we avoid possible cancellations in the taylor summation .denote by the taylor series truncated at the term , namely the following bound on the approximation error is given in .[ thm : trtay ] let be such that .then the scaling and squaring method is used to accelerate the convergence of the taylor series , by using the property that indeed , if is an estimate of , and if , then and the truncated taylor series expansion is used to approximate . since is block - triangular block - toeplitz , then .the toeplitz structure is used in the computation of the taylor expansion and in the squaring procedure .in fact , the computation of each term in the power series expansion consists in performing products between block - triangular block - toeplitz matrices , that can be done by applying algorithm [ alg:2 ] in view of remark [ rem : tt ] ; similarly in the squaring procedure at the end of the algorithm .concerning rounding errors , we observe that the taylor polynomial is the sum of nonnegative terms .therefore no cancellation error is encountered in this summation .the main source of rounding errors is the computation of the powers for which are computed by means of algorithm [ alg:2 ] in view of remark [ rem : tt ] relying on fft .we omit the error analysis of this computation , which is standard . however , we recall that in view of theorem [ th : fft ] , fft is normwise backward stable but not component - wise stable . 
for this reason , for the truncation of the power series it is convenient to replace the component - wise bound expressed by theorem [ thm : trtay ] by the norm - wise bound , from which we obtain that the condition implies that . the overall procedure is stated in algorithm [ alg : ttt ] . it is worth pointing out that , if the computation of the powers of the triangular toeplitz matrices is performed with the standard algorithm , then the computation is component - wise stable as shown in . set , set , , , compute an estimate of , or set ; compute and set and , , , . the numerical experiments have been performed in matlab . to compute the error obtained with the proposed algorithms we have first computed the exponential by using the ` vpa ` arithmetic of the symbolic toolbox with 40 digits and we have considered this approximation as the exact value . denote by , , the approximations of the blocks on the first row of , and define four errors : the component - wise absolute and relative errors cw - abs and cw - rel , taken as the maximum over all entries of all blocks , and the norm - wise errors \[ \textrm{nw - abs}=\|[a_0-\tilde a_0,\ldots , a_{n-1}-\tilde a_{n-1}]\|_\infty , \qquad \textrm{nw - rel}=\|[a_0-\tilde a_0,\ldots , a_{n-1}-\tilde a_{n-1}]\|_\infty/\|[a_0,\ldots , a_{n-1}]\|_\infty . \] the test matrix is taken from two real - world problems concerning the erlangian approximation of a markovian fluid queue . the block - size of is usually very large since a bigger leads to a better erlangian approximation , while the size of the blocks is equal to 2 for both problems . we show the performance in terms of accuracy of the algorithm based on the epsilon - circulant matrix . in table [ tab:1 ] we report the errors generated by algorithm [ alg : ttepsc ] with , applied to the first problem . observe that the errors are much smaller in magnitude than . the component - wise and norm - wise absolute errors range around , while the component - wise relative errors deteriorate as increases ; the norm - wise relative errors increase moderately as increases . this behavior is expected since the use of fft makes the algorithm stable in norm , while component - wise accuracy is not guaranteed . concerning the second problem , we report only the results in the case where scaling is applied . in fact , there is not much difference between the scaled and the unscaled versions since this problem is already well scaled in its original formulation . in figure [ fig : prob2.1 ] we report the errors for the method based on epsilon - circulant matrices . it is interesting to note that the optimal value of is close to 1 and that the component - wise relative error is minimized by values of greater than 1 . this fact , which may at first seem contradictory , is explained as follows . large values of generate large errors in the lower triangular part , i.e. , the lower triangular part of has large norm . on the other hand , we consider the first block - row of to approximate the matrix exponential of , therefore the errors are not influenced by a large error in the lower triangular part . to conclude , the method based on epsilon - circulant matrices is the fastest one , but the accuracy of the results is lower than that provided by the embedding and the taylor series expansion . however , by applying the averaging technique we can dramatically improve the accuracy of the epsilon - circulant algorithm . the algorithms based on embedding and on epsilon - circulant matrices are faster than the one based on taylor series with fft matrix arithmetic ; moreover , they are better suited for a parallel implementation . a. h. al - mohy , n. j.
higham , the complex step approximation to the fréchet derivative of a matrix function , numer . algorithms 53 ( 1 ) ( 2010 ) 113 - 148 . http://dx.doi.org/10.1007/s11075-009-9323-y
m. benzi , p. boito , decay properties for functions of matrices over c * - algebras , linear algebra appl . 456 ( 2014 ) 174 - 198 . http://dx.doi.org/10.1016/j.laa.2013.11.027
m. benzi , p. boito , n. razouk , decay properties of spectral projectors with applications to electronic structure , siam rev . 55 ( 1 ) ( 2013 ) 3 - 64 . http://dx.doi.org/10.1137/100814019
d. bini , parallel solution of certain toeplitz linear systems , siam j. comput . 13 ( 2 ) ( 1984 ) 268 - 276 . http://dx.doi.org/10.1137/0213019
d. a. bini , g. latouche , b. meini , numerical methods for structured markov chains , numerical mathematics and scientific computation , oxford university press , new york , 2005 . http://dx.doi.org/10.1093/acprof:oso/9780198527688.001.0001
n. j. higham , the scaling and squaring method for the matrix exponential revisited , siam rev . 51 ( 4 ) ( 2009 ) 747 - 764 . http://dx.doi.org/10.1137/090768539
i. najfeld , t. f. havel , derivatives of the matrix exponential and their computation , adv . in appl . math . 16 ( 3 ) ( 1995 ) 321 - 375 . http://dx.doi.org/10.1006/aama.1995.1017
m. shao , w. gao , j. xue , aggressively truncated taylor series method for accurate computation of exponentials of essentially nonnegative matrices , siam j. matrix anal . appl . 35 ( 2 ) ( 2014 ) 317 - 338 . http://dx.doi.org/10.1137/120894294
d. stanford , f. avram , a. badescu , l. breuer , a. da silva soares , g. latouche , phase - type approximations to finite - time ruin probabilities in the sparre andersen and stationary renewal risk models , astin bulletin 35 ( 2005 ) 131 - 144 .
j. xue , q. ye , computing exponentials of essentially non - negative matrices entrywise to high relative accuracy , math . comp . 82 ( 283 ) ( 2013 ) 1577 - 1596 . http://dx.doi.org/10.1090/s0025-5718-2013-02677-4
j. xue , q. ye , entrywise relative perturbation bounds for exponentials of essentially non - negative matrices , numer . math . 110 ( 3 ) ( 2008 ) 393 - 403 . http://dx.doi.org/10.1007/s00211-008-0167-5
the erlangian approximation of markovian fluid queues leads to the problem of computing the matrix exponential of a subgenerator having a block - triangular , block - toeplitz structure . to this end , we propose some algorithms which exploit the toeplitz structure and the properties of generators . such algorithms allow one to compute the exponential of very large matrices , which would otherwise be intractable with standard methods . we also prove interesting decay properties of the exponential of a generator having a block - triangular , block - toeplitz structure . * keywords * matrix exponential , toeplitz matrix , circulant matrix , markov generator , fluid queue , erlang approximation .
many results in quantum information theory require the generation of specific quantum states , such as epr pairs , or the implementation of specific quantum measurements , such as a von neumann measurement in a fourier transformed basis . some states and measurements can be efficiently implemented using standard quantum computational primitives such as preparing a qubit in the state and applying a sequence of quantum gates ( from a finite set ) .epr pairs can be prepared from the state using a hadamard gate and a controlled - not gate .a von neumann measurement in the fourier basis can be efficiently realized by applying an inverse quantum fourier transform and performing a von neumann measurement in the standard computational basis , i.e. .however many states and basis changes can not be efficiently realized .this paper focusses on the generation of quantum states .for example , in , their improved frequency standard experiment requires the preparation of specific symmetric states on qubits , where is a parameter ( number of ions ) .the algorithm we describe here will efficiently prepare the required symmetric state .this short paper will focus on the algorithm for generating the state , and will ignore issues related to errors and decoherence ( for which the theory of fault - tolerant error - correction , or other stabilization methods , will apply ) .we do not have space to elaborate on the details of precision , but simple calculations that require extra space and elementary operations allow us to generate any state with fidelity at least .suppose we want to generate the state . in practiceit suffices to generate a state satisfying for a given small real number . in this case , it suffices to approximate each and to accuracy ( i.e. bits of accuracy ) .here we have factored out the phase in each term , and so the are all non - negative real values .note that if we can prepare the state , then we can approximate arbitrarily well by introducing appropriate phase factors using methods discussed in .we will therefore focus on a method for generating states with non - negative real amplitudes .in order to create the -qubit state , we will implement in sequence controlled rotations , with the rotation controlled by the state of the previous qubits for .we will first define these controlled rotations , and then in the next section we will describe how we would implement them . 
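the controlled rotations are defined formally in the next paragraph . as a hedged classical sketch of the quantities that drive them , the following python helper ( a hypothetical name , not taken from the paper ) computes , for a target state with nonnegative real amplitudes , the probability that a given qubit is 1 conditioned on each possible outcome of the preceding qubits ; the rotation applied to that qubit then maps 0 to the superposition whose amplitudes are the square roots of these conditional probabilities .

```python
import numpy as np

def conditional_probabilities(alpha):
    """
    alpha: length-2**n array of nonnegative real amplitudes (normalised),
    with qubit 1 taken as the most significant bit of the basis-state index.
    returns a list `conds`; conds[k][x] is the probability that qubit k+1
    is 1, conditioned on the first k qubits being the bit string x.
    """
    n = int(np.log2(len(alpha)))
    probs = np.abs(alpha) ** 2
    conds = []
    for k in range(n):
        table = {}
        blocks = probs.reshape(2 ** k, -1)          # rows indexed by the first k bits
        for x in range(2 ** k):
            total = blocks[x].sum()
            ones = blocks[x].reshape(2, -1)[1].sum()   # mass with qubit k+1 equal to 1
            table[x] = ones / total if total > 0 else 0.0
        conds.append(table)
    return conds

# symmetric state of hamming weight 1 on 3 qubits: (|001> + |010> + |100>)/sqrt(3)
alpha = np.zeros(8)
alpha[[1, 2, 4]] = 1 / np.sqrt(3)
for k, table in enumerate(conditional_probabilities(alpha)):
    print("qubit", k + 1, table)
```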
we will extend the definition of to for . suppose we had a copy of , and we measured the leftmost qubits in the computational basis . let be the non - negative real number so that equals the probability that the measurement result is . then gives the conditional probability that the qubit is , conditioned on the state of the first qubits being . define a controlled rotation by : as shown in figure [ genpsi.fig ] , the algorithm for generating the -qubit state is a sequence of such controlled rotations . it is easy to show by induction that after the first controlled rotations are applied we have produced the state , and therefore after all controlled rotations we have . in this section we show how to implement the controlled - with arbitrary precision . first assume we have a quantum register which encodes some `` classical '' description of the state . the state must contain enough information to allow the probabilities ( or a related quantity , such as the we define below ) to be computed . we also use an ancilla register of qubits initialized to the state . then we define operators for each as follows : where satisfies . a simple application of the techniques in allows us to approximate ( arbitrarily well ) the transformation : also , define . with these components , a network implementing is shown in figure [ u_k.fig ] . here we assumed that the algorithm works for a general family of states with classical descriptions . if we are only interested in producing a specific state , the network can be simplified by removing the register containing and simplifying each to work only for that specific ( in the same way that one can simplify a circuit for adding variable inputs and to one that adds a fixed input to a variable input ) . the overall efficiency of our algorithm depends on how efficiently we can implement ; in other words , how efficiently we can compute the conditional probabilities , or equivalently . one example for which this is easy is the symmetric states . the symmetric state is defined to be an equally - weighted superposition of the computational basis states that have hamming weight ( is the number of bits of that equal 1 ) . that is , . the conditional probability is easily computed to be for , and to be for . the hamming weight can be efficiently computed as shown in . then we simply need to reversibly compute the satisfying + . this technique will not allow us to efficiently generate all quantum states , but it will work for any family of states where , for some reordering of the qubits , we can efficiently compute the conditional probabilities . richard cleve , artur ekert , chiara macchiavello , michele mosca , `` quantum algorithms revisited '' , proceedings of the royal society of london a , 454 , 339 - 354 , 1998 . on the quant - ph archive , report no . huelga , s.f . , macchiavello , c. , pellizzari , t. , ekert , a. , plenio , m.b . and cirac , j.i . , 1997 , e - print quant - ph/9707014 . p. kaye , m. mosca , `` quantum networks for concentrating entanglement '' , 2001 , on the quant - ph archive , report no . 0101009 .
quantum protocols often require the generation of specific quantum states . we describe a quantum algorithm for generating any prescribed quantum state . for an important subclass of states , including pure symmetric states , this algorithm is efficient .
a fundamental ingredient of almost every biological process is molecular recognition , which is widely observed in interaction systems like protein - protein , receptor - ligand , antigen - antibody , dna - protein , rna - ribosome , etc . understanding the basis for molecular recognition requires the full characterization of the binding process , which involves noncovalent bonding effects and biomolecular thermodynamics . among all the factors , biomolecular conformation energy is of great importance . it is found that changes in protein conformational entropy can contribute significantly to the free energy of protein - ligand association . the internal dynamics of the protein calmodulin varies significantly on binding a variety of target domains , and the apparent change in the corresponding conformational entropy is linearly related to the change in the overall binding entropy . also , the conformational entropy of protein side - chains is a major effect in the energetics of folding . the entropy of heterogeneous random coil or denatured proteins is significantly higher than that of the folded native state tertiary structure . in particular , the conformational entropy of the amino acid side chains in a protein is thought to be a major contributor to the energetic stabilization of the denatured state and thus a barrier to protein folding . the reduction in the number of accessible main - chain and side - chain conformations when a protein folds into a compact globule yields an unfavourable entropic effect . this reduction in conformational entropy counters the hydrophobic effect favoring the folded state and in part explains the marginal stability of most globular proteins . even though biomolecular conformation energy is a key property for understanding a wide variety of physical , chemical , and biochemical phenomena , its evaluation or calculation is very challenging both experimentally and theoretically . only recently have nmr relaxation methods for characterizing thermal motions on the picosecond - nanosecond ( ps - ns ) timescale been developed , and the resulting order parameters are used as a proxy for conformational entropy evaluation . atomic force microscopy ( afm ) unfolding has great potential for measuring the backbone conformational entropy of protein folding . neutron spectroscopy is also used to elucidate the role of conformational entropy upon thermal unfolding by observing the picosecond motions , which are dominated by side - chain reorientation and segmental movements of flexible polypeptide backbone regions . the computational estimation of conformation energy is a long - standing problem and one of the challenges in computational chemistry . generally , the calculation of conformational entropy requires a full understanding of biomolecular configuration spaces . various simulation techniques , including harmonic analysis , molecular dynamics ( md ) , monte carlo simulation , normal mode analysis and so on , are all employed to explore the configuration spaces . traditionally , the simulation data is processed by quasiharmonic analysis . more specifically , a covariance matrix for the biomolecular conformations can be defined and analyzed with principal component analysis ( pca ) , and conformation entropies are then evaluated from the eigenvalue and eigenvector information from pca . however , for macromolecules , quasiharmonic analysis can be computationally demanding , and the use of the cartesian coordinate representation makes it very inefficient for characterizing bond rotations .
moreover , it is found that , for biomolecular systems with multiple occupied energy wells , the quasiharmonic approximation tends to overestimate their configurational entropy , and with cartesian coordinates the errors tend to be magnified . characterization of biomolecular conformations by structural parameters has been proposed to estimate the conformational entropy . in particular , backbone dihedral angles and side - chain rotamers are widely used structural measurements for protein conformational entropy evaluation . the entropy value depends on the probability of occupancy of the particular structure . under the assumption that amino acid distributions in the native state of proteins are comparable to those found in the denatured state , stites and pranata proposed a way to evaluate the relative conformation entropies of different amino acids . they analyzed the preferred distribution of amino acid residues by systematically studying about twelve thousand residues from sixty - one nonhomologous , high - resolution protein crystal structures . a ramachandran plot of angles for various amino acid residues is obtained , and the conformation entropies are evaluated through a discretization of the angle distribution with a uniform grid size . the dihedral - angle and side - chain - rotamer based parameterization has been widely used in biomolecular conformation entropy estimation . the above - mentioned dihedral - angle based entropy evaluation method usually involves a discretization of angle distributions , and it is found that the calculated entropy values are sensitive to the grid size . when the grid is very fine or dense , angles with similar values can be classified into different categories ; in contrast , when the grid is very coarse , even angles with large variation can still be classified into the same category . therefore , dramatically different entropy values can be obtained from the same data under different discretizations . an example can be found in figure [ fig : entropy_problem ] . however , it has also been pointed out that correlation coefficients between entropies computed from different meshes are very high , suggesting that mesh - related bias does not systematically alter relative entropy values . but this consistency is highly related to the studied systems . simply speaking , for each ramachandran plot only the same type of amino acid is considered , so the angles are highly concentrated in a particular area instead of scattering over a wide range . researchers have realized that the lack of a robust classification poses a challenge to a rigorous estimation of the entropies . recently , a k - means clustering method was proposed to deliver an optimized discrete k - state classification model . in this method , the distribution of torsional angles is naturally classified into clusters with irregular boundaries . in this way it achieves an optimized classification , and a high silhouette value , indicating very good clustering quality , is obtained . in this paper , we propose a multiscale persistent function for biomolecular structure characterization and conformation entropy evaluation . our multiscale persistent function is based on two major methods , i.e.
, persistent homology and the flexibility rigidity index ( fri ) . persistent homology is deeply rooted in algebraic topology but has great potential in the simplification of complex data . unlike traditional topological methods , persistent homology provides a multiscale topological representation . it is able to embed a geometric measurement into topological invariants and provides a bridge between geometry and topology . the essence of persistent homology is filtration . by the consistent variation of a certain filtration parameter , a series of topological spaces at various scales is generated . during the filtration , topological invariants , i.e. , betti numbers , are generated and then continue , or persist , for some time . this life length , or persistence time , gives a relative geometric measurement of the associated topological properties . persistent homology has found great success in topological characterization , identification and analysis ( cia ) and has been used in a variety of fields , including shape recognition , network structure , image analysis , data analysis , chaotic dynamics verification , computer vision and computational biology . recently , we have introduced persistent homology for structure characterization and mathematical modeling of fullerene molecules , proteins and other biomolecules . consistent barcode patterns are extracted and defined as molecular topological fingerprints ( mtf ) , which are used in the analysis of protein flexibility and protein folding . we have also developed multiresolution and multidimensional persistence . the flexibility rigidity index was originally proposed to study the flexibility and dynamic modes of biomolecules , particularly for the prediction of debye - waller factors or b - factors . the basic assumption of the fri method is that the biomolecular structure is the equilibrium state determined by interactions within the structure . simply speaking , if two atoms or residues are close to each other , the `` general interaction '' between them will be relatively strong ; if they are far from each other , the `` general interaction '' will be relatively weak . in this way , a distance - based monotonically decreasing function is chosen to describe the particle - particle interactions . here a particle can be an atom , a residue or a certain domain . a rigidity index is defined on each particle as the summation of all interactions between this particular particle and all the others , and the flexibility index , defined as the reciprocal of the rigidity index , is found to be linearly proportional to the experimental b - factors . more interestingly , our flexibility index can also be viewed as the closeness centrality in graph theory . the physical insight is that atoms located in the center tend to have more neighbors and are thus more rigid , while atoms located far from the center have fewer neighbors and are thus less rigid and more flexible . a rigidity function can be constructed to simulate the biomolecular density . the essential idea of our multiscale persistent functions is to combine persistent homology analysis with our multiscale rigidity function , so as to construct a series of multiscale persistent functions , particularly multiscale persistent entropies , for structure characterization .
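as a hedged illustration , the following python sketch computes a rigidity and flexibility index with a generalised exponential kernel ; the kernel form , the parameter values and the random coordinates are assumptions made for this example only .

```python
import numpy as np

def fri(coords, eta=3.0, kappa=2.0):
    """flexibility-rigidity index with kernel exp(-(r/eta)**kappa);
    eta plays the role of the resolution (scale) parameter."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    corr = np.exp(-(r / eta) ** kappa)     # pairwise "general interaction"
    np.fill_diagonal(corr, 0.0)            # no self-interaction
    mu = corr.sum(axis=1)                  # rigidity index of each particle
    return mu, 1.0 / mu                    # flexibility index = reciprocal rigidity

coords = np.random.default_rng(0).normal(size=(60, 3)) * 5.0   # fake coordinates
for eta in (1.0, 3.0, 10.0):               # three resolutions
    mu, flex = fri(coords, eta=eta)
    print(eta, round(float(mu.min()), 3), round(float(mu.max()), 3))
```

the persistent entropy used later can likewise be sketched directly from a barcode , with the probabilities taken as normalised bar lengths ; the toy barcodes below are made up .

```python
def persistent_entropy(barcode):
    """barcode: list of (birth, death) pairs; entropy of normalised bar lengths."""
    lengths = np.array([d - b for b, d in barcode], dtype=float)
    p = lengths / lengths.sum()
    return float(-(p * np.log(p)).sum())

print(persistent_entropy([(0, 10.0), (0, 0.5), (0, 0.5)]))   # one dominant bar: low entropy
print(persistent_entropy([(0, 1.0)] * 4), np.log(4))         # equal bars: maximal, log(#bars)
```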
unlike the previous persistent entropy or topological entropy, we incorporate a resolution parameter into our rigidity function and delivers a topology based multiscale entropy model .based on the density filtration , our method incorporates in it a classification scheme .the classification results are reflected in barcodes , and are further used in persistent entropy calculation .we have systematically compared our entropy model with the traditional model with fixed grid sizes and found that similar results can only be obtained in two extreme situations , i.e. , the grid size and resolution parameter value are very small or very large . in the middle range ,different results are obtained as our classification is different .our method is tested for protein structure classification .we found that it is able to deliver a very nice classification of all - alpha , all - beta and mixed alpha and beta proteins .the all - alpha and all - beta proteins are dramatically different and can be clearly separated by their conformation entropies .the mixed alpha and beta proteins have both alpha - helix and beta - sheet secondary structures .their entropies are largely distributed between the border of all - alpha and all - beta entropies .the results are comparable with our previous ones from machine learning method .more interestingly , based on our multiscale persistent entropy , a protein structure index ( psi ) is proposed to describe the regularity " of protein structures .the essential idea of psi is to evaluate disorderliness in the angle distributions .simply speaking , for a highly regular " structure element like alpha - helix , its dihedral and bond angles are very consistent thus contribute very little to the total entropy . loops and intrinsically disorder regions are extremely irregular " in terms of dihedral and bond angles and tend to contribute a large weight in the total entropy . in this way , through the entropy evaluation , psi is able to provide a new way of structure regularity characterization .essentially , psi can be used to describe the regularity information in any systems .the paper is organized as following , section [ sec : theory ] is devoted to method introduction .we give a brief introduction of conformation entropy in section [ sec : conformation_entropy ] . in this section , the discrete representation of protein structure and angle - based conformation entropy evaluationare discussed .multiscale rigidity function is introduced in section [ sec : multiscale_rigidity_function ] .rigidity function is essentially a continuous version of rigidity index , and our multiscale representation is achieved through tuning a resolution parameter . the persistent homology analysis ,is reviewed in section [ sec : pha ] , which includes the basic theory of simplicial complex , filtration , complex construction and persistence . 
section [ sec : mph ] is devoted to the multiscale persistent homology and multiscale persistent functions . we propose a density - based filtration over the multiscale rigidity function and define various functions on the persistent barcodes . the multiscale persistent entropy is discussed in great detail . finally , the application of our model can be found in section [ sec : application ] . a classification of various protein structures , including all - alpha , all - beta , and mixed alpha and beta , is studied in section [ sec : classification ] . section [ sec : protein_index ] is devoted to the description of the protein structure index ( psi ) . the paper ends with a conclusion . entropy is proposed for the characterization of the disorderliness of a system . it measures the freedom of a system to evolve into various potential configurations . entropy is a key property for understanding a wide variety of physical , chemical , and biochemical phenomena . it plays very important roles in various biomolecular structures and interactions including protein folding , protein - protein interaction , protein - ligand binding , chromosome configuration , dna translation and transcription , etc . for instance , the folding of a single peptide chain into a well - defined native structure is greatly facilitated by the reduction of its conformation entropy . therefore , conformation entropy calculation is of essential importance in computational chemistry , biophysics and biochemistry . the evaluation of the entropy necessitates a characterization of biomolecular configuration spaces . due to the limitations of cartesian - grid based representations , structural parameters , particularly dihedral angles and bond angles , are usually employed for structure description . their distributions can be derived from various computational methods , including molecular dynamics , monte carlo simulation , normal mode analysis , etc . various microstates can be obtained by the discretization of the angle distribution , usually through an equally spaced grid , and the conformation entropy can be evaluated by using the classic shannon entropy form . even though it is suggested that the relative entropy is consistent in this procedure , the lack of a robust classification still poses a challenge to a rigorous estimation of the entropies . in this section , multiscale persistent analysis and multiscale persistent functions for biomolecular structure characterization and conformation entropy evaluation are introduced . this methodology is founded on two major methods , i.e. , persistent homology and the flexibility rigidity index . full details will be given in the following sections , but before that , a brief review of conformation entropy is given first . in physics , the second law of thermodynamics states that the total entropy of an isolated system always increases . this law also leads to the clausius thermodynamic definition of entropy as follows : here is the amount of heat absorbed by the system . for a real biomolecular system , this definition of entropy is too abstract and hard to implement . normally , a direct calculation of entropy is based on the boltzmann - gibbs - planck ( bgp ) expression , where is the boltzmann constant and is the probability distribution function . if the potential function of a biomolecule can be modeled as , the probability function will be linearly proportional to , i.e. , .
however , eq . [ eq : entropy_integration ] brings challenges in practical biomolecular simulation . this is largely due to the enormous complexity of the conformation space and potential function of biomolecules . to be able to evaluate the entropy , researchers adopt the fundamental assumption that the entropy can be divided into several components . in this way , the protein folding and binding entropy can be divided into a protein contribution and a solvent contribution . further , the protein contribution can be subdivided into conformation entropy , rotational entropy and translational entropy . [ tab : protein_index ] in this paper , we discuss our multiscale persistent homology analysis , particularly multiscale persistent functions . the multiscale persistent homology analysis is based on two methods , i.e. , the multiscale rigidity function and persistent homology . the rigidity function is essentially a continuous version of the rigidity index . it incorporates multiscale information by tuning a resolution parameter in the kernel functions . further , a multiscale barcode representation of the data can be achieved by a density filtration over these multiscale rigidity functions . multiscale persistent functions can then be defined on these barcode spaces . we discuss one particular function , i.e. , the persistent entropy , and illustrate its applications in protein structure classification and protein structure index representation . there are several distinguishing characteristics of our multiscale persistent entropy . firstly , we naturally divide the data into several groups . this subdivision is based on the general topological features of the elements , i.e. , ( , ) angle points . secondly , the classification information is embedded into our barcodes . essentially , each bar corresponds to a cluster in the data and the bar length represents the size of the cluster . it needs to be pointed out that our barcode - based clustering is comparable with other methods , such as k - means clustering , spectral graph theory , etc . in this sense , the topological entropy obtained with some special kernels and scales can be similar to the results from the other clustering methods . however , topological entropies defined from high - dimensional barcodes are dramatically different from all the previous methods . it should be noticed that topological invariants are able to describe global structure information , and only persistent homology is able to characterize topological invariants with a geometric measurement , i.e. , persistence . in this way , high - dimensional barcode based topological entropies will be able to provide more interesting intrinsic topological information about the structure . this will be a topic of our future research . this work was supported in part by ntu sug m4081842 . the author zhiming li thanks the chinese scholarship council for the financial support no . 201506775038 . r. baron , p. h. hunenberger , and j. a. mccammon . absolute single - molecule entropies from quasi - harmonic analysis of microsecond molecular dynamics : correction terms and convergence properties . , 5 ( 12 ) : 3150 - 3160 , 2009 . a. t. hagler , p. s. stern , r. sharon , j. m. becker , and f. naider . computer simulation of the conformational properties of oligopeptides . comparison of theoretical methods and analysis of experimental results . , 101 ( 23 ) : 6842 - 6852 , 1979 . a. korkut and w. a. hendrickson . stereochemistry of polypeptide conformation in coarse grained analysis .
in _biomolecular forms and functions : a celebration of 50 years of the ramachandran map _ , pages 136147 .world scientific publishing , 2013 .y. yao , j. sun , x. h. huang , g. r. bowman , g. singh , m. lesnick , l. j. guibas , v. s. pande , and g. carlsson .topological methods for exploring low - density states in biomolecular folding pathways . , 130:144115 , 2009 .
in this paper , we introduce multiscale persistent functions for biomolecular structure characterization . the essential idea is to combine our multiscale rigidity functions with persistent homology analysis , so as to construct a series of multiscale persistent functions , particularly multiscale persistent entropies , for structure characterization . to clarify the fundamental idea of our method , the multiscale persistent entropy model is discussed in great detail . mathematically , unlike the previous persistent entropy or topological entropy , a special resolution parameter is incorporated into our model . various scales can be achieved by tuning its value . physically , our multiscale persistent entropy can be used in conformation entropy evaluation . more specifically , it is found that our method incorporates in it a natural classification scheme . this is achieved through a density filtration of a multiscale rigidity function built from bond and/or dihedral angle distributions . to further validate our model , a systematical comparison with the traditional entropy evaluation model is done . it is found that our model is able to preserve the intrinsic topological features of biomolecular data much better than traditional approaches , particularly for resolutions in the mediate range . moreover , our method can be successfully used in protein classification . for a test database with around nine hundred proteins , a clear separation between all - alpha and all - beta proteins can be achieved , using only the dihedral and pseudo - bond angle information . the persistent entropy values of mixed - alpha - and - beta proteins are also found to be in the middle region with just a few cases overlapped with the other two categories . finally , a special protein structure index ( psi ) is proposed , for the first time , to describe the regularity " of protein structures . basically , a protein structure is deemed as regular if it has a consistent , uniformed , and orderly configuration . our psi model is tested on a database of 110 proteins , we find that structures with larger portions of loops and intrinsically disorder regions are always associated with larger psi , meaning an irregular configuration . while proteins with larger portions of secondary structures , i.e. , alpha - helix or beta - sheet , have smaller psi . essentially , psi can be used to describe the regularity " information in any systems . key words : conformation entropy , dihedral angle , multiscale persistent function , protein structure , persistent homology , rigidity function .
the application of the algebra of biquaternions to the equations of electromagnetism has been the subject of a large number of research articles and books ( see , e.g. , and many others ) . the aim of this work is to present some recent results in the field concerning the use of the algebraic advantages of biquaternions for the analytic and numerical solution of maxwell 's system for chiral media as well as for inhomogeneous media . compared to the considerable number of publications dealing with biquaternionic reformulations of maxwell 's equations for a vacuum or for a homogeneous isotropic medium , the application of biquaternions to electromagnetic models corresponding to more complicated media ( a much more challenging object of study ) has been discussed in relatively few sources . while the possibility of representing maxwell 's system for a vacuum in the form of a single biquaternionic equation has been known since 1919 , only recently did it become clear how this result can be generalized to inhomogeneous media and to chiral media . an appropriate quaternionic or biquaternionic reformulation of a first - order system of mathematical physics opens the way for applying different methods which in many aspects preserve the algebraic power of complex analysis . for example , it is not easy to arrive at the cauchy integral formula for holomorphic functions using a two - component vector formalism , and it is an even more difficult task to develop a holomorphic function into a taylor series using this formalism . no mathematician would consider such a way of presenting complex function theory helpful or appropriate . however , this is precisely what happens in the study of three - and four - dimensional models of mathematical physics . compare , e.g. , the stratton - chu integrals written in their standard form ( see , e.g. , ) with their biquaternionic representation , which is in fact a convolution of a biquaternionic fundamental solution of the maxwell operator with the electromagnetic field , and it is quite evident that the latter is natural and elucidating . the meaning of the stratton - chu formulas as a cauchy integral formula for the electromagnetic field becomes transparent , and no doubt in their biquaternionic form the stratton - chu formulas could be included in a moderately advanced course of electromagnetic theory , which is not the usual case up to now in spite of their central role in electrical engineering applications . the results presented in this work are essentially quaternionic in the sense that it is not clear how they could be obtained with other techniques . in the first part ( sections 2 - 5 ) we explain a numerical method for solving electromagnetic scattering problems with a precision unusual for three - dimensional models , by the aid of biquaternionic fundamental solutions which , in contrast to the usually utilized matrix fundamental solutions for the maxwell equations ( see , e.g.
, and ) enjoy some advantageous properties .first of all they are not matrices but vectors ( of four components ) .second , they have a clear physical meaning of fields generated by point sources .third , their singularity is lower than that of fundamental solutions based on a matrix approach .the main idea of this work is to explain how our approach works and how it can be used .we only formulate some necessary results like those about the completeness of our systems of quaternionic fundamental solutions in appropriate functional spaces referring the interested reader to some previous publications , in particular where the corresponding proofs can be found . in sections 6 - 8we consider the time - dependent maxwell system for chiral media , rewrite it in a biquaternionic form as a single equation and then construct explicitly a corresponding green function .section 9 is dedicated to the time - dependent maxwell equations for inhomogeneous media .we show that these equations can also be written as a single biquaternionic equation . in a static casethe corresponding quaternionic operator factorizes the stationary schrdinger operator .we study relationship between solutions of these important physical equations .let denote the set of complex quaternions (= biquaternions ) . each element of is represented in the form where , is the unit and are the quaternionic imaginary units : we denote the imaginary unit in by as usual . by definition commutes with , .we will use the vector representation of complex quaternions , every is represented as follows , where is the scalar part of : , and is the vector part of : .complex quaternions of the form are called purely vectorial and can be identified with vectors from . the operator of quaternionic conjugation we denote by : .let us introduce the operator , where , whose action on quaternion valued functions can be represented in a vector form as follows that is , and . denote , where is a complex constant .we have the following factorization of the helmholtz operator : using the fundamental solution of the helmholtz operator ( we suppose that ) , the fundamental solutions and for the operators and can be obtained from ( [ factor ] ) in the following way we have where is the dirac delta function . from ( [ fund ] )we obtain the explicit form of and : here .note that and are full biquaternions with and .more information on the algebra of biquaternions and related calculus can be found in .the operators and are closely related to the maxwell equations . consider the maxwell system for a homogeneous chiral medium ( see , e.g. , ) and where .some examples of numerical values of for physical media can be found , e.g. , in .we notice only that when we obtain the maxwell system for a homogeneous , isotropic achiral medium with the wave number . the vectors and in ( [ m12 ] ) and ( [ m13 ] ) are complex . consider the following purely vectorial biquaternionic functions it is easy to verify ( see , or ) that and satisfy the following equations and where if then and we arrive at the quaternionic form of the maxwell equations in the achiral case ( see ( * ? ? ? * sect .9 ) , ) but in general and are different and physically characterize the propagation of electromagnetic waves of opposing circular polarizations . obviously the vectors and are easily recovered from and : be a sufficiently smooth closed surface in . herewe use the term sufficiently smooth for surfaces whose smoothness allows us to introduce the corresponding sobolev space for a given . 
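before turning to the boundary operators and extendability results , the biquaternionic product used throughout can be made concrete with a few lines of code . the sketch below is a minimal numpy implementation of the multiplication rule p q = p0 q0 - <pv , qv> + p0 qv + q0 pv + [ pv x qv ] for complex quaternions ; the component ordering and the helper names are our own choices and not notation from the text .

```python
import numpy as np

def bq_mult(p, q):
    """product of two complex quaternions (biquaternions).

    p, q : length-4 complex arrays (p0, p1, p2, p3), with p0 the scalar
    part and (p1, p2, p3) the vector part.  the complex unit commutes
    with the quaternionic units, so ordinary complex arithmetic handles
    the components while the quaternionic rule handles non-commutativity.
    """
    p0, pv = p[0], np.asarray(p[1:], dtype=complex)
    q0, qv = q[0], np.asarray(q[1:], dtype=complex)
    scalar = p0 * q0 - np.dot(pv, qv)            # note: plain bilinear dot, no conjugation
    vector = p0 * qv + q0 * pv + np.cross(pv, qv)
    return np.concatenate(([scalar], vector))

def bq_conj(p):
    """quaternionic conjugation: scalar part kept, vector part negated."""
    return np.concatenate(([p[0]], -np.asarray(p[1:], dtype=complex)))

# quick check of the defining relations, e.g. e1*e2 = e3 and e1*e1 = -1
e1 = np.array([0, 1, 0, 0], dtype=complex)
e2 = np.array([0, 0, 1, 0], dtype=complex)
print(bq_mult(e1, e2))   # -> [0, 0, 0, 1]
print(bq_mult(e1, e1))   # -> [-1, 0, 0, 0]
```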
the interior domain enclosed by we denote by and the exterior by .let and be two complex vectors defined on .we say that and are extendable into if there exist such pair of vectors and defined on that equations ( [ m12 ] ) and ( [ m13 ] ) are satisfied in and on we have and .the vectors and are extendable into if there exist such pair of vectors and defined on that equations ( [ m12 ] ) and ( [ m13 ] ) are satisfied in , the silver - mller condition = o(\frac{1}{\left| x\right| } ) \label{sm}%\ ] ] is fulfilled at infinity uniformly for all directions and on : and . with the aid of quaternionic analysis techniques thesetwo introduced classes of vector functions can be completely described . for achiral mediait was done in ( see also ( * ? ? ?11 ) ) and for chiral media in .here we recall these results without proof .we will need the following operators which are bounded in for all real .the function in ( [ sa ] ) is a biquaternion valued function , is the quaternionic representation of the outward with respect to unitary normal on : and all the products in the integrand in ( [ sa ] ) are quaternionic products . from the numerous interesting properties of the operators , and ( see ) we will need only the following fact let complex vectors and belong to , . then 1 . in order for and to be extendable into the following condition is necessary and sufficient or which is the same 2 . in order for and to be extendable into following condition is necessary and sufficient or which is the same now we will show how two systems of quaternionic fundamental solutions suitable for the approximation of the vector functions extendable into or can be constructed .by we denote a closed surface enclosed in and being a boundary of a bounded domain , and by we denote a closed surface enclosing as shown in the figure .[ ptb ] fig1.bmp by we denote a set of points densely distributed on , and by a set of points densely distributed on . for each of these two setswe construct a corresponding pair of systems of quaternionic fundamental solutions .the pair of systems corresponds to and the pair of systems corresponds to .the following theorems show us the possibility to apply the fundamental solutions ( [ syst+ ] ) for the numerical solution of interior boundary value problems for the maxwell equations ( [ m12 ] ) , ( [ m13 ] ) , and fundamental solutions ( [ syst- ] ) for the solution of exterior problems .[ int] let two complex vectors and belong to , , be extendable into and both and be not eigenvalues of the dirichlet problem in .then and can be approximated with an arbitrary precision ( in the norm of ) by right linear combinations of the form and where and are constant complex quaternions .[ ext] let two complex vectors and belong to , , be extendable into and let both and be not eigenvalues of the dirichlet problem in . then and can be approximated with an arbitrary precision ( in the norm of ) by right linear combinations of the form and where and are constant complex quaternions .the right linear combinations in theorem [ int ] and theorem [ ext ] are in general full quaternions . 
in order to ensure that they will be purely vectorial additionally to a usual boundary condition for the electromagnetic field we have to add the requirement that their scalar parts be equal to zero .we show how this can be easily achieved on some examples of numerical realization considered in the next section .consider the exterior boundary value problem for the maxwell equations corresponding to the model of electromagnetic scattering by a perfectly conducting body with a boundary .find two vectors and satisfying ( [ m12 ] ) and ( [ m13 ] ) in , the condition ( [ sm ] ) at infinity and the following boundary condition = \overrightarrow{f}(x),\qquad x\in\gamma , \label{b1}%\ ] ] where is a given tangential field .we look for the solutions in the form ( [ eext ] ) and ( [ hext ] ) , applying the collocation method in order to find the coefficients and .substitution of the vector part of ( [ eext ] ) in ( [ b1 ] ) gives us two linearly independent equations in every collocation point . in each collocation point we must require also that and which gives us other two linearly independent equations . taking into account that in ( [ eext ] ) and ( [ hext ] ) we have unknown complex quantities we need collocation points .after having solved the corresponding system of linear algebraic equations we obtain the coefficients and and consequently the approximate solution of the problem .a good approximation of the boundary condition ( [ b1 ] ) guarantees a good approximation of the electromagnetic field in the domain due to the following estimate ( see ) where stands for the supremum norm in any closed subset of and is a positive constant depending on and .the method was tested , , using different exact solutions .for example , let and consequently .the vectors and where is constant , represent the electromagnetic field of a magnetic dipole situated at the origin ( * ? ? ?they satisfy ( [ m12 ] ) and ( [ m13 ] ) ( for as well as the silver - mller conditions at infinity .let be an ellipsoid described by the equalities . where , .then and give us the solution of the following boundary value problem = \overrightarrow{f}(x),\qquad x\in\gamma\ ] ] where {c}% c_{3}\partial_{2}\theta_{\alpha}(x)-c_{2}\partial_{3}\theta_{\alpha}(x)\\ c_{1}\partial_{3}\theta_{\alpha}(x)-c_{3}\partial_{1}\theta_{\alpha}(x)\\ c_{2}\partial_{1}\theta_{\alpha}(x)-c_{1}\partial_{2}\theta_{\alpha}(x ) \end{array } \right ) \times\overrightarrow{n}(x)\right ] .\ ] ] we give the numerical results for , and in ( [ ellips ] ) .as the auxiliary surface containing points we have chosen an ellipsoid interior with respect to with , and multiplied by . 
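the collocation procedure just described amounts to assembling a dense linear system whose columns correspond to fundamental solutions centred at the points of the auxiliary surface and whose rows correspond to the boundary conditions at the collocation points , and then solving for the coefficients . the sketch below is a drastically simplified , scalar stand - in intended only to show this structure : it fits a combination of helmholtz point sources to scalar boundary data in the least - squares sense , whereas the actual method works with four - component quaternionic fundamental solutions , the vector boundary condition and the extra equations forcing the scalar parts to vanish . all names , and the normalisation of theta , are illustrative assumptions .

```python
import numpy as np

def theta(x, y, alpha):
    """fundamental solution of the helmholtz operator, taken here as
    theta(x - y) = -exp(i*alpha*|x - y|) / (4*pi*|x - y|); the sign
    convention is irrelevant for the least-squares fit."""
    r = np.linalg.norm(x - y)
    return -np.exp(1j * alpha * r) / (4.0 * np.pi * r)

def collocation_solve(colloc_pts, source_pts, alpha, f):
    """least-squares fit of a combination of point-source solutions to
    boundary data f evaluated at the collocation points.

    colloc_pts : (n, 3) points on the boundary surface
    source_pts : (m, 3) points on an auxiliary surface inside the body
    f          : callable returning the (scalar) boundary data at a point
    returns the coefficient vector of the linear combination.
    """
    A = np.array([[theta(x, y, alpha) for y in source_pts]
                  for x in colloc_pts], dtype=complex)
    b = np.array([f(x) for x in colloc_pts], dtype=complex)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

in the real computation each collocation point contributes several rows ( the tangential boundary condition plus the zero - scalar - part requirements ) , and the resulting square or overdetermined system is solved exactly as in the test case whose results are reported in the table that follows .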
in the following table we present the results for and for different values of corresponding errors represent the absolute maximum difference between the exact and the approximate solutions at the points on the ellipsoid exterior with respect to with , and multiplied by .[ c]||c|c|c|| & error for & error for + 10 & 0.441e-03 & 0.332e-03 + 15 & 0.693e-05 & 0.713e-05 + 20 & 0.162e-05 & 0.186e-05 + 25 & 0.245e-06 & 0.248e-06 + 30 & 0.113e-06 & 0.171e-06 + 35 & 0.522e-07 & 0.409e-07 + a quite fast convergence of the method can be appreciated ( all numerical results were obtained on a pc pentium 4 ) .let us notice that the approximation by linear combinations of quaternionic fundamental solutions can be applied to other classes of boundary value problems for the maxwell system like for example the impedance problem with the boundary condition -\xi\left [ \left [ \overrightarrow{h}(x)\times\overrightarrow{n}(x)\right ] \times\overrightarrow{n}(x)\right ] = \overrightarrow{f}(x),\qquad x\in\gamma.\ ] ] this implies some obvious changes in the matrix of coefficients of the system of linear algebraic equations corresponding to collocation points .more results and analysis of numerical experiments were given in .consider time - dependent maxwell s equations with the drude - born - fedorov constitutive relations corresponding to the chiral media ( see , e.g. , , , ) : where is the chirality measure of the medium . are real scalars assumed to be constants .note that the charge density and the current density are related by the continuity equation .incorporating the constitutive relations ( [ dbf1 ] ) , ( [ dbf2 ] ) into the system ( [ rot1])-([div ] ) we arrive at the time - dependent maxwell system for a homogeneous chiral medium application of to ( [ max1 ] ) and ( [ max2 ] ) allows us to separate the equations for and and to obtain in this way the wave equations for a chiral medium it should be noted that when , ( [ wave1 ] ) and ( [ wave2 ] ) reduce to the wave equations for non - chiral media but in general to the difference of the usual non - chiral wave equations their chiral generalizations represent equations of fourth order .in this section following we rewrite the field equations from section [ maxw ] in a biquaternionic form .let us introduce the following biquaternionic operator and consider the purely vectorial biquaternionic function the equation is equivalent to the maxwell system ( [ max1])-([max3 ] ) , the vectors and are solutions of ( [ max1])-([max3 ] ) if and only if the purely vectorial biquaternionic function defined by ( [ v ] ) is a solution of ( [ amax ] ) .the scalar and the vector parts of ( [ amax ] ) have the form the real part of ( [ vec ] ) coincides with ( [ max1 ] ) and the imaginary part coincides with ( [ max2 ] ) .applying divergence to the equation ( [ vec ] ) and using the continuity equation gives us taking into account these two equalities we obtain from ( [ sc ] ) that the vectors and satisfy equations ( [ max3 ] ) .it should be noted that for from ( [ a ] ) we obtain the biquaternionic maxwell operator for a homogeneous achiral medium for which the following equality is valid in the case under consideration ( ) we obtain a similar result .let us denote by the complex conjugate operator of : for simplicity we consider now a sourceless situation . in this case the equations ( [ wave1 ] ) and ( [ wave2 ] )are homogeneous and can be represented as follows where for or for .here we present a procedure from which gives us a green function for the operator . 
consider the equation applying the fourier transform with respect to the time - variable we obtain where the last equationcan be rewritten as follows where the fundamental solution of is given by ( [ fund_d ] ) , so we have from where \frac{e^{i\left\vert x\right\vert \frac{\sqrt{\varepsilon\mu}\omega}{\beta\sqrt{\varepsilon\mu}\omega-1}}}% { 4\pi\left\vert x\right\vert } .\ ] ] we write it in a more convenient form where , , in order to obtain the fundamental solution we should apply the inverse fourier transform to . among different regularizations of the resulting integralwe should choose the one leading to a fundamental solution satisfying the causality principle , that is vanishing for .such an election is done by introducing a small parameter in the following way where .this regularization is in agreement with the condition .we have where .expression ( [ fund1 ] ) includes two integrals of the form where .we have where the change of order of integration and summation is possible because the two necessary for this conditions are fulfilled : the series is uniformly convergent on each segment and the integrals of partial sums converge uniformly with respect to .denote for and we obtain ( see , e.g. , ( * ? ? ?8.7 ) ) where is the heaviside function . for all other cases , that is for and and for and have that and the integrand in ( [ ik ] ) has a pole at the point of order . using a result from the residue theory ( * ? ? ?4.3 ) we obtain consider and for we have that is equal to the sum of residues with respect to singularities in the lower half - plane which is zero because the integrand is analytic there .thus we obtain substitution of this result into ( [ ik ] ) gives us now using the series representations of the bessel functions and ( see e.g. ( * ? ? ?* chapter 5 ) ) we obtain substituting these expressions in ( [ fund1 ] ) and then in ( [ f ] ) we arrive at the following expression for : finally we rewrite the obtained fundamental solution of the operator in explicit form: let us notice that fulfills the causality principle requirement which guarantees that its convolution with the function from the right - hand side of ( [ amax ] ) gives us the unique physically meaningful solution of the inhomogeneous maxwell system ( [ max1])-([max3 ] ) in a whole space .consider maxwell s equations in a nonchiral inhomogeneous medium .thus we assume that and are functions of coordinates: then the maxwell system has the following form in this section following the procedure exposed in we show that this system of equations can be written in the form of a single biquaternionic equation . equations ( [ min3 ] ) and ( [ min4 ] ) can be written as follows and combining these equations with ( [ min1 ] ) and ( [ min2 ] ) we obtain the maxwell system in the form and let us make a simple observation : the scalar product of two vectors and can be written as follows using this fact , from ( [ min11 ] ) and ( [ min12 ] ) we obtain the pair of equations and note that then ( [ min21 ] ) can be rewritten in the following form where analogously , ( [ min22 ] ) takes the form where introducing the notations multiplying ( [ min31 ] ) by and ( [ min32 ] ) by we arrive at the equations and where as before is the speed of propagation of electromagnetic waves in the medium. 
equations ( [ minq1 ] ) and ( [ minq2 ] ) can be rewritten in an even more elegant form .consider the function let us apply to it the biquaternionic operator we obtain applying ( [ minq2 ] ) and ( [ minq1 ] ) to the real and imaginary parts of this equation gives note that hence let us notice that where is the intrinsic wave impedance of the medium .denote then from ( [ vsp4111 ] ) we obtain the maxwell equations for an inhomogeneous medium in the following form this equation first obtained in , is completely equivalent to the maxwell system ( [ min1])-([min4 ] ) and represents maxwell s equation for inhomogeneous media in a quaternionic form .we formulate this as the following statement .let , and be defined by ( [ definitionofcandw ] ) .then two real - valued vectors and are solutions of the system ( [ min1])-([min4 ] ) iff the purely vectorial biquaternion is a solution of ( [ maxmain ] ) . note that if and are constant ( a homogeneous medium ) , equation ( [ maxmain ] ) turns into a well known ( at least since the work of c. lanczos ) biquaternionic reformulation of the maxwell system in a vacuum which was rediscovered by many researchers ( e.g. , and comments in ) . equation ( [ maxmain ] ) can be considered as a generalization of the vekua equation , well known in complex analysis , that describes generalized analytic functions .recently in using the l. bers approach another generalization of the vekua equation was considered .most likely some of the interesting results discussed in can be obtained for ( [ maxmain ] ) also .their physical meaning would be of great interest .let us consider the sourceless situation , that is we are interested in the solutions for the operator , where the complex quaternion represents or and has the form the scalar function is different from zero .note that the study of the operator practically reduces to that of , as shown in . in the case of the operator ( which can be called the static maxwell operator )the situation is quite different .the factorization ( [ factschr3 ] ) was obtained in , in a form which required a solution of an associated biquaternionic riccati equation . in was shown that the solution has necessarily the form with being a solution of ( [ schr3 ] ) .[ remfromschrtofirst]as in ( [ factschr3 ] ) is a scalar function , the factorization of the schrdinger operator can be written in the following form from which it is obvious that if is a solution of ( [ schr3 ] ) then the vector is a solution of the equation the inverse result is given by the next statement where the following notation is used(x , y , z)=% % tcimacro{\dint \limits_{x_{0}}^{x}}% % beginexpansion { \displaystyle\int\limits_{x_{0}}^{x } } % endexpansion g_{1}(\xi , y_{0},z_{0})d\xi+% % tcimacro{\dint \limits_{y_{0}}^{y}}% % beginexpansion { \displaystyle\int\limits_{y_{0}}^{y } } % endexpansion g_{2}(x,\zeta , z_{0})d\zeta+% % tcimacro{\dint \limits_{z_{0}}^{z}}% % beginexpansion { \displaystyle\int\limits_{z_{0}}^{z } } % endexpansion g_{3}(x , y,\eta)d\eta+c\ ] ] ( is an arbitrary complex constant ) . let be a nonvanishing particular solution of the equation with , and being complex valued functions , and in .then for any scalar function the following equality holds where . 
let be a solution of equation ( [ d+mdf ] ) in a simply connected domain where and be a nonvanishing particular solution of ( [ maineq3 ] ) .then \ ] ] is a solution of ( [ maineq3 ] ) .notice that due to the fact that in ( [ mainfact3 ] ) is scalar , we can rewrite the equality in the form now , consider the equation where is an -valued function . equation ( [ vekuamain3 ] ) is a direct generalization of the main vekua equation considered in .moreover , we show that it preserves some of its important properties . let be a solution of ( [ vekuamain3 ] ) .then is a solution of the stationary schrdinger equation where .moreover , the function is a solution of the equation and the vector function is a solution of the equation observe that the functions give us a generating quartet for the equation ( [ vekuamain3 ] ) .they are solutions of ( [ vekuamain3 ] ) and obviously any -valued function can be represented in the form where are complex valued functions .it is easy to verify that the function is a solution of ( [ vekuamain3 ] ) iff in a complete analogy with the two - dimensional case .denote then ( [ vekuamain3second ] ) can be written as follows which is equivalent to the equation the results of this section remain valid in the -dimensional situation if instead of quaternions the clifford algebra ( see , e.g. , , ) is considered . the operator is then introduced as follows where are the basic basis elements of the clifford algebra .khmelnytskaya k v , kravchenko v v and rabinovich v s 2002 quaternionic fundamental solutions for numerical analysis of boundary value problems for the maxwell equations ._ _ proceedings of the conference on computational and mathematical methods on science and engineering cmmse-2002 alicante , spain , 20 - 25 sept . 2002 , vol .ii , 193 - 201 .khmelnytskaya k v , kravchenko v v and rabinovich v s 2003 quaternionic fundamental solutions for the numerical analysis of electromagnetic scattering problems ._ _ zeitschrift fr analysis und ihre anwendungen , v. 22 , no .1 , 147166 . kravchenko v g , kravchenko v v and williams b d 2001 a quaternionic generalization of the riccati differential equation .kluwer acad .publ . , _ clifford analysis and its applications _ , ed . by f. brackx et al ., 143 - 154 .sprssig w 2006 initial - value and boundary value problems with hypercomplex methods . in : _ clifford analysis and applications _ , sirkka - liisa eriksson ( editor ) , tampere university of technology , research report 82 .
we give an overview of recent advances in the analysis of the equations of electrodynamics with the aid of biquaternionic techniques . we discuss models with both constant and variable coefficients , integral representations of solutions , a numerical method based on biquaternionic fundamental solutions for solving standard electromagnetic scattering problems , and relations between different operators of mathematical physics , including the schrödinger operator , the maxwell system , the conductivity equation and others , leading to a deeper understanding of the physics and of the mathematical properties of the equations .
from the time of the celebrated greenwood and williamson ( 1966 ) paper , we know that surfaces that have been worn or abraded show a non - unique gaussian distribution . indeed , fig.6 of gw ( adapted here as fig.1 ) shows on gaussian probability paper the distribution of summit heights in a surface of mild steel which had been abraded and then slid against copper , which resembles a `` bimodal gaussian '' . more precisely , greenwood & wu 2001 returned to that very surface and commented that fig.6 of the gw paper _ `` shows an abraded surface which is largely flat but where the heights of the upper 80% may be regarded as gaussian . ( we may note that ` proper ' statistical tests would not reveal this fact)''_. [ fig.1 : fig.6 of gw , adapted - distribution of summit heights of the abraded mild steel surface on gaussian probability paper ] fig.2 . the distribution of asperity heights due to wear in borucki s solution . the delta function can not be represented for obvious reasons . this function is plotted in fig.2 for representative dimensionless time , and compared to the original gaussian tail . we can integrate analytically the difference between the borucki distribution and the original gaussian one , to find the fraction of asperities worn out , which we shall assume , in the absence of a better hypothesis , all lie on the , and , perhaps an even stronger assumption , maintain the same radius . obviously , if special experimental setups permit reasonably simple alternative assumptions , these could be readily incorporated . therefore , the so - modified borucki distribution is \[ \phi\left ( \overline{z},\overline{t}\right ) + f\left ( \overline{c},\overline{t}\right ) \delta\left ( \overline{z}-\overline{c}\right ) \quad,\quad\overline{z}>\overline{c}\] where \delta is the classical delta function . by developing the usual gw treatment for the number of asperities in contact , area and load , integrating over the distribution of asperity heights , we obtain for the compression . we can define the gaussian case integrals as , whereas the borucki version `` before the modification '' and after the modification , in considering the functions , additional contributions lead to heaviside functions . further , writing and , we can rewrite our final results as . results are quite as expected . fig.3 plots the integrals for the same case as fig.2 . starting with the integral giving the number of asperities in contact , it is clear that , at the `` jump '' , we recover the number of asperities in the unworn profile , by construction . upon increasing the wear time , the tail is increasingly worn out . looking now at fig.3b , this integral is proportional to the contact area , and , as in fig.3c , a change of slope results when moving across the jump , but not an actual jump as in the number of asperities in fig.3a . [ fig.3 : the integrals , panels ( a ) , ( b ) , ( c ) ] fig.4 . the ratio of the integrals which indicates mean pressure in the contact areas . we have shown , in a simple case , that wear or abrasion leads to a continuous change of roughness parameters , which affects the response of the rough contact .
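to make the role of the integrals discussed above more concrete , they can also be evaluated numerically : the continuous ( truncated ) part of the worn distribution is integrated by quadrature , and the delta contribution simply adds a term proportional to the worn - out fraction whenever the truncation height lies above the separation , which is exactly the origin of the heaviside terms mentioned in the text . the sketch below uses a plain gaussian tail and arbitrary values of the truncation height and worn fraction purely for illustration ; it is not borucki s expression .

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def gw_integral(n, d, phi, c, f):
    """generalised greenwood-williamson integral

        I_n(d) = integral_d^inf (z - d)**n * phi_mod(z) dz

    for a worn height distribution phi_mod(z) = phi(z) for z > c
    plus f * delta(z - c).  the continuous part is integrated
    numerically; the delta contributes f*(c - d)**n whenever c > d.
    """
    lo = max(d, c)
    cont, _ = quad(lambda z: (z - d) ** n * phi(z), lo, np.inf)
    delta_part = f * (c - d) ** n if c > d else 0.0
    return cont + delta_part

# illustrative numbers only: a unit gaussian tail, truncation at c = 1,
# worn-out fraction taken equal to the removed tail mass
c = 1.0
f = norm.sf(c)
for d in (0.0, 0.5, 1.5):
    print(d,
          gw_integral(0, d, norm.pdf, c, f),    # ~ number of asperities
          gw_integral(1, d, norm.pdf, c, f),    # ~ contact area
          gw_integral(1.5, d, norm.pdf, c, f))  # ~ load (hertzian)
```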
in cases when the process acts selectively on part of the asperity distribution , the effects are clearer , and a bimodal distribution results . it may therefore be an oversimplification to consider surfaces gaussian , as if coming simply from a single distribution . borucki , l. ( 2002 ) mathematical modeling of polish rate decay in chemical - mechanical polishing , 105 - 114 .
since the time of the original greenwood & williamson paper , it has been noticed that abrasion and wear lead to a possibly bimodal distribution of asperity heights , with the upper tail of asperities following from the characteristics of the process . using a limit case solution due to borucki for the wear of an originally gaussian distribution , it is shown here that the tail is indeed always gaussian , but with different equivalent parameters . therefore , if the wear process is light , one obtains a bimodal distribution , and both modes may affect the resulting contact mechanics behaviour . in this short note , we illustrate just the main features of the problem . we conclude that it is an oversimplification to consider surfaces gaussian .
a semialgebraic set is a subset of which is a solution set of a system of polynomial equations and inequalities .computation with semialgebraic sets is one of the core subjects in computer algebra and real algebraic geometry .a variety of algorithms have been developed for real system solving , satisfiability checking , quantifier elimination , optimization and other basic problems concerning semialgebraic sets .every semialgebraic set can be represented as a finite union of disjoint cells bounded by graphs of algebraic functions . the cylindrical algebraic decomposition ( cad ) algorithm can be used to compute a cell decomposition of any semialgebraic set presented by a quantified system of polynomial equations and inequalities .an alternative method of computing cell decompositions is given in .cell decompositions computed by the cad algorithm can be represented directly as cylindrical algebraic formulas ( caf ; see the next section for a precise definition ) .a caf representation of a semialgebraic set can be used to decide whether is nonempty , to find the minimal and maximal values of the first coordinate of elements of , to generate an arbitrary element of , to find a graphical representation of , to compute the volume of , or to compute multidimensional integrals over ( see ) .the cad algorithm takes a system of polynomial equations and inequalities and constructs a cell decomposition of its solution set .the algorithm consists of two phases .the projection phase finds a set of polynomials whose roots are sufficient to describe the cell boundaries .the lifting phase constructs a cell decomposition , one dimension at a time , subdividing cells at all roots of the projection polynomials .however , some of these subdivisions may be unnecessary , either because of the geometry of the roots or because of the boolean structure of the input system . in this paperwe propose an algorithm which combines the two phases .it starts with a sample point and constructs a cell containing the point on which the input system has a constant truth value .projection polynomials used to construct the cell are selected based on the structure of the system at the sample point .such a local projection set can often be much smaller than the global projection set used by the cad algorithm .the idea to use such locally valid projections was first introduced in , in an algorithm to decide the satisfiability of systems of real polynomial equations and inequalities .it was also used in , in an algorithm to construct a single open cell from a cylindrical algebraic decomposition .[ exa : mainexample]find a cylindrical algebraic decomposition of the solution set of , where , , and .the solution set of is equal to the union of the open ellipse and the intersection of the closed disk and the set bounded by a lissajous curve .as can be seen in the picture , the set is equal to the open ellipse .the cad algorithm uses a projection set consisting of the discriminants and the pairwise resultants of , , and .it computes a cell decomposition of the solution set of by constructing cells such that all , , and have a constant sign on each cell .note however , that a cell decomposition of the solution set of can be obtained by considering the following cells .on each cell only some of , , and have a constant sign , but those signs are sufficient to determine the truth value of . 1 . is on because .2 . is on and on because .3 . is on and on because .4 . is on and on because . is on and on because . is on because . 
is on and on because . is on because .determining the cell bounds for the cell stack - requires computation of roots of , , and in and roots of and in .determining the cell bounds for the cells requires computation of roots of and in and roots of , , and in .determining the cell bounds for the cell stacks - and - requires computation of roots of , , , , and in .polynomial is not used to compute any of the projections and its roots in are computed only for two values of .the algorithm we propose in this paper computes a cell decomposition of the solution set of by constructing the cells given in - .details of the computation for this example are given in section [ sub : example ] .find a cylindrical algebraic decomposition of the solution set of in the variable order . in this examplethe system is not well - oriented , hence the cad algorithm needs to use hong s projection operator for the first three projections .however , the additional projection polynomials are necessary only for the cells on which a mccallum s projection polynomial vanishes identically . for most cellslocal projection can be computed using mccallum s projection operator , and for the few cells on which a mccallum s projection polynomial vanishes identically local projection needs to use some , but usually not all , polynomials from hong s projection operator .the algorithm lpcad we propose in this paper computes a cell decomposition of the solution set of by constructing cells in seconds of cpu time .the cad algorithm did not finish the computation in hours .a version of lpcad using only local projections based on hong s projection operator constructs cells and takes seconds of cpu time .a _ system of polynomial equations and inequalities _ in variables is a formula where ] .a _ real algebraic function _ given by the _ defining polynomial _ and a _ root number _ is the function where is the -th real root of ] and a _ root number _ is the -th real root of .see for more details on how algebraic numbers and functions can be implemented in a computer algebra system .let be a connected subset of . is _ regular _ on _ _ if it is continuous on , for all , and there exist _ _ such that for any is a root of of multiplicity . is _ degree - invariant _ on if there exist __ such that if for all .a set of polynomials is _ delineable _ on if all elements of are degree - invariant on and for where are disjoint regular real algebraic functions and for and are either disjoint or equal .functions are _ root functions of over . a set of polynomials is _ analytic delineable _ on a connected analytic submanifold of if is delineable on and the root functions of elements of over are analytic .let be delineable on , let be all root functions of elements of over , and let and . 
for , the -th _-section over _ is the set for , the -th _-sector over _ is the set a formula is an _ algebraic constraint _ with_ bounds _ if it is a level- equational or inequality constraint with defined as follows ._ _ 1 ._ a level_- _ equational constraint _ has the form , where is a real algebraic number , and .a level_- _ inequality constraint _ has the form , where and are real algebraic numbers , , or , and .a level_- _ equational constraint _ has the form , where is a real algebraic function , and .a level_- _ inequality constraint _ has the form , where and are real algebraic functions , , or , and .a level- algebraic constraint is _ regular _ on a connected set if all elements of are regular on and , if is an inequality constraint , on .an _ atomic cylindrical algebraic formula ( caf ) _ in has the form , where is a level- algebraic constraint for and is regular on the solution set of for . _level- cylindrical subformulas _ are defined recursively as follows 1 .a level- cylindrical subformula is a disjunction of level- algebraic constraints .2 . a level- cylindrical subformula , with ,has the form where are level- algebraic constraints and are level- cylindrical subformulas .a _ cylindrical algebraic formula ( caf ) _ is a level- cylindrical subformula such that distributing conjunction over disjunction in gives where each is an atomic caf . given a quantified system of real polynomial equations and inequalities the cad algorithm returns a caf representation of its solution set . the following formula is a caf representation of the closed unit ball. where this section we describe an algorithm for computing a caf representation of the solution set of a system of polynomial equations and inequalities .the algorithm uses local projections computed separately for each cell . for simplicitywe assume that the system is not quantified .the algorithm can be extended to quantified systems following the ideas of .the algorithm in its version given here does not take advantage of equational constraints .the use of equational constraints will be described in the full version of the paper .the main , recursive , algorithm used for cad construction is algorithm [ alg : lpcad ] .let us sketch the algorithm here , a detailed description is given later in this section . the input is _ _ a system _ _ of polynomial equations and inequalities and a point with .the algorithm finds a level- cylindrical subformula and a set of polynomials _ ] _ such that is delineable on any cell containing on which all elements of have constant signs and we add the elements of to .let be the interval containing bounded by the nearest roots of elements of and let be the constraint on whose bounds are the corresponding algebraic functions .note that if is delineable on a cell containing then the elements of have constant signs on and hence is equivalent to on .we add and to the list of s , s , and , if is nonempty , we add the components of to stack . when the stack is empty we use projection to compute a set _ ] .3 . for a set and denote the projection of on .in this section we assume that all polynomials have coefficients in a fixed computable subfield , irreducibility is understood to be in the ring of polynomials with coefficients in , irreducible factors are always content - free and chosen in a canonical way , and finite sets of polynomials are always ordered according to a fixed linear ordering in the set of all polynomials with coefficients in . in our implementation . 
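the practical appeal of the caf representation is that membership , sampling and integration can be carried out one coordinate at a time , each coordinate being confined between algebraic functions of the previous ones . the following sketch evaluates membership for the closed unit ball caf given above ; the function and variable names are ours , and the square roots play the role of the root functions bounding each coordinate .

```python
import math

def in_unit_ball_caf(x1, x2, x3):
    """membership test written in the cylindrical (nested-bounds) form
    of the unit-ball caf: each coordinate is constrained between
    algebraic functions of the previous coordinates."""
    if not (-1.0 <= x1 <= 1.0):
        return False
    r1 = math.sqrt(max(0.0, 1.0 - x1 * x1))            # root function of x2^2 + x1^2 - 1
    if not (-r1 <= x2 <= r1):
        return False
    r2 = math.sqrt(max(0.0, 1.0 - x1 * x1 - x2 * x2))  # root function of x3^2 + x2^2 + x1^2 - 1
    return -r2 <= x3 <= r2

print(in_unit_ball_caf(0.5, 0.5, 0.5))   # True: 0.75 <= 1
print(in_unit_ball_caf(0.9, 0.5, 0.0))   # False: rejected already at the x2 bound
```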
whenever we write _ _ with we include the possibility of , the only element of .let ] .1 . put and compute .2 . for .put .2 . if and put and continue the loop .3 . if , , and none of is a nonzero constant , put , where is maximal such that .4 . put .5 . if then put 3 .return . in the next algorithmwe use the following notation .let [x_{k+1}] ] .1 . put and compute .2 . for .put and .2 . if put and continue the loop .3 . if , put and , where is maximal such that .4 . put .if then for if put .3 . return .the following algorithm computes a local projection for given and .[ alg : localproj](localprojection ) + input : _ a finite set _ ] and then return .if $ ] return .3 . return .3 . if 1 . for compute .2 . if for some then return .3 . if for all then return .4 . return .4 . if 1 . for compute .2 . if for some then return .if for all then return .4 . return .we can now present a recursive algorithm computing cylindrical algebraic decomposition using local projections .[ alg : lpcad](lpcad ) + input : _ a system _ _ of polynomial equations and inequalities and with . _ + _ a pair , where is a level- cylindrical subformula , _ _ , _ _ for , and for any cell if and for all elements of have constant signs on then _ 1 .compute a disjunctive normal form and a conjunctive normal form of .2 . set and .3 . while do 1 . remove a tuple from . are algebraic functions of , , or , , , , and the tuple represents the interval , 2 .if set and set , where , else pick a rational number and set .set .3 . compute . if then set and , and go to .4 . compute . if then set and , and go to .5 . compute . for .compute .6 . for set . 7 . if then set and go to .find such that 1 . and or , 2 . and or , 3 . and , 4 . either or and there are no roots of elements of in .if then set , add and to , and go to .11 . if then set and .else set and , and if or add to . 12 . if then set and .else set and , and if or add to . 13 . set . 14 . set 4 .sort a by increasing values of the first element , obtaining . set .5 . compute .6 . for set . 7 .return _ ._ returns where is a cylindrical algebraic formula equivalent to .the formula returned by algorithm [ alg : lpcad ] may involve weak inequalities , but it can be easily converted to the caf format by replacing weak inequalities with disjunctions of equations and strict inequalities . to prove correctness of algorithm [ alg : localproj ] we use the following lemmata .[ lem : lprojmc]let _ _ _ _ , _ _ _ _ , _ _ a connected analytic submanifold of such that and all elements of are order - invariant in then the set of all elements of that are not identically zero on is analytic delineable over and the elements of are order - invariant in each -section over .suppose that .step of algorithm [ alg : lprojmc ] guarantees that has a sign - invariant leading coefficient in . does not vanish identically at any point in ( for it is ensured by step ; for it follows from irreducibility of ) . by theorem 3.1 of , is degree - invariant on . 
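for readers who want to experiment , the ingredients of a ( global , mccallum - style ) projection step are easy to reproduce with a computer algebra system : leading coefficients , discriminants and pairwise resultants of the irreducible factors with respect to the variable being projected . the sketch below uses sympy and is only meant to make these objects concrete ; it is a global projection , whereas the point of algorithm [ alg : localproj ] is that , at a given sample point , only a subset of these polynomials ( plus , in the non - well - oriented cases , a few extra ones from hong s operator ) is actually needed . the helper name is ours .

```python
from itertools import combinations
from sympy import Poly, discriminant, resultant, factor_list, symbols

def projection_step(polys, var):
    """simplified mccallum-flavoured projection step: for the irreducible
    factors of the input polynomials, collect leading coefficients,
    discriminants and pairwise resultants with respect to `var`."""
    factors = set()
    for p in polys:
        for f, _mult in factor_list(p)[1]:
            if Poly(f, var).degree() > 0:
                factors.add(f)
    proj = set()
    for f in factors:
        lc = Poly(f, var).LC()
        if lc.free_symbols:
            proj.add(lc)
        disc = discriminant(f, var)
        if disc.free_symbols:
            proj.add(disc)
    for f, g in combinations(factors, 2):
        res = resultant(f, g, var)
        if res.free_symbols:
            proj.add(res)
    return proj

x, y = symbols('x y')
f1 = x**2 + y**2 - 1       # circle
f2 = 4*x**2 + y**2 - 2     # ellipse
print(projection_step([f1, f2], y))   # discriminants and a resultant in x
```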
since , by theorem 2 of , is analytic delineable over and is order - invariant in each -section over .suppose that and .if either or has no real roots then is delineable on .otherwise and hence , by theorem 2 of , is analytic delineable over .therefore , is analytic delineable over and the elements of are order - invariant in each -section over .[ lem : lprojh]let _ _ _ _ , _ _ _ _ , _ _ a connected subset of such that and all elements of are sign - invariant in then the set of all elements of that are not identically zero on is delineable over .suppose that .let be maximal such that , and let .steps and of algorithm [ alg : lprojh ] guarantee that in . by step and theorems 1 - 3 of , is delineable over , and hence is delineable over .suppose that and .if either or has no real roots then is delineable on . otherwise without loss of generalitywe may assume that due to step contains all factors of .by lemma 1 of and theorem 2 of , the degree of is constant for .since and are degree - invariant in , by lemma 12 of , is delineable over . therefore is delineable over .[ pro : localproj]algorithm [ alg : localproj ] terminates and returns a local projection sequence for at . to show that the algorithm terminates note that the body of the loop in step is executed at most times .let be the returned sequence .steps and ensure that is a finite subset of and for .we will recursively construct a cell such that is the maximal connected set containing such that all elements of for have constant signs on .moreover , for , the set of elements of that are not identically zero on is delineable over .this is sufficient to prove that is a local projection sequence for at , because for any cell if and all elements of for have constant signs on then , by maximality of .we will consider two cases depending on the value of when the algorithm terminated .suppose first that when the algorithm terminated was . in this case we will additionally prove that for is an analytic submanifold of , all elements of are order - invariant in , andif then none of the elements of vanishes identically at any point in , is analytic delineable on , and the elements of are order - invariant in each -section over .if is a root of an element of let else let , where and are roots of elements of , , or , , and there are no roots of in . is a connected analytic submanifold of and all elements of are order - invariant in . since the elements of are irreducible , none of the elements of vanishes identically at any point in .since all irreducible factors of elements of belong to , by lemma [ lem : lprojmc ] , is analytic delineable over and the elements of are order - invariant in each -section over .suppose that , for some , we have constructed satisfying the required conditions .the conditions imply that is analytic delineable on .let be the -section or -sector over which contains . is an analytic submanifold of .the elements of are order - invariant in , because they are order - invariant in each -section over and nonzero in each -sector over .since all irreducible factors of elements of belong to , by lemma [ lem : lprojmc ] , is analytic delineable over and the elements of are order - invariant in each -section over .step guarantees that if then .suppose now that when the algorithm terminated was .let be as in the first part of the proof .as before , is analytic delineable over and the elements of are order - invariant in each -section over .let be the -section or -sector over which contains . 
is an analytic submanifold of .the elements of are order - invariant in , because they are order - invariant in each -section over and nonzero in each -sector over .since all irreducible factors of elements of belong to , by lemma [ lem : lprojmc ] , is analytic delineable over .suppose that , for some , we have constructed satisfying the required conditions .the conditions on imply that is delineable on .let be the -section or -sector over which contains .all elements of are sign - invariant in .since all irreducible factors of elements of belong to , by lemma [ lem : lprojh ] , is delineable over .since for , is the -section or -sector over which contains , is the maximal connected set containing such that all elements of for have constant signs on .correctness and termination of algorithm [ alg : peval ] is obvious .algorithm [ alg : lpcad ] terminates and the returned pair satisfies the required conditions .let be the set of all polynomials that appear in and let be the hong s projection sequence for ( the variant of given in proposition 7 of ) .suppose that and , where .let .since we assume that finite sets of polynomials are consistently ordered according to a fixed linear order in the set of all polynomials , for . hence all polynomials that appear during execution of are elements of .in particular , and that appear in the elements of are roots of elements of , , or .therefore , the number of possible elements of is finite , and hence the loop in step terminates .recursive calls to increment . when then either step yields or step yields , and hence step containing the recursive call to is never executed .therefore the value of is bounded by , and hence the recursion terminates .let be the pair returned by and suppose that is a cell such that and for all elements of have constant signs on .we need to show that _ _ let and .we need to show that .let , as computed in step .all elements of have constant signs on on , for .since none of the elements of vanishes identically at , is delineable over .hence the -sections and the -sectors over form a partition of . for a tuple that appears on in any iteration of the loop in step put for each put note that each and is a union of -sections and -sectors over . put and .we will show that in each instance of the loop in step is a partition of .in the first instance of the loop in step and , and hence is a partition of .we will show that this property is preserved in each instance of the loop . in each instancea tuple is removed from and is added to . if in step then and the property is preserved . if in step then and tuples and are added to . since is a partition of , the property is preserved . otherwise steps - are executed .if in step or and then put , where is the tuple added to , else put .if in step or and then put , where is the tuple added to , else put . since is a partition of , the property is preserved .after the loop in step is finished is empty , , and hence is a partition of .let be such that .let us analyze the instance of the loop in step which resulted in adding to .let .suppose first that or was found in step or .let , as computed in step or .for , , and hence all elements of have constant signs on on .therefore the set of elements of that are not identically zero on is delineable over . by definition of , is a -section or a -sector over .hence all elements of have constant signs on . 
in particular , all elements of have constant signs on , and so .now suppose that was computed in step .let for , , and hence all elements of have constant signs on on .as before , is delineable over , is a -section or a -sector over , and all elements of have constant signs on . in particular , all elements of have constant signs on . since for , all elements of have constant signs on on .hence and so .[ rem : zdim]the following somewhat technical improvements have been observed to improve practical performance of algorithm [ alg : lpcad ] . 1 . in step of algorithm [ alg : lprojmc ] in be chosen arbitrarily as long as , hence an implementation may choose the simplest .if in a recursive call to the initial coordinates correspond to single - point intervals , that is in step of the currently evaluated iteration of loop in all parent computations of for , then does not need to compute the last levels of projection .instead it can return with .computations involved in finding projections are repeated multiple times . a practical implementation needs to make extensive use of caching . in this sectionwe apply to solve the problem stated in example [ exa : mainexample ] . in step of compute and . in the first iteration of loop remove a tuple representing from and pick .the calls to in steps and yield .step makes a recursive call to . in the first iteration of loop in we remove a tuple representing from and pick . in step yields .we continue on to step where yields .we set and compute where is the set of factors of .we go to step and set . in step we find , , , and . in step we set . in steps and we add tuples representing and to . in step obtain . in the second iteration of loop in we remove a tuple representing from and pick . in step yields .we set and compute where is the set of factors of , , and .we go to step and set . in step we find , , and . in step we set . in step we add a tuple representing to . in step obtain . in the third iteration of loop in we remove a tuple representing from and set . in step yields we set and compute where .we go to step and set . in step we set . in step we obtain .the remaining two iterations of loop look very similar to the last two . in step obtain . in step we compute and in step we set .the returned value is . in step of obtain and . yields . in step we find , , , and . in steps and we add tuples representing and to . in step obtain . in the second iteration of loop in we remove a tuple representing from and pick . the calls to in steps and yield .step makes a recursive call to . in the first iteration of loop in we remove a tuple representing from and pick . in step yields we set and compute where is the set of factors of and ( is not a part of the projection because and have no real roots ) .we go to step and set . in step we find and . in step remains empty . in step we obtain .the loop ends after one iteration and the returned value is . in step of we obtain and . yields . in step we find , , and . in step we add a tuple representing to . in step obtain . in the third iteration of loop in we remove a tuple representing from and pick . the calls to in steps and yield .step makes a recursive call to . in the first iteration of loop in we remove a tuple representing from and pick . in step yields we set and compute where , by remark [ rem : zdim ] , we can take .we go to step and the set remains empty . in step we find and . in step we set . in step we add tuples representing and to . in step obtain . 
in the second iteration of loop in we remove a tuple representing from and pick . in step yields .we set and compute , where , by remark [ rem : zdim ] , we can take .we go to step and the set remains empty . in step we find , and . in step we set . in step we obtain .the remaining iteration of loop look very similar to the last one . in step obtain . in step we compute by remark [ rem : zdim ] .the returned value is . in step of obtain and . yields . in step we set . in step we obtain .the remaining two iterations of loop look very similar to the last two . in step obtain and the returned value is .algorithm [ alg : lpcad ] ( ) and the cylindrical algebraic decomposition ( ) algorithm have been implemented in c , as a part of the kernel of _ mathematica_. the experiments have been conducted on a linux server with a -core ghz intel xeon processor and gb of ram available for all processes . the reported cpu time is a total from all cores used . since we do not describe the use of equational constraints in the current paper , we have selected examples that do not involve equations .[ exa : ellipse in a square](ellipse in a square ) find conditions for ellipse to be contained in the square .we compute a cylindrical algebraic decomposition of the solution set of with the free variables ordered .results of experiments are given in table [ tab : benchmark - examples ] .examples from are marked with w and the original number .the columns marked time give the cpu time , in seconds , used by each algorithm .the columns marked cells give the number of cells constructed by each algorithm .the column marked wo tells whether the system is well - oriented .experiments suggest that for systems that are not well - oriented lpcad performs better than cad . for well oriented - systems lpcad usually construct less cells than cad , but this does not necessarily translate to a faster timing , due to overhead from re - constructing projection for every cell .however , for some of the well - oriented systems , for instance example [ exa : distance to three squares ] , lpcad is significantly faster than cad , due to its ability to exploit the boolean structure of the problem .unfortunately we do not have a precise characterisation of such problems .nevertheless lpcad may be useful for well - oriented problems that prove hard for the cad algorithm or may be tried in parallel with the cad algorithm . c. w. brown . constructing a single open cell in a cylindrical algebraic decomposition . in _ proceedings of the international symposium on symbolic and algebraic computation , issac 2013 _ ,pages 133140 .acm , 2013 .c. chen , m. m. maza , b. xia , and l. yang .computing cylindrical algebraic decomposition via triangular decomposition . in _ proceedings of the international symposium on symbolic and algebraic computation , issac 2009 _ , pages 95102 .acm , 2009 .h. hong .an improvement of the projection operator in cylindrical algebraic decomposition . in _ proceedings of the international symposium on symbolic and algebraic computation ,issac 1990 _ , pages 261264 .acm , 1990 .s. mccallum .an improved projection for cylindrical algebraic decomposition . in b.caviness and j. johnson , editors , _ quantifier elimination and cylindrical algebraic decomposition _ , pages 242268 .springer verlag , 1998 .a. strzeboski .computation with semialgebraic sets represented by cylindrical algebraic formulas . 
in _ proceedings of the international symposium on symbolic and algebraic computation , issac 2010 _ , pages 6168 .acm , 2010 .a. strzeboski . solving polynomial systems over semialgebraic sets represented by cylindrical algebraic formulas . in _ proceedings of the international symposium on symbolic and algebraic computation ,issac 2012 _ , pages 335342 .acm , 2012 .
we present an algorithm which computes a cylindrical algebraic decomposition of a semialgebraic set using projection sets computed for each cell separately . such local projection sets can be significantly smaller than the global projection set used by the cylindrical algebraic decomposition ( cad ) algorithm . this leads to reduction in the number of cells the algorithm needs to construct . we give an empirical comparison of our algorithm and the classical cad algorithm .
liquid chromatography combined with electrospray ionisation mass spectrometry ( esi - ms ) is one of the most frequently used approaches for conducting metabolomics experiments .collision - induced dissociation ( cid ) is usually employed within this procedure , intentionally fragmenting molecules into smaller parts to examine their structure .this is called ms / ms or tandem mass spectrometry . a significant bottleneck in such experiments is the interpretation of the resulting spectra to identify metabolites .widely used methods for putative metabolite identification , using mass spectrometry , compare a collected ms or ms / ms spectrum for an unknown compound against a database containing reference ms or ms / ms spectra .unfortunately , current reference databases are still fairly limited , especially in the case of esi - ms / ms . at the time of writing , the public human metabolome database contains esi - ms / ms data for around 800 compounds , which represents only a small fraction of the 40,468 known human metabolites it lists .the publicly available metlin database provides esi - ms / ms spectra for 11,209 of the 75,000 endogenous and exogenous metabolites it contains , although more than half of those spectra are for enumerated tripeptides .the public repository massbank contains a more diverse dataset of 31,000 spectra collected on a variety of different instruments , including esi - ms / ms spectra for approximately 2000 unique compounds .however , set against the more than 19 million chemical structures in the pubchem compound database , an estimated 200,000 plant metabolites , or even the 32,801 manually annotated entries in the database of chemical entities of biological interest ( chebi ) , we see that ms / ms coverage still falls far short of the vast number of known metabolites and molecules of interest .consequently , there is substantial interest in finding alternative means for identifying metabolites for which no reference spectra are available . for these cases , one approach to metabolite identification involves first predicting the ms or ms / ms spectrum for each candidate compound from its chemical structure .the interpreter then uses these predicted spectra in place of reference spectra , and labels the target spectrum as the metabolite whose predicted spectrum is the closest match , according to some similarity criteria . 
a wide range of similarity criteriahave been proposed , from weighted counts of the number of matching peaks , to more complex probability based measures .the upshot of this predictive approach is that only a list of candidate molecules is needed , rather than a complete database of reference spectra .however , the restriction to a list of candidate molecules means that this approach still falls short of _ de novo _ identification of unknown unknowns , -i.e .we can not identify molecules not in the list .the concept of computer - based ms prediction has been around since the dendral project in the 1960 s , when investigators attempted to predict electron ionization ( ei ) mass spectra using early machine learning methods .more recent approaches to this problem have generally taken one of two forms : rule - based or combinatorial .commercial packages , such as mass frontier ( thermo scientific , www.thermoscientific.com ) , and ms fragmenter ( acd labs , www.acdlabs.com ) , are rule - based , using thousands of manually curated rules to predict fragmentations .primarily developed for ei fragmentation , these packages have been extended for use with esi .this current work does not compare against these methods empirically , however in at least one study they have been found to have been out - performed by metfrag , to which we do compare. another knowledge - based approach , called massimo , combines chemical knowledge with data ; using logistic regression to predict fragmentation probabilities for a particular class of ei fragmentations .the other class of algorithms applies a combinatorial fragmentation procedure , enumerating all possible fragments of the original structure by systematically breaking bonds .first proposed by , this method has been incorporated into the freely available programs fid and metfrag .both identify the given spectrum with the metabolite that has the most closely matching peaks via such a combinatorial fragmentation .these programs also employ several heuristics in their scoring protocols to emphasise the importance of more probable fragmentations .fid uses an approximate measure of the dissociation energy of the broken bond , combined with a measure of the energy of the product ion .metfrag incorporates a similar measure of bond energy combined with a bonus if the neutral loss formed is one of a common subset .an alternative method , fingerid , takes advantage of the increasing number of available ms / ms spectra , by applying machine learning methods to this task .this program uses support vector machines ( svms ) to predict a chemical fingerprint directly from an ms / ms spectrum , and then searches for the metabolite that most closely matches that predicted fingerprint . .the main problem with the current combinatorial methods is that , while they have very good recall , explaining most if not all peaks in each spectrum , they also have poor precision , predicting many more peaks than are actually observed .metfrag and fid attempt to address this problem by adding the heuristics described above . 
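to make the spectrum - versus - spectrum comparison step concrete , here is a minimal sketch of a weighted peak - matching score of the kind mentioned above . the tolerance , weighting exponents , helper name and toy peak lists are arbitrary illustrative choices , not values or code used by any of the cited tools .

```python
def weighted_peak_match(spec_a, spec_b, tol=0.5, mz_power=1.0, int_power=0.5):
    """simple weighted peak-matching score between two spectra.

    each spectrum is a list of (mz, intensity) pairs.  peaks are matched
    greedily within an m/z tolerance; each match contributes the product
    of the two peak weights, where weight = mz**mz_power * intensity**int_power.
    the score is normalised so that it lies in [0, 1].
    """
    def weights(spec):
        return [(mz, (mz ** mz_power) * (inten ** int_power)) for mz, inten in spec]

    wa, wb = weights(spec_a), weights(spec_b)
    used = set()
    matched = 0.0
    for mz_a, w_a in wa:
        best, best_j = None, None
        for j, (mz_b, w_b) in enumerate(wb):
            if j not in used and abs(mz_a - mz_b) <= tol:
                if best is None or abs(mz_a - mz_b) < best:
                    best, best_j = abs(mz_a - mz_b), j
        if best_j is not None:
            used.add(best_j)
            matched += w_a * wb[best_j][1]
    norm_a = sum(w * w for _, w in wa) ** 0.5
    norm_b = sum(w * w for _, w in wb) ** 0.5
    return matched / (norm_a * norm_b) if norm_a and norm_b else 0.0

# toy spectra: (m/z, relative intensity)
measured  = [(91.05, 100.0), (65.04, 35.0), (39.02, 10.0)]
predicted = [(91.05, 80.0), (65.03, 50.0), (51.02, 5.0)]
print(weighted_peak_match(measured, predicted))
```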
in our work ,we investigate an alternative machine learning approach that aims to improve the precision of such combinatorial methods .we propose a method for learning a generative model of the cid fragmentation process from data .this model estimates the likelihood of any given fragmentation event occurring , thereby predicting those peaks that are most likely to be observed .we hypothesise that increasing the precision of the predicted spectrum in this way will improve our system s ability to accurately identify metabolites . section [ sec : cfm ] provides details of our proposed model and the training method .section [ sec : results ] then reports the experimental results .we will assume the reader knows the foundations of esi ms / ms ; for an introduction to this process , see .[ [ sec : cfm ] ] this section presents our model for the esi - ms / ms cid fragmentation process , which we call competitive fragmentation modeling ( cfm ) , and a method for deriving parameters for this model from existing ms / ms data .section [ sec : se ] describes the simplest form of this method ; single energy competitive fragmentation modeling ( se - cfm ) .section [ combinedenergymodel ] then presents an extension of this method , combined energy competitive fragmentation modeling ( ce - cfm ) , which aims to make better use of cid ms / ms spectra measured at different energy levels for the same compound .windows executables and cross - platform source code and the trained models are freely available at http://sourceforge.net / projects / cfm - id/. a web server interface is also provided at http://cfmid.wishartlab.com .section [ sec : results ] in single energy cfm ( se - cfm ) , we model esi - ms / ms fragmentation as a stochastic , homogeneous , markov process involving state transitions between charged fragments , as depicted in figure [ fig : model](a ) . [fig : model ] more formally , the process is described by a fixed length sequence of discrete , random fragment states , where each takes a value from the state space , the set of all possible fragments ; this state space will be further described in section [ statespace ] .a transition model defines the probabilities that each fragment leads to another at one step in the process ; see section [ transitionmodel ] .an observation model maps the penultimate node to a peak , which takes on a value in that represents the m / z value of the peak to which the final fragment will contribute ; see section [ observationmodel ] .se - cfm is a latent variable model in which the only observed variables are the initial molecule and the output peak ; the fragments themselves are never directly observed .each output adds only a small contribution to a single peak in the mass spectrum . in order to predict a complete mass spectrum, we can run the model forward multiple times to compute the marginal distribution of .we make the following assumptions about the cid fragmentation process .further details for the motivations of each are provided below , but these generally involve a trade - off between accurately modeling the process and keeping the model computationally tractable . 1 .all input molecules have a single positive charge and exist in their most common isotopic form. 2 . in a collision, each molecule will break into two fragments .no mass or charge is lost .one of the two fragments must have a single positive charge and the other must be neutral .combined , the two must contain all the components of the original charged molecule , i.e. 
all the atoms and electrons .4 . no further sigma bonds can be removed or added during a break , except those connecting hydrogens i.e .the edges in the molecular graph must remain the same .rearrangement of pi bonds is allowed and hydrogen atoms may move anywhere in the two resulting fragments , on the condition that both fragments satisfy all valence rules , and standard bond limitations are met e.g .no bond orders higher than triple .the even electron rule is always satisfied i.e .no radicals .assumption 1 is reasonable as we assume that the first phase of ms / ms successfully restricts the mass range of interest to include only the [ m+h] precursor ion containing the most abundant isotopes .since this ion has only a single positive charge , we can safely assume that no multiply - charged ions will be formed in the subsequent ms2 phase .ensuring that valid [ m+h] precursor ions are selected in ms1 is beyond the scope of this work ; see for a summary of ms1 data processing methods .assumptions 2 , 4 and 6 do not necessarily hold in real - world spectra .however including them substantially reduces the branching factor of the fragment enumeration , making the computations feasible . since these assumptions do appear to hold in the vast majority of cases, we expect that including them should have minimal negative impact on the experimental results .note that most 3-way fragmentations can be modeled by two sequential , 2-way fragmentations , so including assumption 2 should not impact our ability to model most fragmentation events .assumption 5 allows for mclafferty rearrangement and other known fragmentation mechanisms .our method for enumerating fragments is similar in principle to the combinatorial approach used in metfrag and fid , with some additional checks to enforce the above assumptions .we systematically break all non - ring bonds in the molecule ( excluding those connecting to hydrogens ) and all pairs of bonds within each ring .we do this one break at a time , enumerating a subset of fragments with all possible masses that may form after each break , .this subset is found by determining the number of additional electrons that can be allocated to either side of the break using integer linear programming e.g . breaking the middle bond in ccc[ch4 + ]( smiles format ) gives fragments c=[ch3 + ] ( mass=29.04da , loss cc ) and c[ch4 + ] ( mass=31.05da , loss c = c ) , the fragmentation procedure is applied recursively on all the produced fragments , to a maximum depth .the result is a directed acyclic graph ( dag ) containing all possible charged fragments that may be generated from that molecule .an abstract example of such a fragmentation graph is provided in figure [ fig : fragmentationgraph ] .note that for each break , one of the two produced fragments will have no charge .since it is not possible for a mass spectrometer to detect neutral molecules , we do not explicitly include the neutral fragments in the resulting graph , nor do we recur on their possible breaks .however neutral loss information may be included on the edges of the graph , indicating how a particular charged fragment was determined . our parametrized transition model assigns a conditional probability to each fragment given the previous fragment in the sequence ,, , . in the case where has as a possible child fragment in a fragmentation graph , our model assigns a positive probability to the transition from to .furthermore , self - transitions are always allowed , i.e. 
the probability of transitioning from to is always positive ( for the same ) .we assign 0 probability to all other transitions , i.e. those that are not self - transitions , and that do not exist within any fragmentation graph .although the set of possible charged fragments is large , the subset of child fragments originating from any particular fragment is relatively small .for example , the requirement that a feasible child fragment must contain a subset of the atoms in the parent fragment rules out many possibilities .consequently most transitions will be assigned a probability of 0 .note that the assigned probabilities of all transitions originating at a particular fragment , including the self - transition , must sum to one .we now discuss how we parametrize our transition model .a natural parametrization would be to use a transition matrix containing a separate parameter for every possible fragmentation .unfortunately , we lack sufficient data to learn parameters for every individual fragmentation in this manner . instead , we look for methods that can generalize by exploiting the tendency of similar molecules to break in similar ways . we introduce the notion of _ break tendency _ , which we represent by a value for each possible fragmentation that models how likely a particular break is to occur .those fragmentations that are more likely to occur are assigned a higher break tendency value , and those that are less likely are given lower values .we then employ a softmax function to map the break tendencies for all breaks involving a particular parent fragment to probabilities , as defined in equation [ rhoequation ] below .this has the effect of capturing the competition that occurs between different possible breaks within the same molecule .for example , consider the two fragmentations in figure [ fig : twobreaks ] . here ,although both fragmentations involve an h neutral loss , in the left - hand case , the h loss must compete with the loss of an ammonia group , whereas in the right hand case , it does not .hence our model might assign an equal break tendency to both cases , but this would still result in a lower probability of fragmentation in the former case , due to the competing ammonia . two similar breaks , both resulting in an h neutral loss . the right case should be assigned a higher probability , as in the left case , the nh is also likely to break away , reducing the probability of the h loss . ]we model the probability of a particular break occurring as a function of its break tendency value and that of all other competing breaks from the same parent , as follows : where the sums iterate over all for which is possible .since the break tendency is a relative measure , it makes sense to tie it to some reference point . for the purposes of this model ,we have assigned the break tendency for a self - transition ( i.e. no break occuring ) to , which gives as shown in ( [ rhoequation ] ) .[ [ incorporating - chemical - features ] ] incorporating chemical features + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we need to compute for . to do thiswe first define a binary feature vector to describe the characteristics of a given break .such features might include the presence of a particular atom adjacent to the broken bond , or the formation of a specific neutral loss molecule e.g .see section [ sec : chemicalfeatures ] .we then use these features to assign a break tendency value using a linear function parameterized by a vector of weights i.e . . 
thiscan then be substituted into ( [ rhoequation ] ) to generate the probability of transition .the first feature of is a bias term , set to 1 for all breaks .note that the vector constitutes the parameters of the cfm model that we will be learning .we model the conditional probability of using a narrow gaussian distribution centred around the mass ) and hence can just use the mass here .] of , i.e. .the value for can be set according to the mass accuracy of the mass spectrometer used .so , we define this observation function to be the following our investigation ( see supplementary data ) of the mass error of the precursor ions in the metlin metabolite data used in section [ sec : results ] found that the distribution of mass errors had a mean offset of approximately 1 ppm , and a narrower shape than a gaussian distribution .however , in order to model a more general mass error , not specific to a particular instrument or set of empirical data , we think the gaussian distribution is a reasonable approach .our system estimates the values for the parameters of the proposed model by applying a training procedure to a set of molecules , for which we have both the chemical structure and a measured ms / ms spectrum . for the purposes of this work ,we assume we have a measured low , medium and high energy cid ms / ms spectrum for each molecule , which we denote ( , , ) each spectrum is further defined to be a set of peaks , where each peak is a pair , composed of a mass and a height ( or intensity ) \subset \mathbb{r} ] is the indicator function .this does not permit a simple closed - form solution for .however is concave in , so settings for can be found using gradient ascent .values for the joint probabilities in the terms can be computed efficiently using the junction tree algorithm .we also add an regularizer on the values of to ( excluding the bias term ) .this has the effect of discouraging overfitting by encouraging the parameters to remain close to zero .ms / ms spectra are often collected at multiple collision energies for the same molecule .increasing the collision energy usually causes more fragmentation events to occur .this means that fragments appearing in the medium and high energy spectra are almost always descendants of those that appear in the low and medium energy spectra , respectively .so the existence of a peak in the medium energy spectrum may help to differentiate between explanations for a related peak in the low or high energy spectra .for this reason , we also assessed an additional model , combined energy cfm ( ce - cfm ) , which extends the se - cfm concept by combining information from multiple energies as shown in fig .[ model ] ( b ) .plow , pmed and phigh each represent a peak from the low , medium and high energy spectrum respectively .the fragment states , transition rules and the observation model are all the same here as for se - cfm .the main difference now is that the homogeneity assumption is relaxed so that separate transition likelihoods can be learned for each energy block i.e ., to , to and to , where , and denote the fragmentation depths of the low , medium and high energy spectra respectively .this results in separate parameter values for each energy , denoted respectively as , and .the complete parameter set for this model thus becomes .we can again use a maximum likelihood approach to parameter estimation based on the em algorithm .this approach deviates from the se - cfm method only as follows : * for each energy level , ( [ gradequation ] ) is 
computed separately , restricting the terms to relevant parts of the model e.g . would sum from to when computing the gradients for , and from to when computing gradients for . *the computation of the terms combines evidence from the full set of three spectra . in se - cfm, we apply one spectrum at a time , effectively sampling from a distribution over the peaks from each observed spectra . in this extended modelwe can not do this because we do not have a full joint distribution over the peaks , but rather we only have marginal distributions corresponding to each spectrum .the standard inference algorithms e.g .the junction tree algorithm , do not allow us to deal with observations that are marginal distributions rather than single values .instead we use the iterative proportional fitting procedure ( ipfp ) , with minor modifications to better handle cases where the spectra are inconsistent ( not simultaneously achievable under any joint distribution ) .these modifications reassign the target spectra to be the average of those encountered when the algorithm oscillates in such circumstances .in this section we present results using the above described se - cfm ( =2 ) and ce - cfm ( =2 , =4 , =6 ) methods , on a spectrum prediction task , and then in a metabolite identification task .we used the metlin database , separated into two sets ( see description below ) each containing positive mode , esi - ms / ms spectra from a 6510 q - tof ( agilent technologies ) mass spectrometer , measured at three different collision energies : 10v , 20v and 40v , which we assign to be low , medium and high energy respectively .each set was randomly divided into 10 groups for use within a 10-fold cross validation framework . 1 .* tripeptides * : the metlin database contains data for over 4000 enumerated tripeptides .we randomly selected 2000 of these molecules , then omitted 15 that had four or more rings due to computational resource concerns , leaving 1985 remaining in the set .fragmentation patterns in peptides are reasonably well understood , leading to effective algorithms for identifying peptides from their esi ms / ms data e.g . . however , we think that the size of this dataset , and the fact that it contains so many similar yet different molecules , make it an interesting test case for our algorithms .* metlin metabolites * : we use a set of 1491 non - peptide metabolites from the metlin database .these are a more diverse set covering a much wider range of molecules .an initial set of 1500 were selected randomly .nine were then excluded because they were so much larger than the other molecules ( over 1000 da ) , such that their fragmentation graphs could not be computed in a reasonable amount of time .we also used an additional small validation set , selected because they were measured on a similar mass spectrometer , an agilent 6520 q - tof , but in a different laboratory .these were taken from the massbank database .all testing with this set used a model trained for the first cross - fold set of the metlin metabolite data ( of the data ) . 
1 .* massbank metabolites * : this set contains 192 metabolites taken from the washington state university submission to the massbank database .all molecules from this submission were included that had ms2 spectra with collision energies 10v , 20v and 40v , in order to provide a good match with the metlin data .files containing test molecule lists and assigned cross validation groups are provided as supplementary data .the chemical features used in these experiments were as follows .note that the terms _ ion root atom _ and _ neutral loss ( nl ) root atom _ refer to the atoms connected to the broken bond(s ) on the ion and neutral loss sides respectively cf . , fig .[ fig : breaklabel ] . * _ break atom pair _ : indicators for the pair of ion and neutral loss root atoms , each from \{c , n , o , p , s , other } , included separately for those in a non - ring break vs those in a ring break e.g .[ fig : breaklabel]a ) : would be non - ring c - c .( 72 features ) * _ ion and nl root paths _ indicators for all paths of length 2 and 3 starting at the respective root atoms and stepping away from the break .each is an ordered double or triple from \{c , n , o , p , s , other } , taken separately for rings and non - rings .two more features indicate no paths of length 2 and 3 respectively e.g .[ fig : breaklabel]a ) : the ion root paths are c - o , c - n and c - n - c .( 2020 features ) . * _ gasteiger charges _ : indicators for the quantised pair of gasteiger charges for the ion and nl root atoms in the original unbroken molecule .( 288 features ) * _ hydrogen movement _ : indicator for how many hydrogens switched sides of the break and in which direction i.e .ion to nl ( - ) or nl to ion(+ ) \{0,,,,,other}. ( 10 features ) * _ ring features _ : properties of a broken ring .aromatic or not ?multiple ring system ?size \{3,4,5,6 , other } ?distance between the broken bonds \{1,2,3,4 + } ?[ fig : breaklabel]b ) is a break of a single aromatic ring of size 6 at distance 3 .( 12 features ) . of these 2402 features ,few take non - zero values for any given break .many are never encountered in our data set , in which case their corresponding parameters are set immediately to 0 .we also append _quadratic features _, containing all 2,881,200 pair - wise combinations of the above features , excluding the additional bias term .again , most are never encountered , so their parameters are set to 0 . for each cross validation fold , and the massbank validation set , a model ( trained as above ) , was used to predict a low , medium and high energy spectra for each molecule in the test set .the resulting marginal distributions for the peak variables are a mixture of gaussian distributions .we take the means and weights of these gaussians as our peak mass and intensity values .since all fragments in the fragmentation graph of a molecule have non - zero probabilities in the marginal distribution , it is necessary to place a cut - off on the intensity values to select only the most likely peaks . 
here, we use a post - processing step that removes peaks with low probability , keeping as many of the highest peaks as required to form at least 80% of the total intensity sum .we also set limits on the number of selected peaks to be at least 5 and at most 30 .this ensures that more peaks are included than just the precursor ion , and also prevents spectra occurring that have large numbers of very small peaks .these values were selected arbitrarily , but post - analysis suggests that they are reasonable ( see supplementary data ) .when matching peaks we use a mass tolerance set to the larger of 10 ppm and 0.01 da ( depending on the peak mass ) , and set the observation parameter to be one third of this value .[ [ metrics ] ] metrics + + + + + + + we consider a peak in the predicted ms / ms spectrum to match a peak in the measured ms / ms spectrum if their masses are within the mass tolerance above .we use the following metrics : 1 .* weighted recall * : the percentage of the total peak intensity in the measured spectrum with a matching peak in the predicted spectrum : \hspace{2pt}\div \hspace{-10pt}\sum\limits_{(m , h)\in s_{m}}\hspace{-12pt}h ] .* recall * : the percentage of peaks in the measured spectrum that have a matching peak in the predicted spectrum : .* precision * : the percentage of peaks in the predicted spectrum that have a matching peak in the measured spectrum : .* jaccard score * : .the intensity weighted metrics were included because the unweighted precision and recall values can be misleading in the presence of low - level noise e.g . when there are many small peaks in the measured spectrum .the weighted metrics place a greater importance on matching higher intensity peaks , and therefore give a better indication of how much of a spectrum has been matched .however , these weighted metrics can also be susceptible to an over - emphasis of just one or two peaks , and in particular of the peak corresponding to the precursor ion .consequently , we think it is informative to consider both weighted and non - weighted metrics for recall and precision .[ [ models - for - comparison ] ] models for comparison + + + + + + + + + + + + + + + + + + + + + : the pre - existing methods , e.g .metfrag , fingerid do not output a predicted spectrum , but skip directly to metabolite identification .so , instead we compare against : * * full enumeration * : this model considers the predicted spectrum to be one that enumerates all possible fragments in the molecule s fragmentation tree with uniform intensity values . * * heuristic * ( tripeptides only ) : this model enumerates known peptide fragmentations as described by , including , , , , , and immonium ions . [ [ results ] ] results + + + + + + + : the results are presented in figure [ fig : metricspeptides ] . for all three data sets , se - cfm and ce - cfm obtain several orders of magnitude better precision and jaccard scores than the full enumerations of possible peaks .there is a corresponding loss of recall .however , if we take into account the intensity of the measured peaks , by considering the weighted recall scores , we see that our methods perform well on the more important , higher intensity peaks .more than 75% of the total peak intensity in the tripeptide spectra , and approximately 60% of the total peak intensity in the metabolite spectra , were predicted .the results presented in figure [ fig : metricspeptides ] show scores averaged across the three energy levels for each molecule . 
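the peak selection rule described earlier in this section can be sketched compactly; the 80% intensity fraction and the 5-30 peak limits are the values stated above, while the data representation is an illustrative choice.

```python
# sketch of the post-processing step: keep the highest-probability peaks until they
# account for at least 80% of the total intensity, with between 5 and 30 peaks kept.
def select_peaks(peaks, frac=0.80, min_peaks=5, max_peaks=30):
    """peaks: list of (mass, intensity); returns the retained subset, sorted by mass."""
    ordered = sorted(peaks, key=lambda p: p[1], reverse=True)
    total = sum(h for _, h in ordered)
    kept, cum = [], 0.0
    for m, h in ordered:
        if len(kept) >= max_peaks:
            break
        if cum >= frac * total and len(kept) >= min_peaks:
            break
        kept.append((m, h))
        cum += h
    return sorted(kept)
```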
if we consider the results for the energy levels separately ( see supplementary data ) , we find that the low and medium energy results are much better for all methods we assessed . for example , in the case of the low energy spectra , the weighted recall scores for se - cfm are 78% , 73% and 81% for the tripeptide , metlin metabolite and massbank metabolite data sets respectively , as compared to 73% , 29% and 37% respectively for the high energy spectra .the poorer high energy spectra results may be due to increased noise and a lower predictability of events at the higher collision energies .another possible explanation is that the even - electron rule and other assumptions listed in section [ statespace ] may be less reliable when there is more energy in the system . in the case of the tripeptide data ,our methods achieve higher recall scores and similar rates of precision to that of the heuristic model of known fragmentation mechanisms , resulting in improved jaccard scores .since peptide fragmentation mechanisms are fairly well understood , this result is not intended to suggest that our method should be used in place of current peptide fragmentation programs , but rather to demonstrate that se - cfm and ce - cfm are able to extract fragmentation patterns from data to a similar extent to human experts , given a sufficiently large and consistent data set . like our methods ,the heuristic models also perform better for the lower energy levels , with a weighted recall score of 66% for the low energy , as compared to only 24% for the high energy. unsurprisingly , being a smaller and more diverse data set , the metlin metabolite results are poorer than those of the tripeptides .however the weighted recall for both our methods is still above 60% and the precision and jaccard scores are much higher than for the full enumeration , suggesting that the cfm model is still able to capture some of the common fragmentation trends .the weighted recall and precision results for the massbank metabolites are fairly comparable to those of the metlin metabolites .there is a small loss in the non - weighted recall , however this is probably due to a higher incidence of low - level noise in the massbank data .this results in a small loss in the average jaccard score .however these results demonstrate that the fragmentation trends learned still apply to a significant degree on data collected at a different time in a different laboratory .since this is the first method , to the authors knowledge , capable of predicting intensity values as well as m / z values , we also investigated the accuracy of cfm s predicted intensity values .we found that the pearson correlation coefficients for matched pairs of predicted and measured peaks , were 0.7 , 0.6 and 0.45 for the low , medium and high spectra respectively ( se - cfm and ce - cfm results were not significantly different ) .this indicates a positive , though imperfect correlation .full results and scatter plots are contained in the supplementary data . herewe apply our cfm ms / ms spectrum predictions to a metabolite identification task .for each molecule , we produce two candidate sets via queries to two public databases of chemical entities : 1 .we query the pubchem compound database for all molecules within 5 ppm of the known molecule mass .this simulates the case where little is known about the candidate compound , but the parent ion mass is known with high accuracy .2 . 
we query kegg ( kyoto encyclopedia of genes and genomes ) for all the molecules within 0.5 da of the known molecular mass .this simulates the case where the molecule is thought to be a naturally occurring metabolite , but there is more uncertainty in the target mass range . to conduct this assessment , duplicate candidates were filtered out i.e .those with the same chemical structure , including those that only differ in their stereochemistry .charged molecules and ionic compounds were also removed since the program assumes single fragment , neutral candidates ( to which it will add a proton ) . after filtering , the median number of candidates returned from pubchem was 911 for the tripeptides and 1025 for the metabolites .note that 9 tripeptides and 57 of the metlin metabolites were excluded from this testing because no matching entry was found in pubchem for these molecules .the kegg queries were only carried out for the metabolite data .the median number of candidates returned was 22 , however no matching entry was found in kegg for 833 of the metlin metabolites and 111 of the massbank metabolites .whenever a matching entry could be found , we ranked the candidates according to how well their predicted low , medium and high spectra matched the measured spectra of the test molecule .the ranking score we used was the jaccard score described in section [ sec : spectrumprediction ] .we compared the ranking performance of our se - cfm and ce - cfm methods against those of metfrag and fingerid .we used the same candidate lists for all programs .for candidate molecules with equal scores , we had each program break ties in a uniformly random manner .this was in contrast to the original metfrag code , which used the most pessimistic ranking ; we did not use that approach as it seemed unnecessarily pessimistic .we set the mass tolerances used by metfrag when matching peaks to the same as those used in our method ( maximum of 0.01da and 10ppm ) .metfrag and fingerid only accept one spectrum , so to input the three spectra we first merged them as described by : we took the union of all peaks , and then merge together any peaks within 10 ppm or 0.01 da of one another , retaining the average mass and the maximum intensity of the two .in fingerid we used the linear high resolution mass kernel including both peaks and neutral losses , and trained using the same cross - fold sets as for our own method .overall , we attempted to assess cfm , metfrag and fingerid as fairly as possible , using identical constraints , identical databases and near - identical data input .the results are shown in figure [ fig : rankingresults ] .as seen in this figure , our cfm method achieved substantially better rankings than both the existing methods on all three data sets , for both the pubchem and kegg queries .when querying against kegg , our methods found the correct metabolite as the top - scoring candidate in over 70% of cases for both metabolite sets and almost always ranked the correct candidate in the top 5 . 
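since metfrag and fingerid accept a single spectrum, the three collision-energy spectra are merged as described above; one straightforward reading of that merging procedure is sketched below, with the greedy mass-sorted merge being an illustrative implementation choice.

```python
# sketch of merging low/medium/high-energy spectra into a single peak list:
# take the union of all peaks, then merge peaks closer than max(10 ppm, 0.01 da),
# keeping the average mass and the maximum intensity of the merged pair.
def merge_spectra(spectra, ppm=10.0, abs_da=0.01):
    peaks = sorted(p for s in spectra for p in s)          # union, sorted by mass
    merged = []
    for m, h in peaks:
        if merged:
            pm, ph = merged[-1]
            if abs(m - pm) <= max(abs_da, pm * ppm * 1e-6):
                merged[-1] = ((m + pm) / 2.0, max(h, ph))  # average mass, max intensity
                continue
        merged.append((m, h))
    return merged
```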
in comparison, metfrag ranked the correct metabolite first in approximately 50% of cases for both metabolite sets , and in the top 5 in 89% .fingerid ranked the correct metabolite first in less than 15% of cases .for pubchem , our methods performed well on the tripeptide data , identifying the correct metabolite as the top - scoring candidate in more than 50% of cases and ranking the correct candidate in the top 10 for more than 98% of cases .this is again convincingly better than both metfrag and fingerid , which rank the correct candidate first in less than 35% and 2% of cases respectively . for the metabolite data , ce - cfm and se - cfm were able to identify the correct metabolite in only 12% and 10% of cases respectively , however given that this is from a list of approximately one thousand candidates , this performance is still not bad .once again , it is substantially better than metfrag and fingerid , which correctly identified less than 6% and 1% of cases respectively .our methods rank the correct candidate in the top 10 in more than 40% of cases on both data sets , as compared to metfrag s performance of 31% on the metlin metabolites and 21% on the massbank metabolites .additionally , the top - ranked compound was found to have the correct molecular formula in more than 88% of cases for se - cfm and 90% of cases for ce - cfm , suggesting that both methods mainly fail to distinguish between isomers . while the performance of all three methods ( cfm , metfrag and fingerid ) is not particularly impressive for the pubchem data sets ( i.e. % correct ) we would argue that the pubchem database is generally a poor database choice for anyone wishing to do ms / ms metabolomic studies . with only 1% of its molecules having a biological or natural product origin , one is already dealing with a rather significant challenge of how to eliminate a 100:1 excess of false positives .so we would regard the results from the pubchem assessment as a `` worst - case '' scenario and the results from the kegg assessment as a more typical metabolomics scenario . the results for ce - cfm showed minimal difference when compared to those of se - cfm , casting doubt on whether the additional complexity of ce - cfm is justified .however we think this idea is still interesting as a means for integrating information across energy levels and may yet prove more useful in future work .( see section [ sec : spectrumprediction ] ) , http://cfmid.wishartlab.com we encourage readers to make use of our executables and source code , made available at http://sourceforge.net / projects / cfm - id/.we have proposed a model for the esi - ms / ms fragmentation process and a method for training this model from data .the performance has been benchmarked in cross validation testing on a large molecule set , and further validated using an additional dataset from another laboratory .head - to - head comparisons using multiple data sets under multiple conditions show that the cfm method significantly outperforms existing state - of - the - art methods , and has attained a level that could be useful to experimentalists performing metabolomics studies .bolton e , wang y , thiessen p , bryant s ( 2008 ) pubchem : integrated platform of small molecules and biological activities . 
in: chapter 12 in annual reports in computational chemistry, vol 4, american chemical society, washington dc
galezowska a, harrison mw, herniman jm, skylaris ck, langley gj (2013) a predictive science approach to aid understanding of electrospray ionisation tandem mass spectrometric fragmentation pathways of small molecules using density functional calculations. rapid communications in mass spectrometry: rcm 27(9):964-970
hill aw, mortishire-smith rj (2005) automated assignment of high-resolution collisionally activated dissociation mass spectra using a systematic bond disconnection approach. rapid communications in mass spectrometry 19(21):3111-3118
kangas lj, metz to, isaac g, schrom bt, ginovska-pangovska b, wang l, tan l, lewis rr, miller jh (2012) in silico identification software (isis): a machine learning approach to tandem mass spectral identification of lipids. bioinformatics 28(13):1705-13
levsen k, schiebel hm, terlouw jk, et al. (2007) even-electron ions: a systematic study of the neutral species lost in the dissociation of quasi-molecular ions. journal of mass spectrometry: jms 42:1024-1044
oberacher h, pavlic m, libiseller k, et al. (2009) on the inter-instrument and the inter-laboratory transferability of a tandem mass spectral reference library: 2. optimization and characterization of the search algorithm. journal of mass spectrometry: jms 44(4):494-502
electrospray tandem mass spectrometry ( esi - ms / ms ) is commonly used in high throughput metabolomics . one of the key obstacles to the effective use of this technology is the difficulty in interpreting measured spectra to accurately and efficiently identify metabolites . traditional methods for automated metabolite identification compare the target ms or ms / ms spectrum to the spectra in a reference database , ranking candidates based on the closeness of the match . however , the limited coverage of available databases has led to an interest in computational methods for predicting reference ms / ms spectra from chemical structures . this work proposes a probabilistic generative model for the ms / ms fragmentation process , which we call competitive fragmentation modeling ( cfm ) , and a machine learning approach for learning parameters for this model from ms / ms data . we show that cfm can be used in both an ms / ms spectrum prediction task ( i.e. , predicting the mass spectrum from a chemical structure ) , and in a putative metabolite identification task ( ranking possible structures for a target ms / ms spectrum ) . in the ms / ms spectrum prediction task , cfm shows significantly improved performance when compared to a full enumeration of all peaks corresponding to substructures of the molecule . in the metabolite identification task , cfm obtains substantially better rankings for the correct candidate than existing methods ( metfrag and fingerid ) on tripeptide and metabolite data , when querying pubchem or kegg for candidate structures of similar mass .
50% of the traffic in cellular networks today is video .mounting evidence suggests that video will keep increasing its share of the cellular traffic at an even faster pace .the reason behind this phenomenon is the explosive demand for high quality video streaming from mobile devices ( e.g. , tablets , smart - phones ) .the challenge for mobile network operators ( mnos ) is to offer higher data rates that can keep up with this demand for high quality video .heterogenous cellular networks ( hcns ) , illustrated in fig . [ fig : system - model ] , are envisioned to be one of the solutions to this problem .hcns introduce low power base stations ( bs ) like pico bs ( pbs ) and femto bs ( fbs ) , that form around them picocells and femtocells respectively . lower transmission power from these small cellsreduces the transmission range and allows improved spatial reuse .hence , the first novel feature of hcns is the _ higher wireless capacity _ they offer to the complete macrocell .the second novel feature of hcns is that they can _ lower the use of the backhaul capacity _ by employing caching at the small cell bss .caching enables local access of frequently requested videos and this means lower utilization of the backhaul links between a bs and the video server .these two features of hcns constitute them a central component of the envisioned 5 g cellular network architecture .research for video distribution in hcns has focused primarily on caching , with the objective to reduce the startup playback delay of the video for each user , or lower the costs for the operator . in this paperwe are concerned with the first novel feature of hcns which is the higher wireless capacity .we focus on this topic since hcns introduce a new way for sharing the wireless resources . with the time domain resource partitioning ( tdrp ) mechanism , the mbs shuts off its transmissions for a fraction of the available resources during which the small cells can achieve a higher data rate ( fig .[ fig : problem - model ] ) . during the fraction , there is _ intra - cell _ interference since the mbs transmits simultaneously with the small cells .this technique was recently standardized through the concept of almost blank subframes ( abs ) and regular subframes ( rs ) in 3gpp lte - a under the more general name of enhanced inter - cell interference coordination ( eicic ) .one important detail is that the lte - a standard currently allows the dynamic adaptation of but it does not specify how it should be configured . given the increasing number of video streaming users in cellular networks , and the necessity of tdrp , it is of outmost importance to perform optimally both the configuration of and the allocation of the wireless resources in an hcn .hence , the specific questions that should be answered in this case are : what is the optimal when we have video traffic ? what is the best video quality that each user should receive ? what happens when a subset of the users receive video? currently there are no definite answers to these pressing questions .* related work . *tdrp for hcns is a topic investigated only recently because abs / rs were also very recently standardized in lte - a .the authors in derived the optimal fraction from the available abs and rs resources that each user should be allocated ( a representative rate allocation is illustrated in fig . 
[ fig : problem - model ] ) under a proportionally fair rate allocation ( pfra ) metric .the authors of that work assumed a constant fraction of abs that is configured by the hcn operator .in another recent work reported in , the authors investigated the joint optimization of tdrp and user association ( for traffic offloading ) but with an assumption for equal rate allocation to the associated users . to the best of our knowledgethere is no work that addresses tdrp in hcns for video distribution and streaming .as we already discussed , much of the early research work for video streaming hcns has focused on caching , or exploiting particular features like the density of the small cells . however , these works assumed the availability of a constant fraction of the resources for the mbs and the small cells that is effectively translated to a constant . on the other hand ,multi - user rate allocation for video streaming has been a topic thoroughly investigated the last few years for specific types of wireless networks .the works were primarily motivated from the network utility maximization ( num ) framework . from the category of works that were based on num , the ones that are more related to this paper focused on cellular networks and considered more details of the physical layer ( phy ) .for example the authors in investigated scheduling and resource allocation for a downlink lte system that employs discrete decisions for optimizing the selected video streaming quality . the same problem , but for scalable encoded video, was considered in .another class of works focused on optimizing rate / resource allocation with the objective to improve the playback performance of video clients . in the last works the authors take into consideration recent standardization developments in streaming and in particular dynamic adaptive streaming over http ( dash ) .however , these works do not target hcns and assume access to fixed capacity resources .* contributions .* in this paper , we present contributions on three fronts .first , we present a comprehensive _ joint tdrp , rate allocation , and video quality selection ( jtravqs ) _ optimization framework for video streaming in hcns .the framework identifies the optimal tdrp , the rate allocated to each user , and the video quality description for each user , so as to maximize the aggregate video quality in the hcn .our framework includes additional system - level parameters like the fraction of users that receive video .the np - hard problem is solved with a primal - dual approximation algorithm that provides an asymptotically optimal solution .our solution approach decomposes the problem into simpler subproblems , making them amenable to fast well - known solution algorithms .second , we propose enhancements to the basic optimization framework that allow it to support dash - based video streaming . 
additional system parameters like the buffer contents of individual users , time - dependent user population , and channel capacity with tcp , are taken into account to optimize the the playback performance .third , we present a thorough performance evaluation our main framework against : a ) a video - unaware system that jointly optimizes the tdrp and rate allocation under a pfra metric .b ) from the results of our scheme and that of pfra , we can infer the performance of a system that applies optimized rate allocation and video quality selection ( ravqs ) but with a fixed tdrp .finally , our enhanced system for dash , that considers the content of the playback buffer , is compared against a buffer - aware system that again uses fixed resources .* main results . *our results reveal that : i ) for video streaming tdrp should be more aggressive in favor of the small cells when compared to tdrp optimization under a pfra metric . in particular even for 4 small cells and 100 users , the optimal should be nearly 22.2% higher than the optimal under pfra .video quality is improved by a factor of 50%-70% for this scenario .ii ) using a fixed _ but still optimal tdrp under a pfra metric _ , and then performing a ravqs optimization as an afterthought , is still suboptimal .in particular for a population of 100 users and 4 small cells , the previous approach leads to an average video quality loss of 18.6% when compared to our approach .iii ) for a dash - based system with a fixed , _ but again optimal tdrp under pfra _ , our optimization has more significant impact .in particular the rebuffering time / events of the clients can be reduced by more than 50% for a static network and 60% for a network with user churn . * paper organization . * the rest of the paper is organized in the following sections .section [ section : system - model ] describes in detail the system model . in section [ section : optimization ]we present the formulation and the solution of the optimization problem we introduce in this paper , while its extension for dash is presented in section [ section : optimization - dash ] .performance evaluation results are presented in section [ section : performance - evaluation ] , and finally we conclude in section [ section : conclusions ] .* network model . * in fig . [ fig : system - model ] we present the network that we study in this paper and it includes a single macrocell with a mbs , the pbss , and the users . each bs in the set communicates with the set of associated users .we also denote with a subset of the users associated to bs that are not optimized in a video - aware fashion .this parameter allows us to investigate the possibility that a fraction of the users are optimized . during the fraction of the abs resourcesall the small cells transmit and interfere with every active user in the network .thus , we consider _ resource reuse _ across bss of the same tier ( pbss in our case ) which is one of the main benefits of small cells since it allows spatial reuse .the aggregate average interference power that user receives is denoted as . during the fraction of the non - blanked resources , or regular subframes , both the mbs and pbss transmit and the aggregate interference power that a user receives is denoted as .* user model .* each bs transmits with unicast streaming video to the similarly denoted user .the users associate to a bs by using an signal - to - interference plus noise ratio ( sinr ) biasing rule , i.e. 
, a user is associated to the small cell , and not the mbs , if the following is true : .this ensures that users are offloaded to the small cells .our primary objective is a static user population similarly to the literature , since we are interested to optimize the system operation within the complete playback duration of the video .however , motivated by recent experimental results that identify slow user variations in the cell throughout the day , we also evaluate our system for this more dynamic scenario . * video streaming and playback model .* without loosing generality we assume that all the bss are assumed to have cached the videos for all the users .now if the video representation that is transmitted to user is indexed by , the average bitrate that must be sustained is where is the size of the -th video representation , is the total playback time of the video and is the duration of playable content received during startup buffering ( time 0 ) .this formulation ensures that the average probability of rebuffering events is zero . in the first part of our optimization in section [ section : optimization ] , this is the condition we adopt since we are interested to reach a decision once for the duration of the video streaming session .however , with dash this formula is revised in section [ section : optimization - dash ] .* channel & phy model .* nodes use a single omni - directional half - duplex antenna .the channel from the -th bs to the -th user is denoted as .the fading coefficients are independent and , i.e. , they are complex gaussian random variables with unit variance and mean equal to that depends on path loss and shadowing effects according to the lte channel model .all the channels are considered to be block - fading rayleigh and quasi - stationary , that is they remain constant for the coherence period of the channel that is equal to the transmission length of the complete phy block .additive white gaussian noise ( awgn ) is assumed at every receiver with variance .the transmission power that the pbs and mbs use is , and respectively .* mcs & cqi . * a modulation and coding scheme ( mcs ) with bits / symbol is used by each bs while its value is determined by each pbs independently and optimally as we will later explain .the set of available mcss is , i.e. , we assume that the most spectral efficient quadrature amplitude modulation ( qam ) mcs is 128-qam .we also assume that users provide only _ average channel quality indicator _ ( cqi )feedback to the bss .modeling the quality - of - experience ( qoe ) of users in video streaming applications is not easy .qoe is affected both by the video signal quality and delay . in this subsection, we define a utility model only for the quality of the video signal while during the analysis of our optimization framework we discuss our approach for minimizing the effects of delay .the main objective of our video quality model is to capture the rate - distortion ( rd ) relationship of different representations of each video stream .this will allow our optimization framework to allocate resources to videos depending on their quality . in this paperwe assume we have the rd information information for each frame that belongs to representation of video and consists of its size in bits and the importance of the frame for the overall reconstruction quality of the video denoted as . 
in practice, is the total decrease in the mean square error ( mse ) distortion that will affect the video if the frame is decoded by the video player .the value of the mse in includes both the distortion that is added when frame is not decoded , and also the frames that have a decoding dependency with .for an i frame includes the of the p and b frames that depend on it . ] hence , the video quality model considers also the possible drift that might occur due to the inability to decode a particular video frame .these values can be obtained easily but only during the offline encoding of the video as discussed in . consequently , the aggregate video quality of a group of video frames indexed by ( also referred to as segment to ensure consistency with dash terminology ) , that belong to representation of video , is the average : this fraction is the average mse reduction of the frames contained in a dash segment or packet , versus their total number . _this formulation is in line with our initial objective since it expresses the `` value '' for a group of frames ._ for a group of segments starting from -th segment until the end of the video , we can similarly characterize the video quality as : for packet - based video , this rd information associated with a packet can be contained in each packet header . in the case of scalable video the information about the importance of a packet is already embedded in the header since it indicates the video layer that the packet belongs . for segment - based dash streaming a media presentation description ( mpd ) file is already used for conveying a subset of this information .hence , the model can support packetized non - scalable , scalable , and segment - based video .the final result of the previous discussion is that a single video for user will be available at the following discrete set of qualities indicating the set of available representations for each user / video .it is important to understand the use of the previous model in our optimization . in our initial framework , where the problem is solved for the complete playback duration of the video , the formulation in is used by setting =0 , i.e. , we use the average quality of the complete video .however , for dash the optimization is solved during a specific time period and captures the video quality of the remaining segments that still need to be communicated . to complete our discussion, we have to recall that our optimization targets a heterogeneous user population where a subset of them do not receive video .when we have elastic flows , or when the users do not participate in the video - aware optimization , then rate allocation is exercised with a pfra metric , i.e. , is generated by taking the logarithm of the communication rate achievable by user .we consider that the bss optimize independently the phy parameters of the point - to - point links , as it is typically done in wireless communication systems . to estimate the average communication rate that each user achieves when it is associated to bs we proceed as follows .the bs receives from each user the average channel gain ] denote the fraction of the abs resources that the pbs allocates to for streaming the video representation .similarly for the rs , we define ] denotes the projection onto the non - negative orthant , and is the step size at iteration .similarly we define the update rules and the subgradients for the remaining dual variables . 
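the projected subgradient step just outlined can be written schematically as follows; solve_primal() and subgradient() are placeholders for the per-bs rate allocation and quality selection subproblems and for the constraint residuals, and the diminishing step size is one common choice rather than the exact rule used in the paper.

```python
# schematic projected (sub)gradient ascent on the dual variables: each dual price is
# moved along its constraint violation and projected onto the non-negative orthant.
# solve_primal() and subgradient() stand in for the per-bs subproblems and residuals
# described in the text; the 1/t step size is an illustrative diminishing rule.
def dual_subgradient_ascent(duals, solve_primal, subgradient, iters=100):
    for t in range(1, iters + 1):
        primal = solve_primal(duals)            # lp rate allocation + knapsack vqs per bs
        step = 1.0 / t                          # diminishing step size (illustrative)
        g = subgradient(primal, duals)          # constraint violations at the current primal point
        duals = [max(0.0, lam + step * gk) for lam, gk in zip(duals, g)]
    return duals
```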
in each iteration , the dual objective is improved using the subgradient update and accordingly the primal relaxed problems are solved again in order to update the primal variables ( which are then used in the subsequent dual objective update ) . for the final step ,the dual variables for constraints of each are provided to the master problem that has to be solved now for the optimal at the central controller ( cc ) of the system .recall that for the master problem we applied primal decomposition .it is thus solved very efficiently by collecting only the resource prices for , i.e , from each pbs in order to form the global subgradient . in practicewe only transmit the local subgradient from each bs .the current value of is updated as follows : } _\text{global subgradient}\big ] ^+ \label{eq : primal_update}\ ] ] now is a vector that can be selected as before in order to control the speed of convergence .the time complexity of the discrete knapsack problem denoted by , when solved with dynamic programming , is polynomial with respect to the size of the problem for each bs , but for bounded knapsack capacity .hence , in our case it is , i.e. , linear .this is because from we notice that the capacity of the knapsack is 1 , in other words only one item ( video representation ) can be selected . is a lp and so polynomial with respect to the number of associated users , i.e. , is .hence , the time complexity of may be better than that of .we conclude that the worst execution time of one iteration of the jtravqs algorithm , as a function of its inputs , can be expressed as : notation have to be multipled with the execution time of the fundamental algorithm operation . ] where is the delay between bs and the cc , corresponds to the execution time of the primal update ( one vector multiplication and one addition ) and is the delay for communicating the new primal update to the bss . in our simulationwe also considered that the backhaul links have the _ worst case _ transmission delay of =60ms .for 100 iterations the total delay until the optimal solution is reached is 6 seconds which means that the calculations have to take place within 4 seconds in order to reach a solution within a 10 second period .this is well within the capabilities of modern processors for solving the discussed lp and ilp . also , each bs communicates only to the cc which constitutes a negligible overhead .the good performance of the algorithm , allow us to investigate its use in shorter time scales next .* motivation . * in a network it is possible that channel conditions and users are more dynamic . in this casethe bitrate of the transmitted video should be adapted .one way to accomplish that is dash . with dash a videois stored as a sequence of short duration ( typically 2 - 10 sec ) video segments .each segment may be available at different sizes , snr qualities , spatial resolutions , frame rates .however , it has been shown that allowing the client to be fully responsible for requesting video segments ( a pull - based system ) , after estimating the variations in the end - to - end throughput , results in significant waste of resources . in this paper ,our optimization framework at the bs is responsible for the choice of the optimal video representation .this is also a realistic option since dash does not specify where video adaptation occurs .* enhanced system model . *we define the term _ slot _ as the period that the problem is solved and its decisions are enforced ( see fig .[ fig : enforcement - of - ra ] ) . 
without loss of generality, we assume that jtravqs-dash is solved during a slot with a duration of 10 seconds. since the problem is solved for every slot, the instance of the problem currently solved is also indexed by the slot. the result is that the jtravqs-dash problem is solved during a slot to calculate the optimal ra and vqs for the next slot. the algorithm requires several iterations as before, which are also indexed as before. regarding the input parameters to jtravqs-dash, we use our estimate of the tcp throughput of each user during the slot, calculated as described earlier, and the set of associated users. this approach for modeling the throughput is consistent with the behavior of dash, which uses tcp for downloading segments. hence, contrary to related work, our approach is more realistic in this respect. the video quality model introduced earlier is used, with the quality now measured over the remaining segments that should be transmitted. let us finally define some minimal additional notation, since the optimization variables must also be indexed by the slot: the selection variable now indicates that in a given slot segments from a particular representation will be transmitted to a user. the same algorithm is used for solving jtravqs-dash, but the two subproblems are adapted.

* dash rate allocation (dashra) problem. * the most important aspect is the re-formulation of the rate constraint, which is now indexed by the slot and packed into the constraint vector. re-writing it for the dash case yields the dashra problem, in which the bitrate term is the average bitrate that must be sustained by the remaining segments of the selected representation, similarly to the initial formulation. the key difference from the initial problem is the buffer parameter: the total duration of the playable video in seconds that a user has in its buffer at the start of the slot, minus the playable video that the slowest user has in its buffer. each user updates its estimate of the playable video by adding the duration of the newly received segments and subtracting the duration played back during the slot. this parameter ensures that users that have received a lower volume of data are effectively prioritized. hence, if the ra decision for a user in the current slot is known, then at the start of the next slot the bs can calculate the updated buffer level, since it knows the result of ra and of course the duration of video that will be played. to summarize, this is effectively a rebuffering constraint that contains the differential buffer information.

* dash client model example. * to explain the playback model for the dash client and the setting of the buffer parameter, let us use a specific example with clients that have different playback buffer contents, as illustrated in fig. [ fig : enforcement - of - ra ]. the number of transmitted segments depends on the value of the allocation. also, in this example we consider the downloading of complete segments for exposition purposes, but our model supports partially downloaded segments. for user 1, assume that it is the client that is lagging behind the rest and that the playable video it has in its buffer at the start of the slot is 0 seconds. hence, at the start of the slot it will rebuffer until it receives the next segment, and after it finishes, the video player enters the playback mode. the buffer update gives 0 + 10 - 10 = 0 seconds, and the differential buffer is also 0. at the start of the slot user 2 has 20 seconds worth of video, while during the slot it will receive two additional segments, leading to a buffer of 20 + 20 - 10 = 30 seconds and a differential buffer of 30. hence, by having allocated more resources to this user, the result is a pre-fetching of data.
for user 3similarly we obtain =10 + 10 - 10=10 and =10 .* dash video quality selection ( dashvqs ) problem .* now the vqs problem is solved by adding one constraint in the problem .we reformulate the problem to that is also solved over the slot : note that for partial downloading , if in the next slot dashvqs identifies that a lower or higher video quality is transmitted , then pre - buffered data are not discarded but the unfinished segment is received . the new decision for the video qualityis enforced when a new segment will be transmitted .the dual variables are updated as before and the sub - gradients are similarly modified based on the new constraint .in this section , we present a comprehensive evaluation of the proposed algorithms comprising our framework through custom matlab simulation . our simulator performs a precise phy - level simulation of wireless packet transmissions . * jtravqs evaluation . *the parameter settings for our simulations are set as follows .downlink mbs and pbs transmit power are equal to 46dbm and 30dbm respectively .distance - dependent path loss is given by , where is the distance between two nodes in km , and the shadowing standard deviation is 8 db .the user speed is 3 kmph ( quasi - static as we already stated ) , and average cqi is provided every 10 minutes . the macrocell area is set to be a circle with radius equal to 1 km .the wireless channel parameters include a channel bandwidth of =20 mhz , noise power spectral density of = watt / hz , while the same rayleigh fading model was used for all the channels .packets of 1500 bytes are transmitted at the phy , while the optimal mcs is calculated according to .the user distribution and picocell locations are random and uniform within the macrocell .we set the biasing threshold to 0 db for all the systems to calculate .the user population increases up to a number of 200 to evaluate the performance in networks that continuously become more dense , consistently with the recent trends . for the pfra system we configured the users to request randomly and uniformly one of the available video representations ,while for jtravqs users request randomly and uniformly one of the available videos .the video content used in the experiments consists of 26 cif ( 352x288 ) , and high definition ( 1920x1080 ) sequences that were encoded with svc h.264 as a single layers .the videos are compressed at 30 fps and different rates ranging from 128 kbps and reaching values mbps .the frame - type patterns were g16b1,g16b3,g16b7,g16b15 , i.e. , there are different numbers of b frames between every two p frames and a gop size is always equal to 16 frames . regarding the presentation of the results , fig . [fig : results2 ] shows the average video quality ( in terms of the representation ) that is delivered to the picocell and macrocell users .for example one data point that has the value 3.2 indicates that on average the users received the quality representation 3.2 .hence , higher values indicate that the users received on average higher video quality representations .the data points in these figures correspond to different values of . also , the data points correspond to the average ( mean ) of all the measurements for 100 randomly generated topologies .the sample variance for this set the measurements is between 0.1 and 0.2 which is fairly small compared to the value of the mean and its difference between all the tested systems .* video quality . 
* for this set of results we present the average video quality of the picocell users versus the average video quality of the users associated to the macrocell ( only from the mbs to its associated users ) for different constant values of to illustrate the impact of different tdrp .the results for all systems can be seen in fig .[ fig : results2](a , b ) for =0.5 .jtravqs is superior when compared to pfra for high user density and low pbs density in fig .[ fig : results2](a ) . as the number of the pbss is increased to 8 in fig .[ fig : results2](b ) , all the systems can achieve higher performance .the reason is that fewer users are associated to each picocell and so a higher communication rate is available for each user under any scheme .so more picocells leads to better results due to the higher available rate per user as expected .another important result is that for constant pbs density ( either 4 or 8) , we have higher gain as the user population grows .the reason is the higher importance of optimal rate allocation , since the rate of a single pbs is shared among several users .for example in the very left data point of fig .[ fig : results2](b ) performance improvement of jtravqs over pfra is 21% for 100 users and 36% for 200 users ) .of course the average video quality is reduced for all systems since more users are present . also note that in the left part of the axis , where all the resources are practically allocated to the picocells ( ) , we observe the maximum possible video quality in the network . in this regime , the performance gap between jtravqs and the other systems is increased as the number of picocells and users is increased .another important result in the same figure , is related to the optimal .it is indicated with a dashed line that is intersected with representative performance curves .this shows that the interpretation of the optimal tdrp with jtravqs , that is denoted as (jtravqs ) , results in higher value for when compared to (pfra ) by 22% ( highlighted with the horizontal arrow ) .also if we assume that the system executes first pfra to calculate the optimal tdrp indicated as (pfra ) , and then perform ravqs with this fixed value , the result is an average quality equal to 4.3 ( the gain is highlighted with the lower vertical arrow ) . however , our complete system calculates the optimal operating point indicated with (jtravqs ) in the figure which gives an average quality equal to 5.1 , a performance difference of 18.6% ( the gain is highlighted with the upper vertical arrow ) .our system offers significant performance increase for =0.5 but the benefits are more important when =0.75 in fig .[ fig : results2](c , d ) .note that for =0.75 the slope of the curve is reduced as the is decreased .the benefit is because we have a higher number of users that can be optimized under jtravqs .also in this case the benefits are even more important when the fraction of the resources that the macrocell uses is below 50% ( left part of the axis ) since this gives more resources to the highly spectral efficient links in the picocells to be used and so a higher communication rate is possible . 
*fraction of optimized users .* now an interesting set of results is obtained for different values of .we notice in fig .[ fig : results6](a ) that as this fraction is reduced , jtravqs essentially degenerates to the pfra system .nevertheless , we still obtain significant benefits even for ratios of around 30% since few users are enough for jtravqs to be able to improve the overall system performance . * primal - dual convergence . * the convergence speed of jtravqs versus the number of iterationsis illustrated in fig .[ fig : results6](b ) .results for the algorithm execution are shown for a specific fixed number of picocells and user population .the results for the primal - dual algorithm used for jtravqs show that the convergence is achieved within 150 iterations . also fig .[ fig : results6](b ) illustrates another important aspect of our system : if the system uses fewer discrete quality representations for each video file , it allows the faster convergence of the algorithm . * jtravqs - dash evaluation .* the performance of jtravqs - dash is evaluated with the same setup as before , that is however augmented when necessary .now we present results for the playback performance : the time that a client is rebuffering in seconds , and the number of rebuffering events per minute . to ensure fairness, we calculate first the optimal solution with jtravqs .then , the minimum video quality for the jtravqs - dash system is set equal to jtravqs .this ensures that jtravqs - dash delivers at least the same video quality and the question is then to evaluate its ability to minimize rebuffering .for static user conditions the results are illustrated in fig .[ fig : results6](c ) .we draw a notched box plot of the rebuffering time for all the clients in the upper part of fig . [fig : results6](c ) , and the number of rebuffering events in the lower part of fig .[ fig : results6](c ) .the notch here marks the 95% confidence interval for the _ median_.all the systems perform well for 20 and 40 users since high capacity is available in the network .however , for higher user densities the buffering time with the baseline jtravqs is increased by a factor that is worse than linear .the same is true for pfra that is slightly worse . with a higher number of 8 pbss , rebuffering timeis improved for jtravqs because of higher available capacity .the same is true for the number of rebuffering events / minute that is a very high number for the first three systems we discussed ( 2 - 3 events in a 10 minute period ) .hence , the higher capacity in the network achieved with 8 pbss , simply delays the inevitable sharp increase but only of the rebuffering time .this result has an interesting interpretation for mnos : with increasing user density , expanding the network with more small cells improves marginally the rebuffering time and practically not at all the rebuffering events .this is in contrast to the video quality that can achieve significant improvement in our earlier plots . for extra gains , solutions with buffer - awarenessare required .better results are obtained for ravqs - dash again with a constant =50% .this is effectively a system configuration that encompasses the main features of the dash - aware streaming literature for single cell networks , e.g. 
, where the rate allocation and video quality are optimized by considering the buffer contents, but the overall communication resources are constant. recall that a tdrp of 50% is optimal under a pfra metric for a population of 100 users and 4 pbss (as illustrated in our earlier figures). buffer-awareness can indeed reduce the rebuffering when compared to the previous systems, while it can also reduce the variations of playback buffering for the users (we have more predictable performance). the proposed jtravqs-dash system illustrates that it can reduce the time spent in rebuffering by over 50% when compared to the previous system, while the number of rebuffering events is reduced even more significantly (1 event/25 min. vs. 1 event/10 min.). hence, for a hcn a fixed tdrp is not the best option even if we design a dash-aware system. also, our overall results indicate that using a fixed tdrp has worse consequences on the rebuffering time/events of dash than on the video quality (e.g., the results illustrated in fig. [ fig : results2 ]). finally, we evaluate our system for a time-varying user population based on results from a real 3g network reported in the literature. we simulate an 8-hour period between 4 pm and 12 am, with a user peak occurring around 9 pm. during this peak, the number of users is nearly 30% higher than the number of users at 4 pm and 12 am. hence, in our simulation we set the user population accordingly (the increase and decrease are approximated as linear in time). in the results in fig. [ fig : results6 ](d) we present on the horizontal axis the peak number of users. the achievable rate for each user is also affected, since tcp shares the communication rate equally among the competing traffic. also, we only compare different flavors of jtravqs-dash, since the previous systems are not designed for a dynamic network. first, we observe again that the systems with a fixed resource partitioning of 50% have worse performance. second, we notice that the rebuffering time is higher when the fraction of optimized users is also high and equal to 0.75. this means that with increasing user density in a network with user churn, optimizing a lower fraction of the users increases the dash playback quality of the optimized users by a significant amount for jtravqs-dash. this is an important result since it provides a tool for an mno to differentiate qoe in terms of rebuffering to various users.

in this paper, we presented a framework for improving the quality of video streaming in a hcn that employs tdrp. tdrp is essential for the efficient operation of hcns, and when high quality video distribution enters the game, efficiency becomes even more important. our framework addressed precisely this challenge, i.e., it ensures optimal and video-aware allocation of resources in hcns that apply tdrp. we formulated this problem as a linear but non-convex program, for which we proposed a primal-dual approximation algorithm. our problem was decomposed into several subproblems that included a convex rate allocation problem and a binary ilp for optimal video quality selection. an extensive performance evaluation under different hcn configurations highlighted the value of our framework for obtaining video quality improvements. another implication of our solution approach is that it can be solved very fast. this allowed us to augment it to support the more challenging case of dash. in this case a significant additional improvement in terms of playback performance was obtained.

s. deb, p. monogioudis, j.
miernik, and j. seymour, `` algorithms for enhanced inter-cell interference coordination (eicic) in lte hetnets, '' _ networking, ieee/acm transactions on _, vol. 22, no. 1, pp. 137-150, feb 2014.
d. kosmanos, a. argyriou, y. liu, l. tassiulas, and s. ci, `` a cooperative protocol for video streaming in dense small cell wireless relay networks, '' _ signal processing : image communication _, vol. 31, pp. 151-160, february 2015.
f. dobrian, v. sekar, a. awan, i. stoica, d. joseph, a. ganjam, j. zhan, and h. zhang, `` understanding the impact of video quality on user engagement, '' in _ proceedings of the acm sigcomm conference _, 2011.
w. wu, a. arefin, r. rivas, k. nahrstedt, r. sheppard, and z. yang, `` quality of experience in distributed interactive multimedia environments : toward a theoretical framework, '' in _ acm multimedia _, 2009.
j. chakareski, j. apostolopoulos, s. wee, w. tian tan, and b. girod, `` rate-distortion hint tracks for adaptive video streaming, '' _ circuits and systems for video technology, ieee transactions on _, vol. 15, no. 10, pp. 1257-1269, oct 2005.
heterogenous cellular networks ( hcn ) introduce small cells within the transmission range of a macrocell . for the efficient operation of hcns it is essential that the high power macrocell shuts off its transmissions for an appropriate amount of time in order for the low power small cells to transmit . this is a mechanism that allows time - domain resource partitioning ( tdrp ) and is critical to be optimized for maximizing the throughput of the complete hcn . in this paper , we investigate video communication in hcns when tdrp is employed . after defining a detailed system model for video streaming in such a hcn , we consider the problem of maximizing the experienced video quality at all the users , by jointly optimizing the tdrp for the hcn , the rate allocated to each specific user , and the selected video quality transmitted to a user . the np - hard problem is solved with a primal - dual approximation algorithm that decomposes the problem into simpler subproblems , making them amenable to fast well - known solution algorithms . consequently , the calculated solution can be enforced in the time scale of real - life video streaming sessions . this last observation motivates the enhancement of the proposed framework to support video delivery with dynamic adaptive streaming over http ( dash ) . our extensive simulation results demonstrate clearly the need for our holistic approach for improving the video quality and playback performance of the video streaming users in hcns . heterogeneous cellular networks , small cells , intra - cell interference , video streaming , video distribution , dash , rate allocation , resource allocation , optimization , 5 g wireless networks .
the sld resolution used in prolog may not be complete or efficient for programs in the presence of recursion .for example , for a recursive definition of the transitive closure of a relation , a query may never terminate under sld resolution if the program contains left - recursion or the graph represented by the relation contains cycles even if no rule is left - recursive . for a natural definition of the fibonacci function, the evaluation of a subgoal under sld resolution spawns an exponential number of subgoals , many of which are variants .the lack of completeness and efficiency in evaluating recursive programs is problematic : novice programmers may lose confidence in writing declarative programs that terminate and real programmers have to reformulate a natural and declarative formulation to avoid these problems , resulting in cluttered programs .tabling is a technique that can get rid of infinite loops for bounded - term - size programs and redundant computations in the execution of recursive programs .the main idea of tabling is to memorize the answers to subgoals and use the answers to resolve their variant descendents .tabling helps narrow the gap between declarative and procedural readings of logic programs .it not only is useful in the problem domains that motivated its birth , such as program analysis , parsing , deductive databases , and theorem proving , but also has been found essential in several other problem domains such as model checking and logic - based probabilistic learning .this idea of caching previously calculated solutions , called _ memoization _ , was first used to speed up the evaluation of functions .oldt is the first resolution mechanism that accommodates the idea of tabling in logic programming and xsb is the first prolog system that successfully supports tabling .tabling has become a practical technique thanks to the availability of large amounts of memory in computers .it has become an embedded feature in a number of other logic programming systems such as b - prolog , mercury , tals , and yap .oldt , and slg alike , is non - linear in the sense that the state of a consumer must be preserved before execution backtracks to its producer .this non - linearity requires freezing stack segments or copying stack segments into a different area before backtracking takes place .linear tabling is an alternative tabling scheme .the main idea of linear tabling is to use iterative computation of looping subgoals rather than suspension and resumption of them as is done in oldt to compute fixpoints .this basic idea dates back to the et * algorithm .the dra method proposed in is based on the same idea but employs different strategies for handling looping subgoals and clauses . in linear tabling , a cluster of inter - dependent subgoals as represented by a _ top - most looping subgoal _is iteratively evaluated until no subgoal in it can produce any new answers .linear tabling is relatively easy to implement on top of a stack machine thanks to its linearity , and is more space efficient than oldt since the states of subgoals need not be preserved .linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals .one decision concerns when answers are consumed and returned .the _ lazy _ strategy postpones the consumption of answers until no answers can be produced .it is in general space efficient because of its locality and is well suited for all - solution search programs . 
the _ eager _strategy , in contrast , prefers answer consumption and return over production .it is well suited for programs with cuts .these two strategies have been compared in slg - wam as two scheduling strategies called _ local _ and _ single - stack _ .this paper gives a comprehensive analysis of these two strategies and compares their performance experimentally .linear tabling relies on iterative evaluation of top - most looping subgoals to compute fixpoints .naive re - evaluation of all looping subgoals may be computationally expensive ._ semi - naive optimization _ is an effective technique used in bottom - up evaluation of datalog programs .it avoids redundant joins by ensuring that the join of the subgoals in the body of each rule must involve at least one new answer produced in the previous round .the impact of semi - naive optimization on top - down evaluation had been unknown before . in this paper, we also propose to introduce semi - naive optimization into linear tabling .we have made efforts to properly tailor semi - naive optimization to linear tabling . in our semi - naive optimization ,answers for each tabled subgoal are divided into three regions as in bottom - up evaluation , but answers are consumed sequentially until exhaustion not incrementally as in bottom - up evaluation so that answers produced in a round are consumed in the same round .we have found that incremental consumption of answers does not fit linear tabling since it may require more iterations to reach fixpoints .moreover , consuming answers incrementally may cause redundant consumption of answers .we further propose a technique called _ early promotion _ of answers to reduce redundant consumption of answers .our benchmarking shows that this technique gives significant speed - ups to some programs .an efficient tabling system has been implemented in b - prolog , in which the lazy strategy is employed by default but the eager strategy can be used through declarations for subgoals that are in the scopes of cuts or are not required to return all the answers .our tabling system not only consumes considerably less stack space than xsb for some programs but also compares favorably well in speed with xsb .the theoretical framework of linear tabling is given in .the main objective of this paper is to propose evaluation strategies and their optimizations for linear tabling .the remainder of the paper is structured as follows : in the next section we define the terms used in this paper . in section 3we give the linear tabling framework and the two answer consumption strategies . in section 4we introduce semi - naive optimization into linear tabling and prove its completeness . in section 5we describe the implementation of our tabling system and also show how to implement semi - naive optimization . in section 6we compare the tabling strategies experimentally , evaluate the effectiveness of semi - naive optimization , and also compare the performance of b - prolog with xsb . in section 7we survey the related work and in section 8 we conclude the paper .in this section we give the definitions of the terms to make this paper as much self - contained as possible .the reader is referred to for a description of sld resolution . 
in this paper, we always assume the top - down strategy for selecting clauses and the left - to - right computation rule .let be a program .tabled predicates in are explicitly declared and all the other predicates are assumed to be non - tabled .a subgoal of a tabled predicate is called a _ tabled subgoal_. tabled predicates are transformed into a form that facilitates execution : each rule ends with a dummy subgoal named where is the head , and each tabled predicate contains a dummy ending rule whose body contains only one subgoal named _check_completion(h)_. for example , given the definition of the transitive closure of a relation , .... : -table p/2 .p(x , y):-p(x , z),e(z , y ) .p(x , y):-e(x , y ) ..... the transformed predicate is as follows : .... p(x , y):-p(x , z),e(z , y),memo(p(x , y ) ) .p(x , y):-e(x , y),memo(p(x , y ) ) .p(x , y):-check_completion(p(x , y ) ) . .... a table is used to record subgoals and their answers . for each subgoal and its variants, there is an entry in the table that stores the state of the subgoal ( e.g. , complete or not ) and an answer table for holding the answers generated for the subgoal .initially , the answer table is empty .let and be two terms with no shared variables .the term _ subsumes _ if there exists a substitution such that = .the two terms and are called _ variants _ if they subsume each other .let be a goal .the first subgoal is called the _ selected subgoal _ of the goal . is _ derived _ from by using a tabled _ answer _ if there exists a unifier such that and . is _ derived _ from by using a rule `` '' if and . is said to be the _ parent _ of , ... , and .the relation _ ancestor _ is defined recursively from the parent relation .a tabled subgoal that occurs first in the construction of an sld tree is called a _ pioneer _ , and all subsequent variants are called _ followers _ of the pioneer .let be a given goal , and be a _ derivation _ where each goal is derived from the goal immediately preceding it .let be a sub - sequence of the derivation where and .the sub - sequence forms a _ loop _ if and are variants . the subgoals and are called _ looping subgoals_. in particular , is called the _ pioneer looping subgoal _ and is called the _ follower looping subgoal _ of the loop . notice that the pioneer and follower looping subgoals are not required to have the ancestor - descendent relationship , and thus a derivation that contains two variant subgoals may not be a _ real _ loop .consider , for example , the goal `` '' where is defined by facts .the derivation `` '' is treated as a loop although the selected subgoal in the second goal is not a descendant of .a subgoal is said to be _ dependent _ on another subgoal if occurs in a derived goal from , i.e. 
, .two subgoals are said to be _inter - dependent _ if they are dependent on each other .inter - dependent subgoals constitute a _ cluster _ , which is called a _ strongly connected component _elsewhere .a subgoal in a cluster is called the _ top - most _ subgoal of the cluster if none of its ancestors is included in the cluster .unless a cluster contains only a single subgoal , its top - most subgoal must also be a looping subgoal .for example , the subgoals at the nodes in the sld tree in figure [ fig : loops ] constitute a cluster and the subgoal p at node 1 is the top - most looping subgoal of the cluster .linear tabling takes a transformed program and a goal , and tries to find a path in the sld tree that leads to an empty goal .the primitive is executed when a tabled subgoal is encountered .just as in sld resolution , linear tabling explores the sld tree in a depth - first fashion , taking special actions when _table_start(a ) _ , _ memo(a ) _ , and _check_completion(a ) _ are encountered .backtracking is done in exactly the same way as in sld resolution . when the current path reaches a dead end , meaning that no action can be taken on the selected subgoal , execution backtracks to the latest previous goal in the path and continues with an alternative branch .when execution backtracks to the top - most looping subgoal of a cluster , however , we can not fail the subgoal even after all the alternative clauses have been tried .in general , the evaluation of a top - most looping subgoal must be iterated until its fixpoint is reached .we call each iteration of a top - most looping subgoal a _ round_. various linear tabling methods can be devised based on the framework . a linear tabling methodcomprises strategies used in the three primitives : _table_start(a ) _ , _ memo(a ) _ , and _ check_completion(a)_. in linear tabling , a pioneer subgoal has two roles : one is to produce answers into the table and the other is to return answers to its parent through its variables .different strategies can be used to produce and return answers .the _ lazy strategy _ gives priority to answer production and the _ eager strategy _ prefers answer consumption over production . in the followingwe define the three primitives in both strategies .the lazy strategy postpones the consumption of answers until no answers can be produced . in concrete , for top - most looping subgoals no answer is returned until they are complete , and for other pioneer subgoals answers are consumed only after all the rules have been tried .this primitive is executed when a tabled subgoal is encountered . the subgoal is registered into the table if it is not registered yet . if s state is _ complete _ meaning that has been completely evaluated before , then is resolved by using the answers in the table . if is a pioneer , meaning that it is encountered for the first time in the current path , then different actions are taken depending on s state .if s state is _ evaluated _ meaning that has occurred before in a different path during the current round , then it is resolved by using answers .otherwise , if has never occurred before during the current round , it is resolved by using rules . in this way, a pioneer subgoal needs to be evaluated only once in each round .if is a follower of some ancestor , meaning that a loop has been encountered , must be an ancestor of under the lazy strategy .] then it is resolved by using the answers in the table .after the answers are exhausted , fails . 
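the dispatch performed by this primitive under the lazy strategy can be summarized schematically as follows; the sketch is written in python purely for illustration, and the table representation and helper names are assumptions, not the actual engine data structures.

....
# schematic summary of the lazy-strategy table_start dispatch described above.

def table_start_lazy(subgoal, table, current_path):
    entry = table.setdefault(subgoal, {"state": None, "answers": []})
    if entry["state"] == "complete":
        return "resolve with tabled answers"         # completely evaluated before
    if subgoal not in current_path:                  # pioneer in the current path
        if entry["state"] == "evaluated":
            return "resolve with tabled answers"     # already evaluated this round
        return "resolve with rules"                  # first occurrence this round
    # follower: a loop has been encountered; consume the tabled answers and then
    # fail, leaving the top-most looping subgoal to be iterated to a fixpoint
    return "resolve with tabled answers, then fail"

print(table_start_lazy("p(a,Y)", {}, current_path=[]))   # -> resolve with rules
....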
failing is unsafe in general since it may have not returned all of its possible answers .for this reason , the top - most looping subgoal of the cluster of needs be iterated until no new answer can be produced .this primitive is executed when an answer is found for the tabled subgoal .if the answer is already in the table , then just fail ; otherwise fail after the answer is added into the table .the failure of _ memo _ postpones the return of answers until all rules have been tried .this primitive is executed when the subgoal is being resolved by using rules and the dummy ending rule is being tried .if has never occurred in a loop , then s state is set to _ complete _ and is failed after all the answers are consumed . if is a top - most looping subgoal , we check if any new answers are produced during the last iteration of the cluster under .if so , is re - evaluated by calling after all the dependent subgoals s states are initialized .otherwise , if no new answer is produced , is resolved by using answers after its state and all its dependent subgoals states are set to _complete_. notice that a top - most looping subgoal does not return any answers until it is complete .if is a looping subgoal but not a top - most one , will be resolved by using answers after its state is set to _evaluated_. notice that s state can not be set to _ complete _ since is contained in a loop whose top - most subgoal has not been completely evaluated . for example , in figure [ fig : loops ] , q reaches its fixpoint only after the top - most looping subgoal p reaches its fixpoint .as described in the definition of , an _ evaluated _ subgoal is never evaluated using rules again in the same round .this optimization is called _ subgoal optimization _ in .if evaluating a subgoal produces some new answers then the top - most looping subgoal will be re - evaluated and so will the subgoal ; and if evaluating a subgoal does not produce any new answer , then evaluating it again in the same round would not produce any new answers either .therefore , the subgoal optimization is safe .consider the following program , where p/2 is tabled , and the query p(a , y0 ) . ....p(x , y):-p(x , z),e(z , y),memo(p(x , y ) ) .( p1 ) p(x , y):-e(x , y),memo(p(x , y ) ) .( p2 ) p(x , y):-check_completion(p(x , y ) ) .( p3 ) e(a , b ) .e(b , c ) ..... the following shows the steps that lead to the production of the first answer : aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : + p1 + 2 : , e(z1,y0),memo(p(a , y0 ) ) + + 1 : + apply p2 + 3 : , memo(p(a , y0 ) ) + apply e(a , b ) + 4 : + add answer p(a , b ) after the answer p(a , b ) is added into the table , memo(p(a , b ) ) fails .the failure forces execution to backtrack to p(a , y0 ) . aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : + apply p3 + 5 : since p(a , y0 ) is a top - most looping subgoal which has not been completely evaluated yet , check_completion(p(a , y0 ) )does not consume the answer in the table but instead starts re - evaluation of the subgoal .aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : + p1 + 6 : , e(z1,y0),memo(p(a , y0 ) ) + answer p(a , b ) + 7 : , memo(p(a , y0 ) ) + e(b , c ) + 8 : when the follower p(a , z1 ) is encountered this time , it consumes the answer p(a , b ) .the current path leads to the second answer p(a , c ) . 
on backtracking ,the goal numbered 6 becomes the current goal .aa = aaa = aaa = aaa = aaa = aaa = aaa 6 : , e(z1,y0),memo(p(a , y0 ) ) + answer p(a , c ) + 9 : , memo(p(a , y0 ) ) goal 9 fails .execution backtracks to the top goal and tries the clause p3 on it .aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : + apply p3 + 10 : since the new answer p(a , c ) is produced in the last round , the top - most looping subgoal p(a , y0 ) needs to be evaluated again .the next round produces no new answer and thus the subgoal s state is set to _ complete_. after that the top - most subgoal returns the answers p(a , b ) and p(a , c ) . under the lazy strategy ,answers are not returned immediately after they are produced but are returned via the table after all clauses are tried .no answer is returned for a top - most looping subgoal until the subgoal is complete .all loops are guaranteed to be real : for any loop where and are variants , must be an ancestor of . because each cluster of inter - dependent subgoals is completely evaluated before any answers are returned to outside of the cluster , the lazy strategy has good locality and is thus suited for finding all solutions .for example , when the subgoal is encountered in the goal `` p(x),p(y ) '' , the subtree for p(x ) must have been explored completely and thus needs not be saved for evaluating p(y ) .the cut operator can not be handled efficiently under the lazy strategy .the goal `` '' produces all the answers for even though only one is needed .the eager strategy prefers answer consumption and return over production . for a pioneer ,answers are used first and rules are used only after all available answers are exhausted , and moreover a new answer is returned to its parent immediately after it is added into the table .the following describes how the three primitives behave under the eager strategy . just as in the lazy strategy , is registered if it is not registered yet . is resolved by using the tabled answers if is complete or is a follower of some former variant subgoal .if is a pioneer , being encountered for the first time in the current round , it is resolved by using answers first , and then rules after all existing answers are exhausted .if the answer is already in the table , then this primitive fails ; otherwise , this primitive succeeds after adding the answer into the table .notice that is returned immediately after it is added into the table .if is not new , then it must have been returned before .if is a top - most looping subgoal , just as in the lazy strategy , we check whether any new answers are produced during the last iteration of .if so , is evaluated again by calling .otherwise , if no new answer is produced , this primitive fails after s and all its dependent subgoals states are set to _complete_. if is a looping subgoal but not a top - most one , this primitive fails after s state is set to _evaluated_. an _ evaluated _ subgoal is never evaluated using rules again in the same round .notice that unlike under the lazy strategy , the primitive never returns any answers under the eager strategy . as described above , all the available answers must have been returned by and by the time is executed .because of the need to re - evaluate a top - most looping subgoal , redundant solutions may be observed for a query .consider , for example , the following program and the query `` p(x),p(y ) '' . ....p(1):-memo(p(1 ) ) .( r1 ) p(2):-memo(p(2 ) ) .( r2 ) p(x):-check_completion(p(x ) ) .( r3 ) .... 
the following derivation steps lead to the return of the first solution ( 1,1 ) for ( x , y ) .aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : , p(y ) + use r1 + 2 : , p(y ) + add answer p(1 ) + 3 : + loop found , use answer p(1 ) + when the subgoal p(y ) is encountered , it is treated as a follower and is resolved using the tabled answer p(1 ) .after that the first solution ( 1,1 ) is returned to the top query . when execution backtracks to p(y ) , it fails since it is a follower and no more answer is available in the table .execution backtracks to p(x ) , which produces and adds the second answer p(2 ) into the table .aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : , p(y ) + use r2 + 4 : , p(y ) + add answer p(2 ) + 5 : + use answer p(1 ) + when p(y ) is encountered this time , there are two answers p(1 ) and p(2 ) in the table .so the next two solutions returned are ( 2,1 ) and ( 2,2 ) . when execution backtracks to goal 1 , the dummy ending rule is applied .aa = aaa = aaa = aaa = aaa = aaa = aaa 1 : , p(y ) + use r3 + 6 : , p(y ) + since new answers are added into the table during this round , the subgoal p(x ) needs to be evaluated again , first using answers and then using rules .the second round produces no answer but returns the four solutions ( 1,1 ) , ( 1,2 ) , ( 2,1 ) and ( 2,2 ) among which only ( 1,2 ) has not been observed before .since answers are returned eagerly , a pioneer and a follower may not have an ancestor - descendant relationship .because of the existence of this kind of _ fake _ loops and the necessity of iterating the evaluation of top - most looping subgoals , redundant solutions may be observed . in the previous example , the solutions ( 1,1 ) , ( 2,1 ) and ( 2,2 ) are each observed twice .provided that the top - most looping subgoal p(x ) did not return the answer p(1 ) again in the second round , the solution ( 1,2 ) would have been lost .the eager strategy is more suited than the lazy strategy for single - solution search . for certain applications such as planningit is unreasonable to find all answers either because the set is infinite or because only one answer is needed . for these applicationsthe eager strategy is more effective than the lazy one .cuts are handled more efficiently under the eager strategy .the basic linear tabling framework described in the previous section does not distinguish between new and old answers .the problem with this naive method is that it redundantly joins answers of subgoals that have been joined in early rounds .semi - naive optimization reduces the redundancy by ensuring that at least one new answer is involved in the join of the answers for each rule . in this section ,we introduce semi - naive optimization into linear tabling and identify sufficient conditions for it to be complete .we also propose a technique called _ early answer promotion _ to further avoid redundant consumption of answers .this optimization works with both the lazy and eager strategies . to make semi - naive optimization possible ,we divide the answer table for each tabled subgoal into three regions : the names of the regions indicate the rounds during which the answers in the regions are produced : _ old _ means that the answers were produced before the previous round , _ previous _ the answers produced during the previous round , and _ current _ the answers produced in the current round .the answers stored in _previous _ and _ current _ are said to be _new_. 
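a minimal sketch of such a three-region answer table is given below (illustration only; the class and method names are assumptions). the promote step corresponds to the answer promotion performed before each round, which is described next.

....
# minimal sketch of an answer table divided into the three regions described
# above; class and method names are assumed for the illustration.

class AnswerTable:
    def __init__(self):
        self.old, self.previous, self.current = [], [], []

    def add(self, answer):
        """table the answer if it is not already present in any region."""
        if answer not in self.old + self.previous + self.current:
            self.current.append(answer)
            return True      # a genuinely new answer was added
        return False

    def new_answers(self):
        """answers said to be new: those in the previous and current regions."""
        return self.previous + self.current

    def promote(self):
        """before each round: previous answers become old, current become previous."""
        self.old += self.previous
        self.previous = self.current
        self.current = []
....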
before each round is started , answers are promoted accordingly : _ previous _ answers become _ old _ and _ current _ answers become _previous_. in our optimization , answers are consumed _sequentially_. for a subgoal , either all the available answers or only new answers are consumed .this is unlike in bottom - up evaluation where answers are consumed _ incrementally _ , i.e. , answers produced in a round are not consumed until the next round .as will be discussed later , incremental consumption of answers as is done in bottom - up evaluation does avoid certain redundant joins but does not fit linear tabling since it may require more rounds to reach fixpoints .a predicate _ calls _ a predicate if : ( 1 ) if occurs in the body of at least one rule in the definition of ( calls _ directly _ ) ; or ( 2 ) does not occur in the body of any rule in the definition of but there exists a predicate in the body of a rule in the definition of that calls ( calls _ indirectly _ ) .the calling relationship constitutes a graph called a _ call graph_. for a given program , we find a level mapping from the predicate symbols in the program to the set of integers to represent the _ call graph _ of the program .let be a level mapping .we extend the notation to assume that for any subgoal of arity . for a given program ,a level mapping represents the _ call graph _ if : for each rule `` '' in the program , iff the predicate of does not call ( either directly or indirectly ) the predicate of , and iff the predicates of and call each other .the level mapping as defined divides predicates in a program into several strata .the predicate at each stratum depends only on those on the lower strata .the level mapping is an abstract representation of the dependence relationship of the subgoals that may occur in execution .if two subgoals and occur in a loop , then it is guaranteed that .let `` '' be a rule in a program and be the level mapping that represents the call graph of the program . is called the _ last depending subgoal _ of the rule if and for .the last depending subgoal is the last subgoal in the body that may depend on the head to become complete .thus , when the rule is re - executed on a subgoal , all the subgoals to the right of that have occurred before must already be complete .let `` '' be a rule in a program and be a level mapping that represents the call graph of the program .if there is no depending subgoal in the body , i.e. , for , then the rule is called a _base rule_. let `` '' be a rule where is the last depending tabled subgoal , and be a subgoal that is being resolved by using the rule in an iteration of a top - most looping subgoal .for a combination of answers of , , and , if has occurred in an early round and the combination does not contain any new answers , then it is safe to let consume new answers only . because is the last depending subgoal ,the subgoals , , and must have been completely evaluated when is re - evaluated .let and be the _ old _ and _ new _ answers of the subgoal , respectively . for a combination of answers of , , and , if the combination does not contain new answers then the join of the combination and must have been done and all possible answers for that can result from the join must have been produced during the previous round because the subgoal has been encountered before . 
therefore only new answers in should be used .base rules need not be considered in the re - evaluation of any subgoals .semi - naive optimization would be unsafe if it were applied to new subgoals that have never been encountered before .the following example illustrates this possibility : .... ?- p(x , y ) .: -table p/2 .p(x , y ) : - p(x , z),q(z , y ) .( c1 ) p(b , c ) : - p(x , y ) .( c2 ) p(a , b ) .( c3 ) : -table q/2 .q(c , d ) : - p(x , y),t(x , y ) .( c4 ) t(a , b ) .( c5 ) .... in the first round of p(x , y ) the answer p(a , b ) is added to the table by c3 , and in the second round the rule c2 produces the answer p(b , c ) by using the answer produced in the first round . in the third round ,the rule c1 generates a new subgoal q(c , y ) after p(x , z ) consumes p(b , c ) .if semi - naive optimization were applied to q(c , y ) , then the subgoal p(x , y ) in c4 could consume only the new answer p(b , c ) and the third answer p(b , d ) would be lost .semi - naive optimization can lower the complexity of evaluation for some programs . consider the following example created by david s. warren : .... : -table p/2 .p(x , y ) : - p(x , z),c(z , a , y ) .p(x , y ) : - p(x , z),c(z , b , y ) .p(x , x ) . .... which detects if a given string represented as facts c( , or ) is a sentence of the regular expression . for a string , the query p(0, )needs rounds to reach the fixpoint . with semi - naive optimization ,the variants of p(x , z ) in the bodies consume only new answers , and therefore the program takes linear time . without semi - naive optimization , however , the program would take time since the variants of p(x , z ) would consume all existing answers . in our semi - naive optimization ,answers produced in the current round are consumed immediately rather than postponed to the next round as in the bottom - up version , and answers are promoted each time a new round is started .this way of consuming and promoting answers may cause certain redundancy . consider the conjunction . assume , , and are the sets of answers in the three regions ( respectively , _ old _ , _ previous _ , and _ current _ ) of the subgoal when is encountered in round .assume also that had been complete before round and is the set of answers .the join is computed for the conjunction in round .assume , , and are the sets of answers in the three regions when is encountered in round i+1 . since answers are promoted before round is started , we have : aa = aaa = aaa = aaa = aaa = aaa = aaa + where denotes the new answers produced for after the conjunction in round . when the conjunction is encountered in round , the following join is computed .aa = aaa = aaa = aaa = aaa = aaa = aaa notice that the join is computed in both round and .we could allow last depending subgoals to consume answers incrementally as is done in bottom - up evaluation , but doing so may require more rounds to reach fixpoints .consider the following example , which is the same as the one shown above but has a different ordering of clauses : .... ?- p(x , y ) .: -table p/2 .p(a , b ) .( c1 ) p(b , c ) : - p(x , y ) . ( c2 )p(x , y ) : - p(x , z),q(z , y ) .( c3 ) : -table q/2 .q(c , d ) : - p(x , y),t(x , y ) . ( c4 )t(a , b ) .( c5 ) .... in the first round , c1 produces the answer p(a , b ) .when c2 is executed , the subgoal in the body can not consume p(a , b ) since it is produced in the current round .similarly , c3 produces no answer either . 
in the second round , p(a , b )is moved to the _ previous _ region , and thus can be consumed .c2 produces a new answer p(b , c ) .when c3 is executed , no answer is produced since p(b , c ) can not be consumed . in the third round , p(a , b )is moved to the _ old _ region , and p(b , c ) is moved to the _ previous _ region .c3 produces the third answer p(b , d ) .the fourth round produces no new answer and confirms the completion of the computation .so in total four rounds are needed to compute the fixpoint .if answers produced in the current round are consumed in the same round , then only two rounds are needed to reach the fixpoint . as discussed above, sequential consumption of answers may cause redundant joins . in this subsection ,we propose a technique called _ early promotion _ of answers to reduce the redundancy .let be the first follower that exhausts its answers in the current round. then all the answers of in the _ current _ region are promoted to the _ previous _ region once being consumed by .consider again the conjunction where is the first follower that exhausts its answers .the answers in the current region are promoted to the _ previous _ region after has consumed all its answers in round . by doing so, the join will not be recomputed in round since must have been promoted to the _ old _ region in round .consider , for example , the following program : .... ?- p(x , y ) .: -table p/2 .p(a , b ) .( c1 ) p(b , c ) : - p(x , y ) .( c2 ) .... before c2 is executed in the first round , p(a , b ) is in the _ current _ region . executing c2produces the second answer p(b , c ) . since the subgoal p(x , y ) in c2 is the first follower that exhausts its answers in the current round , it is qualified to promote its answers .so the answers p(a , b ) and p(b , c ) are moved from the _ current _ region to the _ previous _ region immediately after being consumed by p(x , y ) . as a result, the potential redundant consumption of these answers by p(x , y ) is avoided in the second round since they will all be transferred to the _ old _ region before the second round starts .early promotion does not lose any answers .first note that although answers are tabled in three disjoint regions , all tabled answers will be consumed except for some last depending subgoals that would skip the answers in their _ old _ regions ( see theorem 1 ) .assume , on the contrary , that applying early promotion loses answers .then there must be a last depending subgoal in a rule `` '' and a tabled answer for such that has been moved to the _ old _ region before being consumed by so that will never be consumed by .assume is produced in round by a variant of .we distinguish between the following two cases : 1 .the last depending subgoal is not selected in round . in round , is selected either because is new or some consumes a new answer . by theorem 1, will consume all answers in the three regions , including the answer .otherwise , must be produced by itself or a variant subgoal of that is selected either _ before _ or _ after _ in round .if is produced by itself or _ before _ is selected , then the answer will be consumed by since promoted answers will remain new by the end of the round . if is produced by a variant _ after _ is selected , then the answer can not be promoted because exhausts its answers _ before _ the variant . in this case , the answer will remain new in the next round and will thus be consumed by . 
both of the above two cases contradict our assumption .the proof then concludes .changes to the prolog machine atoam are needed to implement linear tabling . in this sectionwe describe the changes to the data structures and the instruction set . to make the paper self - contained , we first give an overview of the atoam architecture .the atoam uses all the data areas used by the wam .the _ heap _ stores terms created during execution .the register h points to the top of the heap .the _ trail _ stack stores updates that must be undone upon backtracking .the register t points to the top of the trail stack .the _ control _stack stores frames associated with predicate calls .unlike in the wam where arguments are passed through argument registers , arguments in the atoam are passed through stack frames and only one frame is used for each predicate call .each time a predicate is invoked by a call , a frame is placed on top of the local stack unless the frame currently at the top can be reused .frames for different types of predicates have different structures . for standard prolog ,a frame is either _ determinate _ or _ nondeterminate_. a nondeterminate frame is also called a_ choice point_. the register ar points to the current frame and the register b points to the latest _choice point_. a determinate frame has the following structure : & pointer to the parent frame + & continuation program pointer + & bottom of the frame + & top of the frame + & local variables + where btm points to the bottom of the frame , i.e. , the slot for the first argument , and top points to the top of the frame , i.e. , the slot just next to that for the last local variable .the top register points to the next available slot on the stack .the btm slot is not in the original version .this slot is introduced for supporting garbage collection and co - routining .the ar register points to the ar slot of the current frame .arguments and local variables are accessed through offsets with respect to the ar slot .an argument or a local variable is denoted as y(i ) where i is the offset .arguments have positive offsets and local variables have negative offsets .it is the caller s job to place the arguments and fill in the ar , and cp slots .the callee fills in the btm and top slots and initializes the local variables .a choice point frame contains , in addition to the slots in a determinate frame , four slots located between the top slot and local variables : & top of the heap + & top of the trail + & parent choice point + the cpf slot stores the program pointer to continue with when the current branch fails .the slot h points to the top of the heap when the frame is allocated . as in the wam ,a new register , called hb , is used as an alias for b->h .when a variable is bound , it must be trailed if it is older than b or hb . a new data area , called _ table area _ , is introduced for memorizing tabled subgoals and their answers .the _ subgoal table _ is a hash table that stores all the tabled subgoals encountered in execution . for each tabled subgoal and its variants, there is an entry in the table , which is a record containing the following information : + + + + + the field subgoalcopy points to the copy of the subgoal in the table area . in the copy ,all variables are numbered . 
therefore all variants of the subgoal are identical .the field pioneerar points to the frame of the pioneer , which is needed for implementing cuts .when the choice point of a tabled subgoal is cut off before the subgoal reaches completion , the field pioneerar will be set to null . when a variant of the subgoal is encountered again after, the subgoal will be treated as a pioneer .the field state indicates whether the subgoal is a looping subgoal , whether the answer table has been revised , and whether the subgoal is _ complete _ or _evaluated_. when execution backtracks to a top - most looping subgoal , if the _ revised _ bit is set , then another round will be started for the subgoal .a top - most looping subgoal becomes complete if this _ revised _ bit is unset after a round . at that time ,the subgoal and all of its dependent subgoals will be set to _ complete_. as described in [ sub : evaluated ] , an _ evaluated _ subgoal is never evaluated again using rules in each round .the topmostloopingsubgoal field points to the entry for the top - most looping subgoal , and the field dependentsubgoals stores the list of subgoals on which this subgoal depends . when a top - most looping subgoal becomes complete , all of its dependent subgoals turn to complete too .the field answertable points to the answer table for this subgoal , which is also a hash table .hash tables expand dynamically .let g be the pointer to the record for a subgoal in the table .the first answer in the answer table is referenced as ` g->answertable->firstanswer ` and the last answer is referenced as ` g->answertable->lastanswer ` . in the beginning , the answer table is empty and both firstanswer and lastanswer reference a dummy answer .the frame for a tabled predicate contains the following two slots in addition to those slots stored in a choice point frame : + the subgoaltable points to the subgoal table entry , and the currentanswer points to the last answer that has been consumed .the next answer can be reached from this reference on backtracking . when a frame is created , the slot currentanswer is initialized to be ` g->answertable->firstanswer ` where g is the pointer to the record for the tabled subgoal .three new instructions , namely table_start , memo , and check_completion , are introduced into the atoam for encoding the three table primitives .figure [ fig : ins ] shows the compiled code of an example program . ....% : -tabled p/2 .% p(x , y):-p(x , z),e(z , y ) .% p(x , y):-e(x , y ) .p/2 : table_start 2,1 fork r2 para_valuey(2 ) para_var y(-13 ) call p/2 % p(x , z ) para_value y(-13 ) para_value y(1 ) call e/2 % e(z , y ) memo r2 : fork r3 para_value y(2 ) para_value y(1 ) call e/2 % e(x , y ) memo r3 : check_completion p/2 .... the table_start instruction takes two operands : the arity ( 2 ) and the number of local variables ( 1 ) .the fork instruction sets the cpf slot to hold the address to backtrack to on failure .the parameter passing instructions ( para_value and para_var in this example ) pass arguments to the callee s frame .the memo instruction is executed after an answer has been found .the check_completion instruction takes the entrance ( p/2 ) as an operand so that the predicate can be re - entered when it needs re - evaluation . 
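for readability, the subgoal-table record and its answer table described in this section can be pictured as the following python-style structures; the field names follow the text, but the rendering itself is only an illustration of the layout, not the actual c-level implementation.

....
# schematic rendering of the subgoal-table record described above; field names
# follow the text, but this is an illustration, not the actual implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnswerTableRef:
    first_answer: Optional[object] = None   # initially both reference a dummy answer
    last_answer: Optional[object] = None

@dataclass
class SubgoalRecord:
    subgoal_copy: object                               # copy with numbered variables
    pioneer_ar: Optional[object] = None                # frame of the pioneer (for cuts)
    state: set = field(default_factory=set)            # e.g. {"looping", "revised",
                                                       #       "evaluated", "complete"}
    topmost_looping_subgoal: Optional["SubgoalRecord"] = None
    dependent_subgoals: List["SubgoalRecord"] = field(default_factory=list)
    answer_table: AnswerTableRef = field(default_factory=AnswerTableRef)
....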
to implement semi - naive optimization, we add the following two pointers into the record for each tabled subgoal : + where the pointer lastoldanswer points to the last answer in the old region and the pointer lastprevanswer points to the last answer in the previous region .the check_completion instruction resets the pointers for all the tabled subgoals in the current cluster before it starts the next round : .... for each subgoal g in the current cluster { g->lastoldanswer = g->lastprevanswer ; g->lastprevanswer = g->answertable->lastanswer ; } .... the memo instruction is changed so that early promotion of answers is performed if the condition for promotion is met .let g be the pointer to the tabled subgoal .if the subgoal has exhausted all its answers in the table and early promotion has never be done before on the subgoal in the same round , then answers in the current region are promoted to the previous region : .... g->lastprevanswer = g->answertable->lastanswer .... the promoted answers will be moved to the old region before the start of the next round .a bit vector is added into the frame for each tabled predicate to indicate if any new answer has been consumed by any tabled subgoal .semi - naive optimization can be applied only if no tabled subgoal in the predicate has consumed any new answer . a new instruction , called last_depending_tabled_call ,is introduced to encode last depending tabled subgoals . in the example shown in figure [ fig : ins ] , the `` call p/2 '' instruction is changed to `` last_depending_tabled_call p/2 '' to enable semi - naive optimization .the last_depending_tabled_call instruction has the same behavior as the call instruction , but the callee can check the type of the instruction to see if it is invoked by a last depending tabled subgoal .let g be the pointer to the current tabled subgoal .the table_start instruction sets the currentanswer slot of the frame to ` g->lastoldanswer ` so that the subgoal consumes only new answers if : ( 1 ) the parent frame is a tabled frame ; ( 2 ) no bit in the bit vector in the parent frame is set , which means that no tabled subgoal has consumed any new answer ; and ( 3 ) the predicate is invoked by a last_depending_tabled_call instruction .if any of these condition is not satisfied , the currentanswer slot is set to ` g->answertable->firstanswer ` and all the answers will be consumed by the subgoal .we empirically compared the two answer consumption strategies and evaluated the effectiveness of semi - naive optimization .we also compared the performance of b - prolog ( version 6.9 ) with xsb ( version 3.0 ) . 
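the region bookkeeping just described can be summarized by the following python sketch , which reuses the toy answertable and subgoalentry classes from the previous sketch . it illustrates only the pointer manipulations , with invented helper names ; it is not the emulator code .
....
def init_regions(g):
    """When a subgoal entry is created, all three regions are empty and the
    region pointers reference the dummy first answer."""
    g.last_old_answer = g.answer_table.first
    g.last_prev_answer = g.answer_table.first

def reset_regions(cluster):
    """Performed by check_completion for every subgoal of the current
    cluster before another round is started."""
    for g in cluster:
        g.last_old_answer = g.last_prev_answer
        g.last_prev_answer = g.answer_table.last

def memo_with_early_promotion(g, binding, exhausted, promoted_this_round):
    """memo: record an answer; if the subgoal has already exhausted the
    answers in the table and no promotion has happened in this round,
    promote the current region into the previous region."""
    g.answer_table.add(binding)
    if exhausted and not promoted_this_round:
        g.last_prev_answer = g.answer_table.last
        return True                   # promotion happened in this round
    return promoted_this_round

def start_consumption(g, parent_is_tabled, parent_consumed_new, last_depending_call):
    """table_start: pick the answer from which consumption begins."""
    if parent_is_tabled and not parent_consumed_new and last_depending_call:
        return g.last_old_answer      # consume only answers outside the old region
    return g.answer_table.first       # consume every answer in the table

# minimal usage with the toy classes defined earlier
g = SubgoalEntry(("p", (0, 1)))
init_regions(g)
g.answer_table.add((0, 2))
reset_regions([g])                    # end of round: current answers become "previous"
....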
a linux machine with 750mhz intel process and 512 gb ram was used in the experiment .benchmarks from three different sources were used : datalog programs shown in figure [ fig : datalog ] with randomly generated graphs ; the chat benchmark suite ; and a parser , called _ atr _, for the japanese language defined by a grammar of over 860 rules .this section presents the experimental results and reports the statistics to support the results .this section also gives experimental results on the warren s example for which slg as implemented in xsb has lower time complexity than linear tabling when semi - naive optimization ceases to be effective .aa = aaa = aaa = aaa = aaa = aaa = aaa tcl : + + + tcr : + + + tcn : + + + sg : + table [ tab : strategies ] compares the two answer - consumption strategies in terms of speed and stack space efficiencies .the difference of these two strategies in terms of cpu time is small on average .this result implies that for programs with cuts declaring the use of the eager strategy would not cause significant slow - down .the difference in the usage of stack space is more significant than in cpu time .this is because , as discussed before , the lazy strategy has better locality than the eager strategy . &lazy & eager & lazy & eager + tcl & 1 & 1.02 & 1 & 1.00 + tcr & 1 & 0.96 & 1 & 1.00 + tcn & 1 & 0.90 & 1 & 1.00 + sg & 1 & 0.89 & 1 & 1.02 + cs_o & 1 & 1.17 & 1 & 1.36 + cs_r & 1 & 1.09 & 1 & 1.36 + disj & 1 & 1.06 & 1 & 1.41 + gabriel & 1 & 1.08 & 1 & 1.18 + kalah & 1 & 1.17 & 1 & 2.03 + pg & 1 & 2.28 & 1 & 3.59 + peep & 1 & 0.99 & 1 & 2.88 + read & 1 & 0.85 & 1 & 2.22 + atr & 1 & 1.03 & 1 & 1.06 + & 1 & 1.12 & 1 & 1.62 + table [ tab : semi ] shows the effectiveness of semi - naive optimization in gaining speed - ups under both strategies . without this optimization ,the system would consume over 30% more cpu time on average under either strategy .our experiment also indicates that on average over 95% of the gains in speed are attributed to the _ early promotion _ technique .& lazy & eager + tcl & 2.00 & 1.89 + tcr & 1.22 & 1.19 + tcn & 1.68 & 1.74 + sg & 1.22 & 1.51 + cs_o & 1.10 & 1.10 + cs_r & 1.09 & 1.10 + disj & 1.52 & 1.46 + gabriel & 1.32 & 1.15 + kalah & 1.52 & 1.41 + pg & 1.21 & 1.05 + peep & 1.09 & 1.11 + read & 1.98 & 1.27 + atr & 1.00 & 1.00 + & 1.38 & 1.31 + table [ tab : xsb ] compares bp with xsb on time and stack space efficiencies . 
for xsb ,the stack space is the total of the maximum amounts of global , local , trail , choice point , and slg completion stack spaces .the default setting , namely , the slg - wam and the local scheduling strategy , is used .bp is faster than xsb on the datalog programs and the parser but slower than xsb on the chat benchmark suite ; and bp consumes considerably less stack space than xsb on some of the programs ( _ tcr _ , _ tcn _ , _ sg _ , and _ atr _ ) .the results must be interpreted with two differences of the two compared systems taken into account : on the one hand , bp is on average more than twice as fast as xsb for standard prolog programs , and on the other hand the trie data structure used in xsb is far more advanced than hash tables used in bp for managing the table area .it is unclear to what extent each difference contributes to the overall efficiency .the yap implementation of slg - wam is up to twice as fast as xsb on the transitive closure and same - generation benchmarks with both chain and cyclic graphs .this entails that the bp implementation of linear tabling is comparable in speed with the most sophisticated implementation of slg - wam for the datalog benchmarks . &( lazy ) & cpu time & stack space + tcl & 1 & 1.85 & 0.81 + tcr & 1 & 1.46 & 33.41 + tcn & 1 & 1.31 & 32.84 + sg & 1 & 1.47 & 109.12 + cs_o & 1 & 0.37 & 0.57 + cs_r & 1 & 0.35 & 0.73 + disj & 1 & 0.68 & 0.82 + gabriel & 1 & 0.61 & 2.05 + kalah & 1 & 1.00 & 0.58 + pg & 1 & 0.76 & 1.85 + peep & 1 & 0.37 & 2.97 + read & 1 & 0.69 & 11.12 + atr & 1 & 2.26 & 21.24 + the empirical data on the usage of table space are not reported .bp constantly consumes less table space than xsb for the benchmarks . in bp , both subgoal and answer tablesare maintained as dynamic hashtables . in xsb , in contrast , tables are maintained as tries .the usage of table space is independent of the strategies and optimizations .both bp and xsb would consume the same amount of table space if the same data structure were employed .table [ tab : its ] reports the statistics on the maximum ( max its . ) and average ( ave . its . ) numbers of iterations for tabled subgoals to reach their fixpoints .the column # subgoals shows the number of tabled subgoals . while for some programs , the maximum number of iterations performed is high ( e.g. , the maximum number for _ atr _ is 6 ) , the average numbers are quite low .the necessity of re - evaluating looping subgoals has been blamed for the low speed of iteration - based tabling systems .our new findings indicate that re - evaluation is not a dominant factor for the benchmarks .this statistics well explain why an implementation of linear tabling could achieve comparable speed performance with slg - wam for the benchmarks .tcl & 1 & 2 & 2.00 + tcr & 51 & 2 & 1.96 + tcn & 51 & 2 & 1.98 + sg & 153 & 2 & 1.32 + cs_o & 76 & 2 & 1.14 + cs_r & 76 & 2 & 1.16 + disj & 74 & 2 & 1.20 + gabriel & 59 & 2 & 1.20 + kalah & 102 & 3 & 1.24+ pg & 48 & 2 & 1.13 + peep & 49 & 3 & 1.29 + read & 131 & 5 & 1.34 + atr & 7139 & 6 & 1.81 + the following is a slightly changed version of the warren s example which disenables semi - naive optimization : .... : -table p/2 .p(x , y ) : - q(x , z),c(z , a , y ) .p(x , y ) : - q(x , z),c(z , b , y ) .p(x , x ) .q(x , y ) : - p(x , y ) . .... since the last depending subgoals q(x , z ) in p/2 are not tabled , semi - naive optimization can not be applied to p/2 . 
for a string , the query p(0, )needs iterations to reach the fixpoint .since in each iteration the subgoal q(x , z ) is rewritten into p(x , z ) which returns all existing answers , the total time taken is .in contrast , the program takes only time under slg . for the size n=5000, it took bp 3.5 seconds to run the program while xsb only 15 milliseconds .for the original version of the program to which semi - naive optimization is applicable , it took bp only 7 milliseconds .there are three different tabling schemes , namely oldt and slg , cat , and iteration - based tabling including linear tabling and dra .slg is a formalization based on oldt for computing well - founded semantics for general programs with negation .the basic idea of using iterative deepening to compute fixpoints dates back to the et * algorithm . in slg - wam , a consumer fails after it exhausts all the existing answers and its state is preserved by freezing the stack so that it can be reactivated after new answers are generated .the cat approach does not freeze the stack but instead copies the stack segments between the consumer and its producer into a separate area so that backtracking can be done normally .the saved state is reinstalled after a new answer is generated .chat is a hybrid approach that combines slg - wam and cat .linear tabling relies on iterative computation of looping subgoals to compute fixpoints .linear tabling is probably the easiest scheme to implement since no effort is needed to preserve states of consumers and the garbage collector can be kept untouched for tabling .linear tabling is also the most space - efficient scheme since no extra space is needed to save states of consumers .nevertheless , linear tabling without optimization could be computationally more expensive than the other two schemes .the dra method is also iteration based , but it identifies looping clauses dynamically and iterates the execution of looping clauses to compute fixpoints . while in linear tabling iteration is performed on only top - most looping subgoals , in dra iteration is performed on every looping subgoal . in et * , every tabled subgoal is iterated even if it does not occur in a loop .besides the difference in answer consumption strategies and optimizations , the linear tabling scheme described in this paper differs from the original version in that followers fail after they exhaust their answers rather than steal their pioneers choice points .this strategy is originally adopted in the dra method .the two consumption strategies have been compared in xsb as two scheduling strategies .the lazy strategy is called _ local scheduling _ and the eager strategy is called _ single - stack scheduling_. 
another strategy , called _ batched scheduling _ , is similar to local scheduling but top - most looping subgoals do not have to wait until their clusters become complete to return answers .their experimental results indicate that local scheduling constantly outperforms the other two strategies on stack space and can perform asymptotically better than the other two strategies on speed .the superior performance of local scheduling is attributed to the saving of freezing stack segments .although our experiment confirms the good space performance of the lazy strategy , it gives a counterintuitive result that the eager strategy is as fast as the lazy strategy .this result implies that the cost of iterative evaluation is considerably smaller than that of freezing stack segments , and for predicates with cuts the eager strategy can be used without significant slow - down . in our tabling system, different answer consumption strategies can be used for different predicates .the tabling system described in also supports mixed strategies .semi - naive optimization is a fundamental idea for reducing redundancy in bottom - up evaluation of logic database queries . as far as we know , its impact on top - down evaluation had been unknown before .oldt and slg do not need this technique since it is not iterative and the underlying delaying mechanism successfully avoids the repetition of any derivation step .an attempt has been made by guo and gupta to make incremental consumption of tabled answers possible in dra . in their scheme ,answers are also divided into three regions but answers are consumed incrementally as in bottom - up evaluation . since no condition is given for the completeness and no experimental result is reported on the impact of the technique , we are unable to give a detailed comparison .our semi - naive optimization differs from the bottom - up version in two major aspects : firstly , no differentiated rules are used . in the bottom - up versiondifferentiated rules are used to ensure that at least one new answer is involved in the join of answers for each rule .consider , for example , the clause : aa = aaa = aaa = aaa = aaa = aaa = aaa the following two differentiated rules are used in the evaluation instead of the original one : aa = aaa = aaa = aaa = aaa = aaa = aaa + where denotes the new answers produced in the previous round for p. using differentiated rules in top - down evaluation can cause considerable redundancy , especially when the body of a clause contains non - tabled subgoals .the second major difference between our semi - naive optimization and the bottom - up version is that answers in our method are consumed sequentially until exhaustion , not incrementally as in bottom - up evaluation .a tabled subgoal consumes either all available answers or only new answers including answers produced in the current round .neither incremental consumption nor sequential consumption seems satisfactory .incremental consumption avoids redundant joins but may require more rounds to reach fixpoints .in contrast , sequential consumption never need more rounds to reach fixpoints but may cause redundant joins of answers .the early promotion technique alleviates the problem of sequential consumption . 
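the bottom - up version of semi - naive evaluation referred to above can be sketched operationally as follows . the python fragment below evaluates the transitive - closure program bottom - up , using a delta set so that every recursive join involves at least one answer that is new from the previous round , which is exactly the effect the differentiated rules achieve . it is a generic textbook - style illustration , not code taken from b - prolog or from any deductive - database engine .
....
from collections import defaultdict

# Semi-naive bottom-up evaluation of
#     p(X,Y) :- e(X,Y).
#     p(X,Y) :- p(X,Z), e(Z,Y).
# Only the delta (answers new in the previous round) takes part in the
# recursive join, which is what the differentiated rules guarantee.

def transitive_closure_semi_naive(edges):
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
    p = set(edges)           # all answers derived so far for p/2
    delta = set(edges)       # answers that are new in the previous round
    joins = 0
    while delta:
        new = set()
        for x, z in delta:               # every join uses at least one new answer
            for y in succ[z]:
                joins += 1
                if (x, y) not in p:
                    new.add((x, y))
        p |= new
        delta = new                      # the current answers become the next delta
    return p, joins

chain = [(i, i + 1) for i in range(200)]          # a simple chain graph
closure, joins = transitive_closure_semi_naive(chain)
print(len(closure), joins)   # the join count grows with the closure size,
                             # not with (closure size) x (number of rounds)
....
replacing delta by p in the join reproduces naive evaluation , in which every round re - joins all previously derived answers ; this is the redundancy that sequential consumption suffers from and that early answer promotion , discussed next , alleviates .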
by promoting answers early from the _ current _ region to the _ previous _ region, we can considerably reduce the redundancy in joins .semi - naive optimization may lower time complexities in bottom - up evaluation .the same result holds to the top - down version as demonstrated by warren s example .our experimental results show that semi - naive optimization gives an average speed - up of over to linear tabling if answers are promoted early , and almost no speed gain if no answer is promoted early . in linear tabling ,only looping subgoals need to be iteratively evaluated . for non - looping subgoals ,no re - evaluation is necessary and thus semi - naive optimization has no effect at all on the performance .most of the looping subgoals in our chosen benchmarks reach their fixpoints after 2 - 3 iterations . in general , more iterations are needed to reach fixpoints in bottom - up evaluation .in addition , in bottom - up evaluation , the order of the joins can be optimized and no further joins are necessary once a participating set is known to be empty .in contrast , in linear tabling joins are done in strictly chronological order . for a conjunction , the join is computed even if no answer is available for . because of all these factors , semi - naive optimization is not as effective in linear tabling as in bottom - up evaluation . our semi - naive optimization requires the identification of last depending subgoals . for this purpose ,a level mapping is used to represent the call graph of a given program .the use of a level mapping to identify optimizable subgoals is analogous to the idea used in the stratification - based methods for evaluating logic programs . in our level mapping ,only predicate symbols are considered .it is expected that more accurate approximations can be achieved if arguments are considered as well .semi - naive optimization does not solve all the problems of recomputation in linear tabling .recall the warren s example : .... : -table p/2 .p(x , y ) : - p(x , z),c(z , a , y ) .p(x , y ) : - p(x , z),c(z , b , y ) .p(x , x ) ..... 
assume there is a very costly non - tabled subgoal preceding p(x , z ) , then the subgoal has to be executed in each iteration even with semi - naive optimization .this example demonstrates the acuteness of the problem of recomputation because the number of iterations needed to reach the fixpoint is not constant .one treatment would be to table the subgoal to avoid recomputation , as suggested in , but tabling extra predicates can cause other problems such as over consumption of table space .in this paper we have described two answer consumption strategies ( namely , _lazy _ and _ eager _ strategies ) and semi - naive optimization for linear tabling .we have compared the two strategies both qualitatively and quantitatively .our results indicate that , while the lazy strategy has better space efficiency than the eager strategy , the eager strategy is comparable in speed with the lazy strategy .this result implies that for all - solution search programs the lazy strategy should be adopted and for partial - solution search programs including programs with cuts the eager strategy should be used .we have tailored semi - naive optimization to linear tabling and have given sufficient conditions for it to be complete .moreover , we have proposed a technique called _ early answer promotion _ to reduce redundant consumption of answers .our experimental result indicates that semi - naive optimization gives significant speed - ups to some programs .linear tabling has several attractive advantages including its simplicity , ease of implementation , and good space efficiency .early implementations of linear tabling were several times slower than xsb .this paper has demonstrated for the first time that linear tabling with optimization is as competitive as slg on time efficiency as well for the benchmarks .semi - naive optimization does not solve all the problems of recomputation in linear tabling .there are programs for which recomputation can be costly , even leading to higher complexities .the future work is to identify the patterns of such programs and find methods to deal with them .the preliminary results of this article appear in acm ppdp03 and ppdp04 .taisuke sato is supported in part by crest , and yi - dong shen is supported in part by the national natural science foundation of china grants numbered 60673103 and 60421001 .\2001 . a simple scheme for implementing tabled logic programming systems based on dynamic reordering of alternatives . in _ proceedings international conference on logic programming ( iclp)_. lncs 2237 , 181195 . , nielson , h. r. , sun , h. , buchholtz , m. , hansen , r. r. , pilegaard , h. , and seidl , h. 2004 .the succinct solver suite . in _ proc .tools and algorithms for the construction and analysis of systems : 10th international conference ( tacas ) , lncs 2988_. 251265 . \1989 .every logic program has a natural stratification and an iterated least fixed point model . in _proceedings of the eighth acm sigact - sigmod - sigart symposium on principles of database systems_. acm press , 1121 .
recently there has been a growing interest in research on tabling in the logic programming community because of its usefulness in a variety of application domains including program analysis , parsing , deductive databases , theorem proving , model checking , and logic - based probabilistic learning . the main idea of tabling is to memorize the answers to some subgoals and use the answers to resolve subsequent variant subgoals . early resolution mechanisms proposed for tabling such as oldt and slg rely on suspension and resumption of subgoals to compute fixpoints . recently , the iterative approach named linear tabling has received considerable attention because of its simplicity , ease of implementation , and good space efficiency . linear tabling is a framework from which different methods can be derived based on the strategies used in handling looping subgoals . one decision concerns when answers are consumed and returned . this paper describes two strategies , namely the _ lazy _ and _ eager _ strategies , and compares them both qualitatively and quantitatively . the results indicate that , while the lazy strategy has good locality and is well suited for finding all solutions , the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts . linear tabling relies on depth - first iterative deepening rather than suspension to compute fixpoints . each cluster of inter - dependent subgoals , as represented by a top - most looping subgoal , is iteratively evaluated until no subgoal in it can produce any new answers . naive re - evaluation of all looping subgoals , albeit simple , may be computationally unacceptable . in this paper , we also introduce semi - naive optimization , an effective technique employed in bottom - up evaluation of logic programs to avoid redundant joins of answers , into linear tabling . we give the conditions for the technique to be safe ( i.e. , sound and complete ) and propose an optimization technique called _ early answer promotion _ to enhance its effectiveness . benchmarking in b - prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state - of - the - art implementation of slg . keywords : prolog , semi - naive evaluation , recursion , tabling , memoization , linear tabling .
consistency in preferences ( by pairwise comparisons ) can be traced to , published by kendall and smith in 1939 . by 1976 , there were four articles listed in , with the inconsistency in the title . in , the consistency index ( ci ) was introduced .many other definitions followed .all of them attempt to answer the most important question : how far have we departed from the consistent state which is well established by the consistency condition illustrated in fig . [ fig : fig2 ] in this text .briefly , every cycle of three interrelated comparisons : , , and must be reducible to two comparisons , since the third comparison is a result of multiplication or division of these two comparisons .for example , given and , should be equal to . if and are given , is equal to .if and are given , is equal to . no other combination exists for a cycle of three comparisons .if all three ratios are given , inconsistency in such cycle may take place and usually , it does .earlier definitions of inconsistency were based on the total count of inconsistent triads .evidently , such cardinal inconsistency was imprecise .this study proposes to regard the inconsistency in pairwise as a degree or extent of disagreement modeled on the concept of probability measure .our study also demonstrates a problem shown in , where the normalization was not used .the interval ] for inconsistency indicators is not only coming from the occam s razor principle .the section [ sec:3 ] shows that the lack of normalization in leads to problems which this study attempts to correct . in this study, we will follow a business principle `` change a problem into opportunity '' and use it not only to correct but to reason for an important axiom .the fallacy of the definition 3.6 on page 83 in : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ let be a group . a -distance - based inconsistency indicator map ( in abbreviation : a - inconsistency indicator map ) on the group is a function such that , for all , the following conditions are satisfied : 1. ; 2 . 
; 3 ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ leads to a contradiction in the theory presented there .the fallacy in the above definition is evidenced below by using the sequence of triads converging to a consistent matrix when the limit of , a natural number , is the infinity .formally , an inconsistency indicator ( ) is defined for a pc matrix , say m. so , should be used .however , a pc matrix : \ ] ] is reduced to the three values : which generate the entire matrix . for this reason, we will be using as a notation shortcut ( to save on typing and space ) .to make our point , all we need to do is demonstrate just one counter - example ( for any particular distance ) .in particular , our goal is achieved when we construct all triads in the sequence this way so that they have constant inconsistency greater than 0 ( e.g. , one or ) and yet such sequence converges .in fact , it is enough to show the convergence for any sequence ( e.g. , cauchy sequence ) . evidently , is a cauchy sequence .the well - established consistency condition for pc matrix ] .then is a topological space that is not compact , since has no finite subcover .again , the set of all subsets of a geodetic line the plane is not compact for the same reason .let be a non - compact space and let be a point that does not lie in .the next step is to take a non - compact space and modify it to make it compact .the result is a _ compactification _ of ( see , _ e.g. _ , ) .one of the most important properties of normalization to ] is evident in terms of the upper bound , since we can not extend 0 to a negative value . if we extend 1 by the smallest positive value imaginable , say , the multiplication gives : it is worthwhile to notice that every non generative subinterval of ] is the only bounded ( by two numbers excluding ) interval of the maximal size containing the multiplication of two non negative real numbers belonging to it .the zero one infinity rule " is attributed to willem louis van der poel ( a pioneering dutch computer scientist , who is known for designing the zebra computer ) . 
by the above elimination of , our choice for the range of inconsistency indicatorsis reduced to 0 and 1 .acting in this spirit of the probability axioms , we postulate the lower and upper bounds for inconsistency indicator as : without the loss of generality .unlike probability , inconsistency indicators do not need to reach the upper bound value .it reflects the reality that no one can be indefinitely inconsistent with some exceptions such as `` 0 - 1 '' inconsistency indicator with 0 for all consistent triads and 1 otherwise .evidently , for can not ever have a value of 1 since neither nor can be 0 ( are ratios hence must be strictly positive ) .traditionally , most measures are constructed by setting two thresholds , namely , lower and upper bound .the lower bound is nearly always ( if possible ) assumed as 0 .the upper bound is , where for probability , fuzzy sets , rough sets , and more ; for various indicators such as school grades ; for percent values ( hence approximation error and business ) .there are a number of reasons why the interval is not as good as ] is more convenient than .for some inconsistency indicators ( e.g. , introduced in , called here ) , the level of tolerance is set to 1/3 since , and it does not have easy interpretation in .the related concepts in statistics are confidence levels and confidence intervals introduced by neyman in 1937 in . for the inconsistency in pairwise comparisons , the issue of a tolerance levelis of as fundamental importance as the statistical significance in statistics .the lack of `` point of reference '' in was a problem in , where a random inconsistency measure was introduced .it is in the direct contradiction of stevens observations in stating that : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` the assignment can be any consistent rule .the only rule not allowed would be random assignment , for randomness amounts in effect to a nonrule '' ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the interval ] may be the most natural way human brains process / assess concepts of nothing , a part of and total / sum ( which is 1 ) .however , the most important reasoning for us is the following celebrated mathematical theorem .normalizing mappings is well researched and there are many functions mapping onto .for example , generalized logistic function or curve , also known as richards curve , a gompertz curve or gompertz function , named after benjamin gompertz , a sigmoid function , and many other functions are normalized .it is worth mentioning that the inconsistency indicator , proposed in in 1993 , is normalized .recently , it was simplified in to : the equation ( [ eq:7 ] ) has an equivalent form + a different ( and a bit unusual ) type of normalization is needed to prevent a rating scale paradox ( addressed in ) .+ the original title of this section was `` why ] normalization of inconsistency indicators in pairwise comparisons may become one of `` the best practices '' of future knowledge management standards . 
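since the displayed formula referred to as eq . ( [ eq:7 ] ) above did not survive extraction , the following python sketch uses one commonly cited equivalent of the simplified koczkodaj indicator purely as an illustration , together with two standard ways of squashing an unnormalized , distance - like indicator from [ 0 , infinity ) onto [ 0 , 1 ) . all function names are ours , and the exact formula should be checked against the cited source .
....
import math

def kii(x, y, z):
    """Koczkodaj-style inconsistency of the triad (x, y, z), where the
    consistent case is y = x * z.  This is one commonly cited equivalent
    of the simplified formula (treat the exact form as an assumption here).
    The value always lies in [0, 1)."""
    r = y / (x * z)
    return 1.0 - min(r, 1.0 / r)

# 0 exactly for consistent triads; approaches, but never reaches, 1.
print(kii(2.0, 6.0, 3.0))      # 0.0     (6 = 2 * 3)
print(kii(2.0, 5.0, 3.0))      # ~0.1667
print(kii(1.0, 100.0, 1.0))    # 0.99

# An unnormalized, distance-like indicator d in [0, infinity) can always be
# renormalized onto [0, 1); two standard squashing maps are shown below.
def squash_rational(d):
    return d / (1.0 + d)

def squash_logistic(d, k=1.0):
    return 2.0 / (1.0 + math.exp(-k * d)) - 1.0

for d in (0.0, 0.5, 2.0, 10.0):
    print(d, squash_rational(d), squash_logistic(d))
....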
in conclusion, we have learned an important lesson here : when we use applied mathematics , we no longer have a complete freedom of using any term in any way we wish .the mathematical distance can not serve as the inconsistence indicator as we demonstrated here by the use of the relative error .the research has been supported in part by : * the polish ministry of science and higher education * the ministry of education , youth and sports czech republic within the institutional support for long - term development of a research organization in 2017 .00 koczkodaj , w.w . ,pairwise comparisons rating scale paradox , transactions on computational collective intelligence xxii , 9655:19 , 2016 .koczkodaj , w. kulakowski , ligeza , a. , on the quality evaluation of scientific entities in poland supported by consistency - driven pairwise comparisons method , scientometrics , 99:911926 , 2014 .koczkodaj , w.w . ; szybowski , j. ; wajch , e. , inconsistency indicator maps on groups for pairwise comparisons , international journal of approximate reasoning , 69 : 81-90 , 2016 .krantz , s.g ., essentials of topology with applications .crc press , london , uk , 2016 .j. neyman , outline of a theory of statistical estimation based on the classical theory of probability , philosophical transactions of the royal society of london .series a , mathematical and physical sciences vol . 236 , no . 767 ( aug . 30 , 1937 ) ,
in this study , we provide mathematical and practice - driven justification for using [ 0 , 1 ] normalization of inconsistency indicators in pairwise comparisons . the need for normalization and the problems caused by its absence are presented . a new type of paradox of infinity is described . keywords : pairwise comparisons , inconsistency indicator , normalization , decision making .
to study biological ion channels , the poisson - nernst - planck system ( , , , ) is commonly used as a model to describe the ionic flows in open ion channels .the poisson - nernst - planck system is given by ,\\ \vspace{3 mm } \frac{\displaystyle \partial c_p}{\displaystyle\partial t}=\frac{\displaystyle\partial}{\displaystyle\partial x}\,\left[d_p\,(\frac{\displaystyle\partial c_p}{\displaystyle\partial x}+z_p\,c_p\,\frac{\displaystyle\partial \phi}{\displaystyle\partial x})\,\right ] , \\-\varepsilon\,\frac{\displaystyle\partial^2 \phi}{\displaystyle\partial x^2}=z_n\,c_n+z_p\,c_p-\rho , \end{cases}\ ] ] for and . here is the density of anions and is the density of cations ; is the electrostatic potential ; and are the diffusion coefficients ; and are positive integers ; is the permanent charge density ; is a parameter .next we introduce a new type of poisson - nernst - planck system by considering the steric effect ( or size effect ) which occurs due to the fact that each atom within a molecule occupies a certain amount of space : + \frac{\displaystyle 1}{\displaystyle a(x)}\,\frac{\displaystyle\partial}{\displaystyle\partial x}\,\left[\,a(x)\,(g_{nn}\,c_n\,\frac{\displaystyle\partial c_n}{\displaystyle\partial x}+g_{np}\,c_n\,\frac{\displaystyle\partial c_{p}}{\displaystyle\partial x})\,\right],\\ \vspace{3 mm } \frac{\displaystyle \partial c_p}{\displaystyle\partial t}=\frac{\displaystyle \delta}{\displaystyle a(x)}\,\frac{\displaystyle\partial}{\displaystyle\partial x}\,\left[\,a(x)\,(\frac{\displaystyle\partial c_p}{\displaystyle\partial x}+z_p\,c_p\,\frac{\displaystyle\partial \phi}{\displaystyle\partial x})\,\right]+ \frac{\displaystyle 1}{\displaystyle a(x)}\,\frac{\displaystyle\partial}{\displaystyle\partial x}\,\left[\,a(x)\,(g_{pp}\,c_p\,\frac{\displaystyle\partial c_p}{\displaystyle\partial x}+g_{np}\,c_p\,\frac{\displaystyle\partial c_{n}}{\displaystyle\partial x})\,\right ] , \\-\frac{\displaystyle\varepsilon}{\displaystyle a(x)}\frac{\displaystyle\partial}{\displaystyle\partial x}\left ( a(x)\frac{\displaystyle\partial \phi}{\displaystyle\partial x } \right)=z_n\,c_n+z_p\,c_p , \end{cases}\ ] ] for and . here is the density of anions and is the density of cations ; is the electrostatic potential ; is the cross - sectional area of the ion channel at position ; , and are positive constants ; and are positive integers ; and are two parameters . assuming constant cross - sectional area , without loss of generality ,we consider the following two - component _ drift - diffusion system _ where and are densities of the two species and , which are assumed to be nonnegative functions ; is the electric potential ; and are diffusion rates . throughout this paper , , , , , and positive constants ; and are negative constants , unless otherwise specified . is a bounded domain with smooth boundary . when the domain under consideration is extended to the entire space ,becomes we note that , in the absence of , , , and , reduces to the keller - segel type equations keller - segel model is a classical model in chemotaxis introduced by keller and segel ( ) .for the last two decades , there has been considerable literature devoted to the keller - segel model . for the keller - segel model, we refer to the textbook , review papers and references therein . in this paper , we are concerned with stationary solutions to and , i.e. 
with time - independent solutions to the following elliptic systems using the elementary fact that , the first and second equations in can be rewritten as it is readily seen that if we can find , and satisfying the _ algebraic equations _ where and are constants , then such , and automatically form a solution of .a natural question arises as to whether _ any _ solution of also satisfies .it will be shown in proposition [ prop : equivalence of algebraic equations and differential equations ] that the answer is indeed affirmative when certain appropriate boundary conditions are imposed on the solutions , i.e. and where and .for instance , on , or the neumann boundary conditions as a consequence , our problem now turns to establishing the existence of solutions for the _ differential algebraic equations _ ( daes ) : to the best of the authors knowledge , this is the first time such dae approach has been employed to drift - diffusion systems such as . in section [ sec : two species equations ] , we investigate in theorem [ thm : existence of solutions to aes ] the existence and uniqueness of solutions and to under the following hypothesis : * . note that is a system of nonlinear algebraic equations for which _ explicit solutions _ expressed by the form and in general can not exist . due to , the solution and of _ in implicit form _ can be given _uniquely_. with the aid of theorem [ thm : existence of solutions to aes ] , we are finally led to the following semilinear poisson equation where . to establish existence of solutions of under the zero neumann boundary condition delicate properties of the nonlinearity and the solution , are explored in proposition [ prop : alge eqns ] .these important properties turn out to be essential in proving the existence of solutions of by the standard _direct method _ in the calculus of variations .[ thm : main h1 ] assume that holds and and be fixed .coupled with the neumann boundary conditions has a solution .the intuition behind the differential algebraic equation approach we use in obtaining theorem [ thm : main h1 ] is elementary .however , the result is remarkable in that only is needed to ensure the existence of solutions to the elliptic system together with the neumann boundary conditions . on the other hand , under the hypothesis : * , admits _ triple solutions _ ( see section [ sec : triple solutions ] ) .moreover , in this case there may exist infinitely many solutions for .[ thm : main h2 ] assume that holds and and be fixed .coupled with the neumann boundary conditions has either a solution or infinitely many discontinuous solutions .to begin with , we show that under the boundary conditions and , every solution of also solves , as mentioned in introduction .[ prop : equivalence of algebraic equations and differential equations ] the systems of pdes together with the boundary conditions and where and , is equivalent to the system of algebraic equations integration by parts leads to the desired result .indeed , however , =0 gives this shows that , and thus should be a constant independent of . in a similar manner , we can prove that is a constant independent of from .the proof of the converse is trivial .hence the proof is finished .since we have established proposition [ prop : equivalence of algebraic equations and differential equations ] , it is now necessary to investigate existence and uniqueness of solutions to . 
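before the analytical treatment , a quick numerical look at the algebraic system is instructive . the displayed equations above are garbled , so the sketch below assumes an explicit form obtained by formally integrating the flux terms of the stationary drift - diffusion system ( the logarithmic entropy term , the drift term , and the linear steric coupling ) ; all coefficient values are illustrative choices of ours , not parameters from the paper .
....
import numpy as np
from scipy.optimize import fsolve

# Assumed explicit form of the algebraic system (an illustration only):
#   D1*log(u) + D1*zn*phi + g11*u + g12*v = lam1
#   D2*log(v) + D2*zp*phi + g22*v + g12*u = lam2
D1, D2 = 1.0, 1.0
zn, zp = -1, 1
g11, g22, g12 = 2.0, 2.0, 1.0      # g11*g22 > g12**2 : the strong intra-species case
lam1 = lam2 = 0.0

def algebraic_system(w, phi):
    a, b = w                        # u = exp(a), v = exp(b) keeps the densities positive
    u, v = np.exp(a), np.exp(b)
    return [D1 * a + D1 * zn * phi + g11 * u + g12 * v - lam1,
            D2 * b + D2 * zp * phi + g22 * v + g12 * u - lam2]

guess = np.array([0.0, 0.0])
for phi in np.linspace(-3.0, 3.0, 13):
    a, b = fsolve(algebraic_system, guess, args=(phi,))
    guess = np.array([a, b])        # continuation in phi
    print(f"phi={phi:+.2f}  u={np.exp(a):.4f}  v={np.exp(b):.4f}")
# With these coefficients the map phi -> (u, v) comes out single-valued, and
# one density increases while the other decreases monotonically in phi.
....
choosing the inter - species coefficient large enough that the product condition fails makes the continuation fold back on itself , which appears to correspond to the triple - solution scenario discussed in section [ sec : triple solutions ] .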
in particular , the solution of interest takes the form .when the zero neumann boundary conditions on are considered , it is easy to see that these boundary conditions lead to and on .however , note that the converse is not true . since the first two equations of are in divergence form , the total charges are conserved : where is a solution of .it follows from the divergence theorem and the poisson equation in that in the case of the homogeneous neumann boundary condition , we have which is equivalent to the _ electroneutrality condition_. here and stand for the valence numbers of ions and , respectively .[ thm : existence of solutions to aes ] assume . then for any given , there exists a unique solution of moreover , has a unique solution which can be represented implicitly as and are functions .prove that for any given , we first show that the system of algebraic equations has at least one solution . in other words , the two curves and on the - plane has at least one intersection point . to see this, we differentiate with respect to to obtain which is always strictly less than zero because of the term appearing in the first equation of and can not take nonpositive value in . therefore , the profile of on the - plane has the following property : * as , ; * as , ; * is decreasing in . in a similar manner, we have for the second equation of , the profile of on the - plane enjoys the following property : * as , ; * as , ; * is decreasing in . as a consequence, it follows from the property of the profiles of the two curves and , that the two curves in the first quadrant of the - plane intersect at least once .that is , given any , we can find at least one solution which satisfies .we eliminate the possibility of non - uniqueness of solutions to for a given by contradiction .suppose that , contrary to our claim , there exist in the first quadrant of the - plane two distinct solutions and which satisfy for given .denote by ( respectively , ) the slope of the curve ( respectively , ) at .it is easy to observe that , without loss of generality , we may assume that and . according to the intermediate value theorem, there exists for which , where lies between and while lies between and .however , using and , leads to it turns out that the last equation is equivalent to which contradicts .consequently , for given , uniqueness of solutions to follows . by applying the local implicit function theorem at each point which satisfies , we obtain smoothness of the solution .the proof is completed .certain important properties of solutions to are investigated in the next proposition .these properties include non - simultaneousness of vanishing and , monotonicity of and , and monotonicity and convexity of . an important consequence of these properties is , which turns out to play a crucial role in employing the direct method to establish existence of solutions for the semilinear poisson equation .[ prop : alge eqns ] as in theorem [ thm : existence of solutions to aes ] , assume .then is uniquely solvable by the implicit functions .concerning the properties of , we have * * ( non - simultaneous vanishing of and )*. for fixed and which solves , for one of the following holds : + + + it is equivalent to say : there exists so that and can not simultaneously be true ; * * ( monotonicity of and ) * and can be expressed in terms of and , i.e. + + also , and for , i.e. 
is monotonically decreasing in , while is monotonically increasing in .furthermore , and are uniformly bounded for .in addition , we have for some constant ; * * ( monotonicity and convexity of ) * can be expressed by .moreover , which implies competition between and , and we prove by contradiction .assume to the contrary that there exists such that and hold simultaneously .as is sufficiently small , are even smaller and . on the other hand , and have opposite signs since and .therefore , in either can not be balanced with or can not be balanced with .either case leads to a contradiction . to prove , we first differentiate the two equations in one by one with respect to , and obtain two equations in which the unknowns can be viewed as and . solving them gives and as stated in .due to , it is easy to see that and for . using , when , clearly it follows from and that and for . as for the other two cases and , it suffices to consider one of them since they are symmetric .suppose that . for , andlead to and on the other hand , when , estimates of and similar to and are given by and as a consequence , we conclude that there exists a constant such thtat . multiplying the first equation in by and the second equation in by , we obtain two equations .subtraction of the two resulting equations gives an equation in terms of and .implicit differentiation then gives the desired result in .theorem [ thm : existence of solutions to aes ] and proposition [ prop : alge eqns ] lead us to consider the neumann problem for the semilinear poisson equation , i.e. we are now in a position to prove theorem [ thm : main h1 ] .first of all , it can be shown by means of proposition [ prop : alge eqns ] that satisfies and of theorem [ thm : existence of solns to poission eqns with disconti nonlinearity ] . upon using theorems [ thm : existence of solutions to aes ] and [ thm : existence of solns to poission eqns with disconti nonlinearity ], we establish the existence of .applying proposition [ prop : equivalence of algebraic equations and differential equations ] completes the proof of theorem [ thm : main h1 ] .for the case where the hypothesis is assumed we can also establish an existence theorem for . in particular , in this case admits a _ triple solution _ in the sense that for some , we can find _ three _ pairs of solutions .see figure [ fig : u = u(phi ) ] for an example .this is essentially different from the case where holds .[ 1.0 ] as shown in figure [ fig : u = u(phi ) ] , there may exist a _triple solution _ and thus uniqueness no longer holds when is violated , i.e. when .when non - uniqueness occurs , it is readily seen that there must be a such that .due to and , this can occur only when .indeed , it is possible that the numerator ( ) in ( ) as ( ) . however , as we have mentioned in the proof of proposition [ prop : alge eqns ] that it is therefore necessary to find at which vanishes by solving the equations : it turns out that is reduced to a single equation by eliminating and substituting .more precisely , now the question remains to determine the profile of . to this end , we first observe that makes sense only when , where .also , it is readily verified that to find extreme points of , we calculate where , and we remark that the denominator of is always away from since . let the numerator of be zero .then the four roots are , , , and , where , , and are the three roots of .indeed , , , and are three distinct real roots . to show this , we use fan shengjin s method . 
as in , let and the _ discriminant _[ lem : discriminant ] there are three possible cases using the discriminant : * if , then has one real root and two nonreal complex conjugate roots . * if , then has three real roots with one root which is at least of multiplicity 2 . * if , then has three distinct real roots .it can be shown that can not be true under the assumption .using lemma [ lem : discriminant ] , we conclude that the cubic equation has three distinct real roots .due to and , it is easy to see that either or .however , the case can not occur since makes sense only for .for the other case , can not be a extreme point of because of .accordingly , there are _ at most two _ extreme points and .we have by the asymptotic behavior as or , which leads to the fact that the number of extreme points of can only be _odd_. as a consequence , one of and can not be a extreme point of and the other one is _ the _ extreme point of .suppose that is a extreme point of , then so is , which yields a contradiction .therefore , is _ the _ extreme point of .in fact , it can be shown that so that dose not make sense. now a question remains , i.e. , how to determine the sign of ?we see that the maximum of is attained at or .moreover , the criterion for determining the roots of the equation is stated as : * when , has two distinct positive solutions ; * when , has no solutions ; * when , has a unique positive solution ( i.e. ) . noting that as and , we have evaluated at . in other words and has a _ reflection point _ at for some . on the other hand , as , the denominator of ( also ) always keeps it sign since has no solutions .indeed , it is easy to see that , which leads to and .[ thm : g11g22<g12g21 existence of alge soln ] assume that holds and solves then * * ( triple solutions ) * when , there exist such that * * ( and ) takes three distinct values for ; * * ( and ) can be represented uniquely for ; * * ( and ) takes two distinct values at ; * * ( uniqueness of monotone solutions ) * when , and can be represented uniquely for .moreover , and for ; * * ( unique and monotone solutions with inflection points ) * when , and can be represented uniquely for . furthermore , there exists such that * * and for ; * * at .we are able to explain in the following manner . according to theorem [ thm : g11g22<g12g21 existence of alge soln ], the sign of determines the properties , such as the number of solutions , of . here is the largest positive root of and the discriminant as shown in relies on and .consequently , is determined once and are given .now and can be suitably chosen such that anyone of the three cases as described in theorem [ thm : g11g22<g12g21 existence of alge soln ] can occur .[ thm : existence of solns to poission eqns with disconti nonlinearity ] assume that is an open and bounded domain with smooth boundary and * is piecewise continuous and discontinuous at finite points for ; for all ; * there exist constants , such that for and for . then the following neumann problem has a unique weak solution . for ,let as seen previously , it is sufficient to consider the minimizing problem using , we have therefore , there exists such that and we can find a sequence of functions with .now we show that is coercive , that is where . indeed ,when as , we clearly have . on the other hand , when is bounded and as , it follows that also holds . 
to see this , since as and , we conclude that and as .the coercivity condition leads to the fact that there exists a constant such that ( otherwise , the unboundness of together with contradicts ) .it follows that the sequence is bounded in , and there exists a subsequence of with we select a subsequence of which converges in , i.e. since convergence implies pointwise convergence of a subsequence almost everywhere , we have where is a subsequence of .we denote this subsequence obtained by . to find a minimizer of our minimizing problem, it suffices to establish weak lower semicontinuity , i.e. which gives this shows that solves the minimizing problem .now we prove .indeed , is continuous since for all .since a.e . in as , we have a.e .. applying fatou s lemma to obtain yields where we have used the fact that as . in fact , elliptic regularity of weak solutions ensure smoothness of .this completes the proof of the theorem .in this section , the following initial conditions are imposed to determine completely the evolution . since the first two equations of are in divergence form ,total charges are conserved : the means and of the charges and over are defined by respectively .the following two lemmas are crucial in proving theorem [ thm : toward the equilibria ] .we note that csiszr - kullback - pinsker inequality ( references refer to the ones cited in ) comes from information theory .[ lem : logarithmic sobolev inequality ] for , , where , , is the logarithmic sobolev constant , and is a constant depending on and . under certain hypotheses on the initial conditions and the parameters appearing in ,we show that the solution of the initial - boundary value problem tends to and in the sense . more precisely , we have in and , is the poincar constant .if a global - in - time solution of the time - varying problem , , exists , then it tends to the corresponding constant steady - state solution in the -norm as , i.e. furthermore , the time - varying solution converges in the -norm exponentially fast to its mean with explicit rate : where will be specified in the proof , and is the constant in the csiszr - kullback - pinsker inequality ( lemma [ lem : csiszar - kullback inequality ] ) . as in ,we define the _ relative entropy functional _ (t) ] for all . indeed , (t)+\int_{\omega}\,\bar{w}_1\,dx-\int_{\omega}\,u\,dx+ \int_{\omega}\,\bar{w}_2\,dx-\int_{\omega}\,v\,dx \notag\\ = & h[u , v](t ) . \end{aligned}\ ] ] note in particular that , for all . for simplicity of notation ,let and .we calculate the time derivative of ] determined by is a positive constant depending on and .gronwall s inequality yields <h0 e - t } h[u , v](t)\leq e^{-\mathcal{b}\,t}\,h_0,\ ] ] where .thanks to * ( ) * , . since from the csiszr - kullback - pinsker inequality ( lemma [ lem : csiszar - kullback inequality ] ), the -norm of and can be controlled by (t)$ ] . the same decay rate as in is given by this completes the proof of the theorem .
a method based on a differential algebraic equation ( dae ) approach is employed to find stationary solutions of the poisson - nernst - planck equations with steric effects ( pnp - steric equations ) . under appropriate boundary conditions , the equivalence of the pnp - steric equations and a corresponding system of daes is shown . solving this system of daes leads to the existence of stationary solutions of the pnp - steric equations . we show that for a suitable range of the parameters , the steric effect can produce infinitely many discontinuous stationary solutions . moreover , under a stronger intra - species steric effect , we prove that a smooth solution converges to a constant stationary solution .
the set of possible spectra for the sum of two deterministic hermitian matrices and depends in complicated ways on the spectra of and ( see ) . nevertheless , if one adds some randomness to the eigenspaces of then , as becomes large , free probability provides a good understanding of the behavior of the spectrum of this sum .more precisely , set , where is a random unitary matrix distributed according to the haar measure on the unitary group , and suppose that the empirical eigenvalue distributions of and converge weakly to compactly supported distributions and , respectively . building on the groundbreaking result of voiculescu ,speicher proved in the almost sure weak convergence of the empirical eigenvalue distribution of to the free additive convolution .this convolution is again a compactly supported probability measure on .a similar result holds for products of matrices : if are in addition assumed to be nonnegative definite , then the empirical eigenvalue distribution of converges to the free multiplicative convolution , a compactly supported probability measure on .( we recall that and have the same eigenvalue distribution , and that is a commutative operation . ) finally , if and are deterministic unitary matrices , their empirical eigenvalue distributions are supported on the unit circle and the empirical eigenvalue distribution of converges to the free multiplicative convolution , a probability measure supported on .( we refer the reader to for an introduction to free probability theory .we describe later the mechanics of calculating the free convolutions and . )the fact that the empirical eigenvalue distribution of converges weakly to does not mean that all the eigenvalues of are close to the support of this measure .there can be outliers , though they must not affect the limiting empirical eigenvalue distribution .sometimes one can argue that these outliers must in fact exist .for instance , the case when the rank of and its nonzero eigenvalues are fixed is investigated by benaych - georges and nadakuditi in . denote by these fixed eigenvalues .of course , in this case is a point mass at , so the _ limiting _ behavior of the empirical eigenvalue distribution of is not affected by such a matrix .more precisely , the empirical eigenvalue distribution of converges almost surely to the limiting spectral distribution of .one can however detect , among the outlying eigenvalues of , the influence of the eigenvalues of above a certain critical threshold .we use the notation for the eigenvalues of an matrix , repeated according to multiplicity .the cauchy - stieltjes transform of a finite positive borel measure on is given by for outside the support of , and denotes the inverse of this function relative to composition .when the support of is contained in the compact interval ] denotes the conditional expectation onto the von neumann algebra generated by and . the function is referred to as the _ subordination function_. ( this formula is a particular case of biane s result .formula appears in this form in ( * ? ? ?* appendix ) . )the subordination function continues analytically via schwarz reflection through the complement in of the spectrum of .if in distribution as , while a single eigenvalue , common to all of the matrices , stays outside the spectrum of , this eigenvalue will disappear in the large limit , in the sense that it will not influence the spectrum of .thus , the analytic function will not be prevented _ a priori _ from taking the value . 
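as a quick numerical aside , before the heuristic argument is made precise , the outlier phenomenon is easy to observe directly . the following python sketch samples a haar unitary and shows an eigenvalue detaching from the bulk ; the spectra and the spike are illustrative choices of ours , not the data of any particular theorem below .
....
import numpy as np

rng = np.random.default_rng(0)
n, theta = 400, 3.0                    # illustrative matrix size and spike

def haar_unitary(n):
    # QR of a complex Ginibre matrix, with phases fixed, is Haar distributed.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# A_N: one spike theta plus a spectrum spread over [-1, 1];
# B_N: a deterministic spectrum spread over [-1, 1].
a = np.diag(np.concatenate(([theta], np.linspace(-1.0, 1.0, n - 1))))
b = np.diag(np.linspace(-1.0, 1.0, n))

u = haar_unitary(n)
x = a + u @ b @ u.conj().T
eigs = np.linalg.eigvalsh(x)
print("largest eigenvalues:", np.round(np.sort(eigs)[-3:], 3))
# The bulk settles near the support of the free additive convolution of the
# two limiting spectral measures, while one eigenvalue separates from it:
# the outlier generated by the spike theta.
....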
however , if relation were true with and replaced by and , respectively , and with the _ same _ subordination function , then any number in the domain of analyticity of so that must generate a polar singularity for the right - hand side of .therefore , each such must generate a similar singularity for the term on the left - hand side of the same equality , thus necessarily producing an eigenvalue of . while this scenario is not true , we prove that an approximate version does hold .namely , we show that a compression of the matrix ^{-1}+a_n\ ] ] to a subspace is close to , almost surely as .this insight is crucial in our arguments .it follows from our results that a remarkable new phenomenon occurs : a single spike of can generate asymptotically a finite , possibly arbitrarily large , set of outliers for .this arises from the fact that the restriction to the real line of some subordination functions may be many - to - one , that is , with the above notation , the set may have cardinality strictly greater than 1 , unlike the subordination function related to free convolution with a semicircular distribution that was used in .the case of multiplicative perturbations is based on similar ideas , with the subordination function replaced by its multiplicative counterparts ( * ? ? ?* theorems 3.5 and 3.6 ) .in addition to outliers , we investigate the corresponding eigenspaces of .it turns out that the angle between these eigenvectors and the eigenvectors associated to the original spikes is determined by biane s subordination function , this time via its derivative .the paper is organized as follows . in section 2 , we describe in detail the matrix models to be analysed , and state the main results of the paper . in section 3we introduce free convolutions and the analytic transforms involved in their study .we give the functional equations characterising the subordination functions . in section 4we collect and prove a number of of auxiliary results , and in section 5 we prove the main results .we denote by the upper half - plane , by the lower half - plane , and by the unit disc in .the topological boundary of the unit disc is denoted by . for any vector subspace of , denotes the orthogonal projection onto . stands for the set of matrices with complex entries , for the subset of invertible matrices , and for the unitary group .the operator norm of a matrix is , its spectrum is its kernel is , its trace is and its normalized trace is .the eigenvalues of a hermitian matrix are denoted and the probability measure is the empirical eigenvalue distribution of .when is unitary , its eigenvalues are ordered decreasingly according to the size of their principal arguments in . if is a normal matrix , we denote by its spectral measure .thus , if is a borel set , then is the orthogonal projection onto the linear span of all eigenvectors of corresponding to eigenvalues in .the support of a measure on is denoted .given any set , we define its -neighbourhood by as long as there is no risk of confusion , the same notation will be used when and are subsets of .open intervals in and open arcs in are denoted . 
as already seen in section 1 ,the cauchy ( or cauchy - stieltjes ) transform of a finite positive borel measure on is an analytic function defined by iwe only consider finite measures supported or .we denote by the reciprocal of that is , .the moment generating function for is the -transform of is defined as the relevant analytic properties of the transforms above are presented in subsections [ sec : boxplus][sec : boxtimest ] .the _ free additive convolution _ of the borel probability measures and on is denoted by and the _ free multiplicative convolution _ of the borel probability measures and either on or on is denoted by .given , denote by and the _ subordination functions _ corresponding to the free convolution .they are known to be analytic on the complement of .as the name suggests , they satisfy an analytic subordination property in the sense of littlewood : a similar result holds for the multiplicative counterparts of the subordination functions .we have : where and are analytic on .free convolutions are defined in subsections [ sec : boxplus][sec : boxtimest ] , and the subordination functions are defined via functional equations in subsections [ subsubsec : eq+][subsubsec : eqt ] in the following three subsections we describe our models and state the main results . to avoid dealing with too many special cases , we make the following technical assumption , which will be in force for the remainder of the paper , except for remark [ atoms ] . under this assumption , the subordination functions extend continuously to the boundary ( see lemmas [ lem : extension ] , [ lem : extensionx ] , and [ lem : extensiont ] ) .our results however remain substantially valid without this assumption , and we discuss in remark [ atoms ] the relevant modifications . here are the ingredients for constructing the additive matrix model : * two compactly supported borel probability measures and on . * a positive integer and fixed real numbers which do not belong to . * a sequence of deterministic hermitian matrices of size such that * * converges weakly to as ; * * for and , the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as , that is * a positive integer and fixed real numbers which do not belong to . * a sequence of deterministic hermitian matrices of size such that * * converges weakly to as ; * * for and ,the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as . * a sequence of unitary random matrices such that the distribution of is the normalized haar measure on the unitary group .it is known from that the asymptotic eigenvalue distribution of is . in the following statementwe take advantage of the fact , discussed later , that the functions and extend continuously to the real line .the points in satisfying are isolated but they may accumulate to .we denote by and the projection onto the space generated by the eigenvectors corresponding to the spikes of and , respectively .these can also be written as in terms of the spectral measures of and .[ main+ ] with the above notation , set , \cup\left[\bigcup_{j=1}^q \omega_2^{-1}(\{\tau_j\})\right],\ ] ] and let be the subordination functions satisfying ( [ subord1 ] ) .the following results hold almost surely for large : 1 . given , we have 2 .fix a number , let be such that , and set , . then 3 .with and as in part , we have and 4 . 
with and as in part ,suppose in addition that .then analogously , if , then in case and , the result of ( 3 ) above implies the following .let be an orthonormal basis of . then , almost surely for large , we have and for , where is the usual kronecker symbol .thus , assertion ( 4 ) is a strenghtening of ( 3 ) in the special case .here are the ingredients for constructing the multiplicative model : * two compactly supported borel probability measures and on . * a positive integer , and fixed positive numbers which do not belong to . * a sequence of deterministic nonnegative matrices of size that * * converges weakly to as , * * for and the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as .* a positive integer , and fixed positive numbers which do not belong to . * a sequence of deterministic nonnegative matrices of size such that * * converges weakly to as , * * for and the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as . * a sequence of random matrices such that the distribution of is the normalized haar measure on the unitary group .it is known from that the asymptotic eigenvalue distribution of is .the projections and used in the following statement were defined in .[ mainx+ ] with the above notations , let be the subordination functions satisfying _ ( [ subord1 ] ) _ , set , , , and \cup\left[\bigcup_{j=1}^q v_2^{-1}(\{1/\tau_j\})\right].\ ] ] the following results hold almost surely for large : 1. given we have 2 . fix a positive number , let be such that and set , . then 3 . with and as in part , we have and 4 . with and as in part ,suppose in addition that .then analogously , if , then finally , we describe the ingredients for the construction of the multiplicative matrix model with unitary and : * two borel probability measures and on with nonzero first moments such that . * a positive integer and fixed complex numbers which do not belong to and such that * a sequence of deterministic unitary matrices of size such that * * converges weakly to as , * * for and the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as . * a positive integer and fixed complex numbers which do not belong to and such that * a sequence of deterministic unitary matrices of size such that * * converges weakly to as , * * for and the sequence satisfies * * the eigenvalues of which are not equal to some converge uniformly to as . * a sequence of random matrices such that the distribution of is the normalized haar measure on the unitary group .it is known from that the asymptotic eigenvalue distribution of is .when and , the interval consists of those numbers in whose argument differs from by less than . with this convention , theorem [ mainx+ ] holds _ verbatim _ in the unitary case as well .free convolutions arise as natural analogues of classical convolutions in the context of free probability theory . for two borel probability measures and on the real line , one defines the free additive convolution as the distribution of , where and are free self - adjoint random variables with distributions and , respectively . 
similarly ,if both are supported on or on , their free multiplicative convolution is the distribution of the product , where , as before , and are free , positive in the first case , unitary in the second , random variables with distributions and , respectively .the product of two free positive random variables is usually not positive , but it has the same moments as the positive random variables and we refer to for an introduction to free probability theory and to for the definitions and main properties of free convolutions . in this section , we recall the analytic approach developed in to calculate the free convolutions of measures , as well as the analytic subordination property and related results .recall from the definition of the cauchy - stieltjes of a finite positive borel measure on the real line : this function maps to and .conversely , any analytic function for which is finite is of the form for some finite positive borel measure on .when has compact support , the function is also analytic at and ( see ( * ? ?* chapter 3 ) for these results ) .the measure can be recovered from its cauchy - stieltjes transform as a weak limit ( this holds for signed measures as well . )the density of ( the absolutely continuous part of ) relative to lebesgue measure is calculated as for almost every relative to the lebesgue measure .in particular , can be described as the set of those points with the property that can be continued analytically to an open interval such that is real - valued .on the other hand , almost everywhere relative to the singular part of .indeed , these facts follow from the straightforward observation that , is the poisson integral of .see for these aspects of harmonic analysis .it is often convenient to work with the reciprocal cauchy - stieltjes transform , which defines an analytic self - map of the upper half - plane .this function enjoys the following properties : * for any , .if equality holds at one point of , then it holds at all points , and . * in particular, the function is a self - map of unless is a point mass , in which case is a real constant .* if is compactly supported , there exist a real number and a finite positive borel measure on with included in the convex hull of such that conversely , if extends to an analytic real - valued function to the complement in of a compact set , and if , then there exists a compactly supported positive borel measure on so that . the value is determined by . *if and is as in , we have and .equation is a special case of the nevanlinna representation of analytic self - maps of the upper half - plane ( * ? ? ?* chapter 3 ) : where and is a positive finite borel measure on .we identify , , if , then reduces to , with and .the cauchy - stieltjes transform of a compactly supported probability measure is conformal in the neighborhood of , and its functional inverse is meromorphic at zero with principal part .the -transform of is the convergent power series defined by the free additive convolution of two compactly supported probability measures and is another compactly supported probability measure characterized by the identity satisfied by these convergent power series .recall from the definition of the moment - generating function of a borel probability measure on : this function is related to the cauchy - stieltjes transform of via the relation it satisfies the following properties , for which we refer to ( * ? ? ?* section 6 ) : * . 
* and in addition , * in particular ,if is compact and not equal to , then is injective on some neighbourhood of zero in .* is injective on .it is often convenient to work with the eta transform , or boolean cumulant function , it inherits from the following properties : * for all , where takes values in on .moreover , if equality holds for one point in , it holds for all points in , and for any .* and in particular , if is compact and different from , then is injective on some neighbourhood of zero in . *if , is strictly increasing from ] , where should be replaced by if .moreover , is injective on . *conversely , if an analytic function satisfies for all and , then for some borel probability measure on ( * ? ? ?* proposition 2.2 ) .the -transform of a compactly supported measure is the convergent power series defined by where is the inverse of relative to composition .the free multiplicative convolution of two compactly supported probability measures is another compactly supported probability measure characterized by the identity in a neighbourhood of .the analytic transforms involved in the study of multiplicative free convolution on are formally the same ones as in subsection [ sec : boxtimes+ ] , but their analytical properties are different .thus , it satisfies for all .we work almost exclusively with the eta transform , or boolean cumulant function , the following properties of are relevant to our study : * for any , we have .if equality holds at one point in , it holds at all points in , and for any .* and in particular is injective on a neighbourhood of zero in if and only if .* the function continues via schwarz reflection through the set , that is * for almost all points with respect to the absolutely continuous part of ( relative to the haar measure on ) , the nontangential limit of at belongs to , and for almost all points in the complement of the support of the absolutely continuous part of , the nontangential limit of at belongs to .moreover , if has a singular component , then for almost all points with respect to this component , the nontangential limit of at equals one . *conversely , if an analytic function satisfies , then for a unique borel probability measure on ( * ? ? ?* proposition 3.2 ) .when , we define the -transform of as the convergent power series again , the free multiplicative convolution of two probability measures and with nonzero first moments is another probability measure with nonzero first moment characterized by the identity in a neighbourhood of .if both of and have zero first moment , then is the haar ( uniform ) distribution on , see . from now on, we always assume that all our probability measures on have nonzero first moments .the analytic subordination phenomenon for free convolutions , as described in and , was first noted by voiculescu in for free additive convolution of compactly supported probability measures .later , biane extended the result to free additive convolutions of arbitrary probability measures on , and also found a subordination result for multiplicative free convolution .more importantly , he proved the stronger result ( see heuristics in the introduction ) that the conditional expectation of the resolvent of a sum or product of free random variables onto the algebra generated by one of them is in fact also a resolvent . 
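as a concrete numerical aside , the subordination functions can be computed by the fixed - point iteration recalled in the next subsections ; the sketch below ( python , with the same test measures and spike value as in the earlier simulation sketch ) uses the convention that the cauchy transform of mu boxplus nu equals the cauchy transform of mu composed with omega_1 , recovers the density of mu boxplus nu by stieltjes inversion , and predicts the location of the outlier produced by the spike theta = 3 :

```python
import numpy as np

def G(z, atoms, weights):
    """cauchy-stieltjes transform of the discrete measure sum_k weights[k] delta_{atoms[k]}."""
    return np.sum(weights / (z - atoms))

def h(z, m):
    """h_m(z) = 1/G_m(z) - z  (reciprocal cauchy transform minus the identity)."""
    return 1.0 / G(z, *m) - z

def omega1(z, mu, nu, n_iter=1000):
    """subordination function with G_{mu boxplus nu}(z) = G_mu(omega1(z)),
    obtained as the attracting fixed point of  w -> z + h_nu(z + h_mu(w))."""
    w = z
    for _ in range(n_iter):
        w = z + h(z + h(w, mu), nu)
    return w

# mu = (delta_{-1} + delta_{+1})/2 ,  nu = uniform on [-1, 1] (discretized)
mu = (np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
grid = np.linspace(-1.0, 1.0, 2001)
nu = (grid, np.full(grid.size, 1.0 / grid.size))

# density of mu boxplus nu by stieltjes inversion of G_mu(omega1(x + i*eps)),
# with eps a small smoothing parameter
eps = 1e-2
for x in (0.0, 0.5, 1.0, 1.5, 1.9):
    dens = -G(omega1(x + 1j * eps, mu, nu), *mu).imag / np.pi
    print(f"density of mu boxplus nu near {x:+.1f} : {dens:.4f}")

# outlier generated by a spike theta = 3 of the matrix with bulk mu:
# solve omega1(rho) = theta for real rho to the right of the bulk (bisection)
theta = 3.0
lo, hi = 2.05, 10.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if omega1(mid + 1e-6j, mu, nu).real < theta:
        lo = mid
    else:
        hi = mid
print("predicted outlier location :", 0.5 * (lo + hi))
```

the predicted location should match the separated eigenvalue printed by the earlier simulation sketch up to finite - size fluctuations .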
in , voiculescu deduced this property from the fact that such a conditional expectation is a coalgebra morphism for certain coalgebras , and through this observation he extended the subordination property to free convolutions of operator - valued distributions . for our purposes ,considerably less than that is required : we essentially need only complex analytic properties of these functions .given borel probability measures and on , there exist two unique analytic functions such that 1 . , ; 2 . 3 .in particular ( see ) , for any so that is analytic at , is the attracting fixed point of the self - map of defined by a similar statement , with interchanged , holds for .we note that implies that the functions continue analytically accross an interval such that and are real valued if and only if the same is true for . for the sake of providing an intrinsic characterization of the correspondence betweem spikes and outliers , we formalize and slightly strenghten this remark in the following lemma . herewe use the functions defined by ( [ h ] ) .[ lem : extension ] consider two compactly supported borel probability measures and , neither of them a point mass .then the subordination functions and have extensions to with the following properties : 1 . are continuous .if then the functions and continue meromorphically to a neighborhood of , , and . if , then , and if , then .conversely , suppose that continues meromorphically to a neighbourhood of a point and that when for some .if , then is an isolated atom for . in the context of part ( b ) of the above lemma, we note that is analytic around infinity , and .part ( a ) was proved in ( * ? ? ?* theorem 3.3 ) .fix .equation indicates that and must take real values on for some .schwarz reflection implies that and have meromorphic continuations with real values on accross the corresponding intervals .the relation shows that the limit is real for and therefore by the stieltjes inversion formula .in particular , . to conlcude the proof of ,suppose that .it follows from in conjunction with items ( c ) and ( d ) of subsection [ sec : boxplus ] that so that is analytic at .the statement for follows by symmetry .finally , suppose that the hypotheses of ( c ) is satisfied .it was observed in that is also real for .( indeed , if , relation implies and therefore .this relation can only hold when is a point mass , a case which we excluded . ) now , implies that is continuous and real - valued on , and this yields the desired conclusion via the stieltjes inversion formula . given borel probability measures on , there exist two unique analytic functions so that 1. for and ; 2 . 3 .in particular ( see ) , for any so that is analytic at , the point is the attracting fixed point of the self - map of defined by a similar statement , with interchanged , holds for . a version of lemma [ lem : extension ]holds for multiplicative free convolution on .since the proof is similar to the proof of lemma [ lem : extension ] and of lemma [ lem : extensiont ] below , we omit it .item ( a ) appears in the proof of ( * ? ? ? * theorem 3.2 ) .[ lem : extensionx ] consider two compactly supported borel probability measures on , neither of them a point mass .then the restrictions of the subordination functions and to have extensions to with the following properties : 1 . are continuous .2 . if then the functions and continue analytically to a neighborhood of , , and . given borel probability measures on with nonzero first moments , there exist unique analytic functions so that 1 . 
, , ; 2 . 3 .in particular ( see ) , if and is analytic at , then the point is the attracting fixed point of the map a similar statement , with interchanged , holds for .[ lem : extensiont ] consider two borel probability measures on with nonzero first moments , neither of them a point mass .suppose that .then the subordination functions and have extensions to with the following properties : 1 . are continuous .2 . if then the functions and continue analytically to a neighborhood of , , and .part ( a ) can be found in ( * ? ? ?* theorem 3.6 ) .fix . equation coupled with items ( d ) and ( e ) of subsection [ sec : boxtimest ] indicate clearly that must take values in at least a.e . on a neighbourhood of . as proved in ( * ? ? ?* proposition 1.9 ( a ) ) if , say , does not reflect analytically through a neighbourhood of , then for any the set of nontangential limits of around is dense in .as is nonempty , many of these limits will fall in the domain of analyticity of .in particular , we may choose an arbitrary interval \} ] for all ; * ; * in the weak-topology as , that is , ).\ ] ] then there exists a sequence converging to zero , independent of , such that fix .it is known from that for any fixed .we prove that there exists such that the proof is then completed by setting indeed , as noted in , there exists such that on the other hand , for ] be an analytic function .assume that is nonnegative definite for , and for ] is analytic in , it is invertible for ,\ ] ] and it is selfadjoint for .\ ] ] moreover , < 0,\quad z\in\mathbb{c}^{+}.\ ] ] it follows that the function )^{-1}b=\im b ] takes values in whenever .moreover , ^{-1}\ge\im b\text { and } \|\mathbb e[r_{n}(b)]^{-1}\|\le\|b\|+c_1+{4c_2}{\|(\im b)^{-1}\|},\quad\im b>0,\ ] ] where , and ^{2}) ] follows from ( * ? ? ?* remark 2.5 ) .the second inequality follows immediately from the observations preceding the lemma , and from the fact that for any deterministic matrix , -\mathbb{e}[u_{n}^{*}b_{n}u_{n}]z\mathbb{e}[u_{n}^{*}b_{n}u_{n}]}\\ & = ( { \rm tr}_{n}(b_{n}^{2})-[{\rm tr}_{n}(b_{n})]^{2})\left(\frac{n^2}{n^2 - 1}\text{tr}_n(z)i_n- \frac{1}{n^2 - 1}z\right).\qedhere\end{aligned}\ ] ] in some situations it is convenient to see how ] . then : 1 . for every we have .\label{eq : first - commutation}\ ] ]if is invertible , we also have (b)^{-1}.\nonumber\end{aligned}\ ] ] 2 . , where denotes the double commutant of in .the conclusion of item ( 2 ) of the above lemma applies to any complex differentiable map defined on an open set in with the property that for all .the analytic function ] and =yg(b) ] .then , in particular , for any unit vector , consider the rectangle having as corners the complex points and . by assumptions ,we have \cup [ \beta,\beta+\delta])=\varnothing ] . thus the spectral projections can be obtained by analytic functional calculus : \,d\lambda.\ ] ] an application of the resolvent equation and elementary norm estimates yield the lemma follows . for the following concentration of measureresult it is convenient to identify with the subspace of consisting of all vectors whose last component is zero .similarly , is identified with those matrices in whose last column and row are zero .we use the notation for variance .[ concentration ] fix a positive integer , a projection of rank , and a scalar .then : 1 . \,p^*)=0 ] . 
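as an empirical aside , this type of concentration on the unitary group is easy to observe numerically . the sketch below ( python ) tracks a rank - one compression of a resolvent over haar samples ; the matrices , the spectral parameter and the sample sizes are my own choices :

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    """haar-distributed unitary via qr of a complex ginibre matrix."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def compression_std(n, z=1j, samples=300):
    """fluctuation of a fixed entry of the resolvent (z - a - u* b u)^{-1} over haar samples."""
    a = np.diag(np.tile([1.0, -1.0], n // 2))
    b = np.diag(np.linspace(-1.0, 1.0, n))
    vals = []
    for _ in range(samples):
        u = haar_unitary(n)
        r = np.linalg.inv(z * np.eye(n) - a - u.conj().T @ b @ u)
        vals.append(r[0, 0])
    return np.std(np.real(vals)), np.std(np.imag(vals))

for n in (40, 80, 160):
    print(n, compression_std(n))
# the fluctuations of this rank-one compression shrink as n grows, in line with
# the concentration-of-measure estimates on the unitary group used in the proofs
```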
assertion ( i ) is equivalent to the statement that , given unit vectors )h=0\label{eq:55}\ ] ] almost surely .the random variable is a lipschitz function on the unitary group with lipschitz constant .an application of ( * ? ? ?* corollary 4.4.28 ) yields the inequality \right)h\vert>\frac{\varepsilon}{n^{\frac{1}{2}-\alpha}}\right)\leq2\exp\left(-cn^{2\alpha}\vert\im z\vert^{4}\varepsilon^{2}\right).\ ] ] for any , and follows by an application of the borel - cantelli lemma . to prove ( ii ) , apply the same inequality in the usual formula =\int_{0}^{+\infty}\mathbb{p}(x > t)\ , dt ] is diagonal . set {ii}}+(c_n)_{ii},\quad 1\leq i\leq p.\ ] ] and observe that for by lemma [ l ] .we proceed to show that this function satisfies an approximate subordination relation .we state this separately for future reference .the existence of ^{-1}+c_n,\quad z\in\mathbb{c}^{+},\ ] ] is guaranteed by lemma [ l ] .we apply now lemma [ bicommutant ] with in place of and in place of , so =g(zi_n - c_n) ] contains exactly one point of ( namely , ) for and for large .for each , choose a function with support in ] such that and for ] satisfies the concentration inequality the lemma follows from ( * ? ? ?* corollary 4.4.28 ) once we establish that the lipschitz constant of the function ,\quad u\in{\rm u}(n),\ ] ] is at most . for any and in we have |\\ & \leq & \|h ( a_n+u^*b_nu ) -h ( a_n+v^*b_nv)\|_2\\ & \leq&\gamma \vert u^ * b_n u -v^ * b_n v \vert_2 , \end{aligned}\ ] ] where we used the cauchy - schwarz inequality for the hilbert - schmidt norm and the fact that . since conclude that as desired . the above result , combined with the borel - cantelli lemma , yields immediately -\mathbb{e}\left [ { \rm tr}_n \left [ h ( x_n ) f_i(a_n)\right]\right]\right)=0,\quad i=1,\dots , p,\ ] ] almost surely .we conclude the proof of by showing that \right]=\frac{\delta_{i_0i}}{\omega'_1(\rho ) } , \quad i=1,\dots , p.\ ] ] lemma [ approxcauchy ] with allows us to rewrite this as \right]h(t)\,dt=- \frac{\delta_{i_0i}}{\omega'_1(\rho)},\quad i=1,\dots , p,\ ] ] or more simply , because is the projection of onto the coordinate , {ii } \right]h(t)\,dt=- \frac{\delta_{i_0i}}{\omega'_1(\rho)},\quad i=1,\dots , p.\ ] ] lemma [ approximatesubordination ] suggests writing = \frac{1}{\omega_1(z)-\theta_i } + \delta_{i , n}(z),\quad i=1,\dots , p , z\in\mathbb{c}^+.\ ] ] we proceed to estimate the functions .define analytic functions for using with in place of , respectively .these functions are analytic outside the interval ] for some independent of .the operator ] , with total mass .similarly , the subordination function from can be written as the hypothesis that the empirical eigenvalue distribution of converges to implies in particular in addition , the fact that uniformly on compact subsets of ] and that in the weak-topology .we also have lemma [ conv ] , applied to the sequence yields positive numbers such that and we can now estimate -\frac{1}{\omega_1(z)-\theta_i}\right|&= & \left|\frac{1}{\omega_{n , i}(z)-\theta_i}-\frac{1}{\omega_1(z)-\theta_i}\right|\\ & = & \frac{|\omega_{n , i}(z)-\omega_1(z)|}{|\omega_{n , i}(z)-\theta_i||\omega_{1}(z)-\theta_i|}\\ & < & \frac{|\omega_{n , i}(z)-\omega_1(z)|}{|\im z|^2}\\ & \leq&a_n(1+|z|)^4\cdot(1+|\im z|^{-4}),\end{aligned}\ ] ] where and the proposition follows .the preceding result , combined with , shows that is equivalent to this is easily verified. 
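the contour - integral representation of spectral projections used above can be checked directly on a small example ; in the sketch below ( python ) the test matrix , the rectangle and the quadrature rule are mine :

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6

# hermitian test matrix with known, well separated eigenvalues 0, 1, ..., 5
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v, _ = np.linalg.qr(g)
x = v @ np.diag(np.arange(n, dtype=float)) @ v.conj().T
exact = v[:, 2:4] @ v[:, 2:4].conj().T          # projection onto eigenvalues {2, 3}

# spectral projection as (1/2 pi i) times the contour integral of the resolvent
# (lambda - x)^{-1} over a rectangle enclosing [alpha, beta] and no other eigenvalue
alpha, beta, delta, m = 1.5, 3.5, 0.5, 2000
corners = [alpha - 1j * delta, beta - 1j * delta, beta + 1j * delta, alpha + 1j * delta]
proj = np.zeros((n, n), dtype=complex)
for c0, c1 in zip(corners, corners[1:] + corners[:1]):
    for t in (np.arange(m) + 0.5) / m:          # midpoint rule on each side
        lam = c0 + t * (c1 - c0)
        proj += (c1 - c0) / m * np.linalg.inv(lam * np.eye(n) - x)
proj /= 2j * np.pi
print("contour integral vs exact projection :", np.linalg.norm(proj - exact))
```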
indeed , denote by , , the rectangle with vertices .calculus of residues yields on the other hand , \,dt.\ ] ] now we use the fact that on to conclude that is a sum of the following four integrals : all of which are easily seen to tend to zero as .this completes the proof of and therefore of .we observe now that the proof of for uses only the fact that . therefore switching the roles of and in this argumentyields a proof of and completes the proof of part ( 3 ) of theorem [ main+ ] in this case if and . the case and follows by symmetry . assertion ( 4 ) of the theorem follows from ( 3 ) simply because is a projection of rank one. indeed , denote by the canonical basis in , so .let be a unit vector in the range of , so for every .direct calculation shows that and thus , almost surely for large , in particular , we obtain , which is precisely the first relation in ( 4 ) .the case follows by symmetry .* step b : * in this step we prove ( 3 ) and ( 4 ) in the general case of spikes with higher multiplicities and arbitrary values for and .we use an idea from to reduce the problem to the case considered in step a. given positive numbers and , set for .these matrices have distinct spikes and , respectively .the fact that is increasing and continuous at implies that , for sufficiently small , there exist exactly indices such that the equations each have a solution , .similarly , for sufficiently small there exist indices and values such that , .the numbers and can be chosen such that the intervals are pairwise disjoint and contained in .we conclude that the arguments of step a hold with , , and in place of , , and , respectively .thus , almost surely for .we have and also , noting that for , for small .in addition , can be made arbitrarily close to by making sufficiently small .we conclude that almost surely for large if is sufficiently small .clearly , an application of lemma [ eigenspaces ] shows that can be made arbitrarily small for appropriate choices of and , uniformly in .the first inequality in ( 3 ) follows at once , and the second one is proved similarly .we now verify assertion ( 4 ) when .let be a unit vector in the range of .since the quantity in is small , we can find , almost surely for large , unit vectors such that , uniformly in .it suffices therefore to prove that can be made arbitrarily small for appropriate choices of and .write with in the range of , .the case of assertion ( 4 ) proved in step a shows that for any sufficiently small , we also have for .since the relation implies the desired conclusion follows by noting that can be made arbitrarily close to .the second part of assertion ( 4 ) follows by symmetry .the proofs of the versions of theorem [ mainx+ ] for the positive line and for the unit circle follow the same outline .we avoid excessive repetition and only indicate the differences in the tools used throughout the proof .we use the notations from subsection [ subsec : xperturb+ ] . as in the previous section, we assume that both and are diagonal matrices : since , fix and .we use the following multiplicative decompositions : [ [ reduction - to - the - almost - sure - convergence - of - a - ptimes - p - matrix521 ] ] reduction to the almost sure convergence of a matrix[521 ] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ we argue first that , in case , it follows that is almost surely invertible for large . 
indeed , in this case , .therefore , and are invertible for large , and thus so is .this observation allows us to restrict the analysis to nonzero eigenvalues of .denote fix so that the matrix is invertible . using sylvester s identity , we obtain for large the matrix is of the form : ,\ ] ] where is the analytic function with values in defined on by and is the diagonal matrix defined earlier .thus , for large , the nonzero eigenvalues of outside are precisely the zeros of in that open set . as in subsection[ plus ] , the random matrix functions sequence converges a.s . to a diagonal deterministic function : [ uniformconvergence ] fix a positive integer , and let and be deterministic nonnegative diagonal matrices with uniformly bounded norms such that the limits exist for .suppose that the empirical eigenvalue distributions of and converge weakly to and , respectively .then the resolvent satisfies ^*=\mathrm{diag}\left(\frac{1}{1-\eta_1 \omega_1(z^{-1})},\ldots,\frac{1}{1-\eta_p \omega_1(z^{-1})}\right).\ ] ] we consider without loss of generality elements .if is invertible , then lemma [ bicommutant ] , part ( 2 ) , applied to implies that ] is diagonal .define {ii}}\right),\quad1\leq i\leq p.\ ] ] we prove the uniform convergence on compact subsets of of the sequences of analytic functions to .the multiplicative counterpart of lemma [ approximatesubordination ] is as follows : [ approximatesubordination ] assume that for some .we have : -\left(i_n-\omega _ { n , i}\left(z^{-1}\right)c_n\right)^{-1}\right\| = 0,\quad z\in \mathbb{c}\setminus \mathbb{r},i\in\{1,\dots , p\}.\ ] ] for , define ^{-1}= \mathbb e\left[\left((zc_n)^{-1}-u_n^*d_nu_n\right)^{-1}\right]^{-1}.\ ] ] this function is well - defined by lemma [ l ] , and the second equality is justified by lemma [ bicommutant](2 ) .we apply lemma [ bicommutant](1 ) with to obtain \omega_n(z).\end{aligned}\ ] ] consider arbitrary vectors of norm one and an of rank one to conclude the existence of rank one projections and rank 2 projections so that ^{1/2}\\ & & \mbox{}\times\mathbb e\left[\left\|q_2\left(\left((zc_n)^{-1}-u_n^*d_nu_n\right)^{-1}-\omega_n(z)^{-1}\right)p_2\right\|^2\right]^{1/2}.\end{aligned}\ ] ] lemma [ l ] yields , with . remark [ 4.13 ] provides estimates for the last two factors .the estimate is obvious .thus , for some constant independent of .the entry of the matrix is precisely , which belongs to the numerical range of .lemma [ lem : w(t ) ] yields since is diagonal , the lemma follows by letting .the proof of proposition [ uniformconvergence ] is bounded from below by a positive multiple of is now completed by an application of the above lemma . 
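sylvester 's identity , invoked above to collapse the large determinant to a small one , states that det ( i_n + a b ) = det ( i_p + b a ) for an n x p matrix a and a p x n matrix b ; a short numerical sanity check ( sketch ) :

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 3
a = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
b = rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))

# det(I_n + A B) = det(I_p + B A): an n x n determinant collapses to a p x p one
lhs = np.linalg.det(np.eye(n) + a @ b)
rhs = np.linalg.det(np.eye(p) + b @ a)
print("difference :", abs(lhs - rhs))   # machine-precision small
```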
indeed , using biane s subordination formula and the asymptotic freeness result of voiculescu , we obtain )=1+\psi_{\mu\boxtimes\nu } ( 1/z)=1+\psi_\mu(\omega_1(1/z)) ] .it follows that converges to uniformly on compacts of clearly , is also defined on a neighbourhood of zero .note that {ii}}\right)= \frac{1}{\theta_i}\left(1-\frac11\right)=0,\ ] ] and {ii}}{\mathbb e\left[iy(iy+x_n)^{-1}\right]_{ii}}\\ & = & -\frac{1}{\theta_i } \mathbb e[x_n]_{ii}.\end{aligned}\ ] ] in addition , since which is uniformly bounded , the map is analytic and real on the complement of an interval ] )= \frac{1}{\theta_i}\mathbb e\left[x_n-\chi_{\{0\}}(x_n)\right]_{ii} ] .thus , lemma [ conv ] applies to to allow the estimate with )+\sup_n\phi_{n , i}([0,m])$ ] .we have {ii}-\frac{1}{z}\cdot\frac{1}{1-\theta_i\omega_1\left(\frac{1}{z}\right)}\right|&=&\left|\frac{1}{z}\cdot\frac{1}{1-\theta_i\omega_{n , i}\left(\frac{1}{z}\right)}-\frac{1}{z}\cdot\frac{1}{1-\theta_i\omega_1\left(\frac{1}{z}\right)}\right|\\ & = & \frac{\theta_i}{|z|}\frac{\left|\omega_{n , i}\left(\frac{1}{z}\right)-\omega_1\left(\frac{1}{z } \right)\right|}{\left|(1-\theta_i\omega_1\left(\frac{1}{z}\right))(1-\theta_l\omega_{n , i}\left(\frac{1}{z}\right))\right|}\\ & < & \frac{1}{|z|}\frac{|z|^2+c_i}{(\im z)^4}\frac{(|z|+m)^4}{\theta_i\phi_{n , i}([0,m ] ) \phi([0,m])}v_{n , i}.\end{aligned}\ ] ] the proposition follows . to complete the argument of stepa , it suffices now to observe that the residue of the function at is equal to * step b : * we use the same perturbation argument as in step b of the proof of theorem [ main+ ] .we reduce the problem to the case of a spike with multiplicity one , to which we apply step a. the only change from the argument in step b of theorem [ main+ ] comes from the form of the subordination functions .we use perturbations and and define .the quantity tends to zero uniformly in as .the details are omitted .we use the notation from subsection [ subsec : xperturbt ] .the tools used are identical to the ones used in the analysis of the positive model . however , the domains of definition of the analytic transforms involved are different .we indicate the relevant differences .choose such that and .the reduction to the almost sure convergence of a matrix is performed the same way , and the same concentration inequality holds ( this time with lipschitz constant ) in lemma [ concentration ] .the counterparts of propositions [ uniformconvergence ] and [ estimationinmean ] hold , but in proposition [ estimationinmean ] we must consider .the resolvent is defined by .the function defined by {ii}}\right ) , \quad z\in\mathbb d,\ ] ] is easily seen to map into itself and fix the origin .indeed , . in the unitary version of lemma [ approximatesubordination ] , no supplementary condition on is required , and for defined as in the proof of lemma [ approximatesubordination ] , the estimate becomes if . the estimates for the corresponding resolventsare provided by lemma [ l ] .[ proof of theorem [ mainx+ ] for , parts ( 1 ) and ( 2)eigenvalue behaviour . ]we must now apply lemma [ alt - benaych - rao ] with .it will be applied to , the sequence defined by and the limit provided by proposition [ estimationinmean ] .observe that is invertible for . indeed , it is easy to see that , if is not invertible , belongs to the spectrum of the unitary operator .the convergence of to follows from the appropriate version of proposition [ estimationinmean ] . 
clearly , is diagonal and , again by the julia - carathodory theorem , this time applied to the disk , its diagonal entries have only simple zeros .the remainder of the argument requires no further adjustments .the relevant changes for this part of the proof occur in proposition [ estimfondax ] , where must be used instead of and an application of lemma [ convt ] in place of lemma [ conv ] .also , the perturbations and are applied to the arguments of and , respectively .it is easy to see that our results hold equally well when is random , independent of , and has spikes with the property that , , almost surely .similarly , can be taken to be random , independent of and , and with spikes that converge almost surely to .the proofs use the general form of propositions [ estimationinmean ] and [ uniformconvergence ] , respectively .the above remark allows us to treat sums or products of more than two spiked matrices .more precisely , let be an integer , let be deterministic hermitian matrices and let be independent haar - distributed random matrices .suppose that the eigenvalue distribution of tends weakly to and has spikes subject to the hypotheses of subsection [ subsec:+perturb ] .then has asymptotic eigenvalue distribution equal to , and the outliers in the spectrum of are described by an appropriate reformulation of theorem [ main+ ] .the result can be proved by induction on if we observe that has the same asymptotic eigenvalue distribution as , where and is a haar - distributed unitary independent from .a similar remark applies to theorem [ mainx+ ] in the case of the circle . for the case of the multiplicative model on , the corresponding generalization applies to a model of the form .[ atoms ] the analogue of theorem [ main+ ] when was proved in under the additional assumption that all eigenvalues of except for the spikes are equal to zero .our arguments provide a proof of this result without this additional assumption .similar observations apply to theorem [ mainx+ ] when either or is a point mass .the only case in which one needs to be more careful is that of theorem [ mainx+ ] for the positive half - line when or is equal to .suppose , for instance , that .the eigenvalues of are uniformly approximated arbitrarily well by the eigenvalues of and our methods do apply to the perturbed model , whose asymptotic eigenvalue distribution is .the spikes are calculated explicitly by noting that , so , .thus , and .the spikes of are the solutions of the equations and , .the first set of equations yields the outliers , , while the second set of equations can be rewritten as , \quad j=1,\dots , q.\ ] ] as , we conclude that the spikes of are the numbers , , where is the first moment of .if , a similar argument shows that has no outliers at all , that is , almost surely .[ ex:5.14 ] the following numerical simulation , due to charles bordenave , illustrates the appearance of two outliers arising from a single spike .we take and , where , with an orthogonal projection of rank . the matrix is given by the formula ,\ ] ] with being sampled from a standard gue .m. capitaine , additive / multiplicative free subordination property and limiting eigenvectors of spiked additive deformations of wigner matrices and spiked sample covariance matrices , _ journal of theoretical probability _ , volume 26 ( 3 ) ( 2013 ) , 595648 .m. capitaine , c. donati - martin , and d. 
féral , the largest eigenvalues of finite rank deformation of large wigner matrices : convergence and nonuniversality of the fluctuations , _ ann . probab . _ * 37 * ( 2009 ) , 1 - 47 . m. capitaine , c. donati - martin , d. féral and m. février , free convolution with a semi - circular distribution and eigenvalues of spiked deformations of wigner matrices , _ electronic journal of probability _ * 16 * ( 2011 ) , 1750 - 1792 . p. loubaton and p. vallet , almost sure localization of the eigenvalues in a gaussian information - plus - noise model . application to the spiked models , _ electron . j. probab . _ * 16 * ( 2011 ) , no . 70 , 1934 - 1959 . n. r. rao and j. w. silverstein , fundamental limit of sample generalized eigenvalue based detection of signals in noise using relatively few signal - bearing and noise - only samples , _ ieee journal of selected topics in signal processing _ * 4 * ( 2010 ) , 468 - 480 .
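returning to the phenomenon of example [ ex:5.14 ] above : the parameter values of that simulation are not recoverable from the text here , so the sketch below ( python ) uses a different and simpler choice , entirely mine , that already exhibits a single spike generating two outliers :

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# haar unitary via qr of a complex ginibre matrix
g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
q, r = np.linalg.qr(g)
u = q * (np.diag(r) / np.abs(np.diag(r)))

theta = 1.0
a = np.zeros((n, n)); a[0, 0] = theta              # single rank-one spike, bulk mu = delta_0
b = np.diag(np.repeat([0.0, 4.0], n // 2))         # nu = (delta_0 + delta_4)/2

eigs = np.linalg.eigvalsh(a + u.conj().T @ b @ u)
outliers = eigs[(np.abs(eigs) > 0.2) & (np.abs(eigs - 4.0) > 0.2)]
print("observed outliers :", outliers)

# with mu = delta_0 and nu = (delta_0 + delta_4)/2 one has omega_1(z) = 1/G_nu(z)
# = z(z - 4)/(z - 2), and omega_1(rho) = theta reduces to rho^2 - (4+theta) rho + 2 theta = 0,
# which has two real roots outside the bulk: one in the spectral gap, one above it
disc = np.sqrt((4.0 + theta) ** 2 - 8.0 * theta)
print("predicted outliers :", (4.0 + theta - disc) / 2, (4.0 + theta + disc) / 2)
```

a single spike thus produces two eigenvalues detached from the bulk , in line with the remark above that the restriction of a subordination function to the real line may be many - to - one .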
in this paper we characterize the possible outliers in the spectrum of large deformed unitarily invariant additive and multiplicative models , as well as the eigenvectors corresponding to them . we allow both the non - deformed unitarily invariant model and the perturbation matrix to have non - trivial limiting spectral measures and spiked outliers in their spectrum . we uncover a remarkable new phenomenon : a single spike can generate asymptotically several outliers in the spectrum of the deformed model . the free subordination functions play a key role in this analysis .
this document contains a derivation of the likelihood - ratio formula for continuous quantum measurements with poissonian noise in sec .[ poisson ] and an example of quantum optomechanical stochastic force detection in sec . [ force ] .[ [ poissonlikelihood - ratio - formula - for - continuous - poissonian - measurements ] ] [ poisson]likelihood - ratio formula for continuous poissonian measurements ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ the completely positive map for a weak poissonian measurement is given by }\right\}},\end{aligned}\ ] ] where and models the effect of imperfect quantum efficiency . rearranging terms , } , \\\tilde p(\delta y ) & \equiv ( 1-\delta y)(1-\alpha\delta t)+ \delta y\alpha\delta t,\end{aligned}\ ] ] where is a reference probability distribution and is an arbitrary positive number. this gives equation ( [ df ] ) coincides with the quantum filtering equation for an unnormalized posterior density operator given poissonian observations .next , consider where is the filtering estimate of the observable assuming that the hypothesis is true .expanding in taylor series , with for and for poissonian observations , }.\end{aligned}\ ] ] [ [ force - quantum - optomechanical - detection - of - a - gaussian - stochastic - force ] ] [ force ] quantum optomechanical detection of a gaussian stochastic force ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ let be a classical force acting on a moving mirror with position operator and momentum , and be a vectoral classical gaussian stochastic process described by the ito equation the mirror is assumed to be a harmonic oscillator with mass and frequency and part of an optical cavity pumped by a near - resonant continuous - wave laser .the phase quadrature of the cavity output is measured continuously by homodyne detection , with an observation process given by . for simplicity ,i assume that the optical intracavity dynamics can be adiabatically eliminated , the phase modulation by the mirror motion is much smaller than radians , such that the homodyne detection is effectively measuring the mirror position , and there is no excess decoherence . under hypothesis ,the force is present and the quantum filtering equation for the unnormalized hybrid density operator is then given by + \mathcal l_c(x ) f_1 , \\h_1(x ) & \equiv \frac{p^2}{2 m } + \frac{m\omega^2}{2}q^2-q cx , \\ \mathcal l_c(x )f_1 & \equiv -\sum_\mu { \frac{\partial } { \partial x_\mu}}{\left[(a x)_\mu f_1\right ] } + \frac{1}{2}\sum_{\mu,\nu } { \frac{\partial ^2}{\partial x_\mu\partial x_\nu } } { \left(b_{\mu\nu}f_1\right)},\end{aligned}\ ] ] where is the measurement noise variance that depends on the laser intensity and the cavity properties and is the forward kolmogorov generator for the classical process . 
under the null hypothesis ,the force is absent and the filtering equation for the oscillator density operator is , \\h_0 & \equiv \frac{p^2}{2 m } + \frac{m\omega^2}{2}q^2.\end{aligned}\ ] ]these filtering equations can be transformed to equations for the wigner functions of : } , \\ \mathcal l_0 ' g_0 & \equiv -\frac{p}{m}{\frac{\partial g_0}{\partial q}}+m\omega^2 q{\frac{\partial g_0}{\partial p } } + \frac{\hbar^2}{8r}{\frac{\partial^2 g_0}{\partial p^2}},\end{aligned}\ ] ] where and are now phase - space variables , is seen to suffer from measurement - back - action - induced diffusion , and has the form of a forward kolmogorov generator for a new gaussian process : } + \frac{1}{2}\sum_{\mu,\nu } { \frac{\partial ^2}{\partial z_\mu\partial z_\nu } } { \left(s_{1\mu\nu}g_1\right ) } , \\j_1 & \equiv { \left(\begin{array}{cc|c}0 & 1/m & { \boldsymbol{0 } } \\ -m\omega^2 & 0 & c\\ \hline { \boldsymbol{0 } } & { \boldsymbol{0 } } & a \end{array}\right ) } , \quad s_1 \equiv { \left(\begin{array}{cc|c}0 & 0 & { \boldsymbol{0 } } \\ 0 & \hbar^2/4r & { \boldsymbol{0}}\\ \hline { \boldsymbol{0 } } & { \boldsymbol{0 } } & b \end{array}\right)},\end{aligned}\ ] ] with denoting zero matrices .similarly , under , we have and } + \frac{1}{2}\sum_{\mu,\nu } { \frac{\partial ^2}{\partial z_\mu\partial z_\nu } } { \left(s_{0\mu\nu}g_0\right ) } , \\j_0 & \equiv { \left(\begin{array}{ccc}0 & 1/m \\ -m\omega^2 & 0 \end{array}\right ) } , \quad s_0 \equiv { \left(\begin{array}{ccc}0 & 0 \\ 0 & \hbar^2/4r \end{array}\right)}.\end{aligned}\ ] ] the gaussian statistics mean that we can use kalman - bucy filters to compute the filtering estimates of the mirror position given : and the likelihood ratio becomes }.\end{aligned}\ ] ] given the gaussian structure of the problem under each hypothesis , we can use known results about the chernoff upper bounds for classical waveform estimation to bound the error probabilities : } , \\p_{01 } & \le \exp{\left[\mu(s)+(1-s)\gamma\right ] } , \\\mu(s ) & = \frac{1}{2r}\int_{t_0}^t dt { \left[(1-s)\sigma_{1q}(t)+s\sigma_{0q}(t ) -\tilde\sigma_{q}(s , t)\right]},\end{aligned}\ ] ] where , is the threshold of the likelihood - ratio test , is the variance component of , which obeys eq .( [ sigma ] ) , and is the variance of for a different filtering problem , in which observations of are made with noise variance and has the statistics of under , viz ., the tightest upper bounds are obtained by minimizing the bounds with respect to .if and therefore are stationary , and will converge to steady states in the long - time limit , and the chernoff bounds will decay exponentially with time .
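a stripped - down classical sketch of the detection scheme just described ( python ; the mechanical oscillator and the measurement back - action are ignored , the hidden waveform is observed directly , and all parameter values are illustrative ) : a kalman - bucy filter supplies the assumptive estimate under the signal hypothesis , and the log - likelihood ratio is accumulated with the estimator - correlator formula :

```python
import numpy as np

rng = np.random.default_rng(8)

gamma, q, R = 1.0, 2.0, 0.1          # ou decay rate, drive strength, measurement noise
dt, steps = 1e-3, 50_000             # total observation time T = 50

# generate data under h1: hidden ornstein-uhlenbeck waveform x observed in white noise
x = 0.0
dy = np.empty(steps)
for k in range(steps):
    x += -gamma * x * dt + np.sqrt(q * dt) * rng.standard_normal()
    dy[k] = x * dt + np.sqrt(R * dt) * rng.standard_normal()

# kalman-bucy filter = assumptive estimate mu_1(t); under h0 the estimate is zero
xhat, sigma, log_lr = 0.0, 0.0, 0.0
for k in range(steps):
    # estimator-correlator increment: (mu1 - mu0)/R dy - (mu1^2 - mu0^2)/(2R) dt, mu0 = 0
    log_lr += xhat / R * dy[k] - xhat ** 2 / (2 * R) * dt
    innov = dy[k] - xhat * dt
    xhat += -gamma * xhat * dt + sigma / R * innov
    sigma += (-2 * gamma * sigma + q - sigma ** 2 / R) * dt      # riccati equation
print("ln Lambda(T) =", log_lr)
# rerunning with dy generated under h0 (pure noise) makes ln Lambda drift negative,
# consistent with the exponential decay of the error bounds discussed above
```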
i propose a general quantum hypothesis testing theory that enables one to test hypotheses about any aspect of a physical system , including its dynamics , based on a series of observations . for example , the hypotheses can be about the presence of a weak classical signal continuously coupled to a quantum sensor , or about competing quantum or classical models of the dynamics of a system . this generalization makes the theory useful for quantum detection and experimental tests of quantum mechanics in general . in the case of continuous measurements , the theory is significantly simplified to produce compact formulae for the likelihood ratio , the central quantity in statistical hypothesis testing . the likelihood ratio can then be computed efficiently in many cases of interest . two potential applications of the theory , namely quantum detection of a classical stochastic waveform and test of harmonic - oscillator energy quantization , are discussed . testing hypotheses about a physical system by observation is a fundamental endeavor in scientific research . observations are often indirect , noisy , and limited ; to choose the best model of a system among potential candidates , statistical inference is the most logical way and has been extensively employed in diverse fields of science and engineering . many important quantum mechanics experiments , such as tests of quantum mechanics , quantum detection of weak forces or magnetic fields , and quantum target detection , are examples of hypothesis testing . to test quantum nonlocality , for instance , one should compare the quantum model with the best classical model ; bell s inequality and its variations , which impose general bounds on observations of local - hidden - variable systems , have been widely used in this regard . the analyses of experimental data in many such tests have nonetheless been criticized by peres : the statistical averages in all these inequalities can never be measured exactly in a finite number of trials . one should use statistical inference to account for the uncertainties and provide an operational meaning to the data . another important recent development in quantum physics is the experimental demonstration of quantum behavior in increasingly macroscopic systems , such as mechanical oscillators and microwave resonators . to test the quantization of the oscillator energy , for example , the use of quantum filtering theory has been proposed to process the data , but testing quantum behavior by assuming quantum mechanics can be criticized as begging the question . an ingenious proposal by clerk _ et al . _ considers the third moment of energy as a test of energy quantization . like the correlations in bell s inequality , however , the third moment is a statistical average and can not be measured exactly in finite time . again , statistical inference should be used to test the quantum behavior of a system rigorously , especially when the measurements are weak and noisy . the good news here is that the error probabilities for hypothesis testing should decrease exponentially with the number of measurements when the number is large , so one can always compensate for a weak signal - to - noise ratio by increasing the number of trials . quantum hypothesis testing was first studied by holevo and yuen _ et al . _ . since then , researchers have focused on the use of statistical hypothesis testing techniques for initial quantum state discrimination . 
here i propose a more general quantum theory of hypothesis testing for _ model _ discrimination , allowing the hypotheses to be not just about the initial state but also about the dynamics of the system under a series of observations . this generalization makes the theory applicable to virtually any hypothesis testing problem that involves quantum mechanics , including tests of quantum dynamics and quantum waveform detection . in the case of continuous measurements with gaussian or poissonian noise , the theory is significantly simplified to produce compact formulae for the likelihood ratio , the central quantity in statistical hypothesis testing . the formulae enable one to compute the ratio efficiently in many cases of interest and should be useful for numerical approximations in general . notable prior work on continuous quantum hypothesis testing is reported in refs . , which study state discrimination or parameter estimation only and have not derived the general likelihood - ratio formulae proposed here . to illustrate the theory , i discuss two potential applications , namely quantum detection of a classical stochastic waveform and test of harmonic - oscillator energy quantization . waveform detection is a basic operation in future quantum sensing applications , such as gravitational - wave detection , optomechanical force detection , and atomic magnetometry . tests of energy quantization , on the other hand , have become increasingly popular in experimental physics due to the rapid recent progress in device fabrication technologies . besides these two applications , the theory is expected to find wide use in quantum information processing and quantum physics in general , whenever new claims about a quantum system need to be tested rigorously . statistical hypothesis testing entails the comparison of observation probabilities conditioned on different hypotheses . to test two hypotheses labeled and using an observation record , the observer splits the observation space into two parts and ; when falls in , the observer chooses , and when falls in , the observer chooses . the error probabilities are then and . all binary hypothesis testing protocols involve the computation of the likelihood ratio , defined as the ratio is then compared against a threshold that depends on the protocol ; one decides on if and if . for example , the neyman - pearson criterion minimizes under a constraint on , while the bayes criterion minimizes , and being arbitrary positive numbers . for multiple independent trials , the final likelihood ratio is simply the product of the ratios . in most cases , the error probabilities are difficult to calculate analytically and only bounds , such as the chernoff upper bound , may be available , but the likelihood ratio can be used to update the posterior hypothesis probabilities from prior probabilities and via and and therefore quantifies the strength of evidence for against given . generalization to multiple hypotheses beyond two is also possible by computing multiple likelihood ratios or the posterior probabilities . consider now two hypotheses about a system under a sequence of measurements , with results . for generality , i use quantum theory to derive for both hypotheses , but note that a classical model can always be expressed mathematically as a special case of a quantum model . 
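a minimal numerical sketch of the discrete - trial case just described ( python , with toy gaussian likelihoods of my own choosing ) : the log - likelihood ratios of independent trials add , and the posterior probability of each hypothesis follows from the prior by the stated update rule :

```python
import numpy as np

rng = np.random.default_rng(6)

# toy binary test: under h1 the data are normal(1, 1), under h0 they are normal(0, 1)
def log_likelihood_ratio(y):
    return -0.5 * ((y - 1.0) ** 2 - y ** 2)      # log p1(y) - log p0(y)

y = rng.normal(loc=1.0, size=50)                 # 50 independent trials, h1 true
log_lr = np.sum(log_likelihood_ratio(y))         # product of ratios -> sum of logs

# bayesian update of the hypothesis probabilities from a flat prior
prior1 = 0.5
post1 = 1.0 / (1.0 + (1.0 - prior1) / (prior1 * np.exp(log_lr)))
print("ln Lambda =", log_lr, "   P(h1 | data) =", post1)
# one decides on h1 when Lambda exceeds the threshold fixed by the neyman-pearson
# or bayes criterion
```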
the observation probability distribution is ,\end{aligned}\ ] ] where is the initial density operator at time , is the completely positive map that models the system dynamics from time to , is the completely positive map that models the measurement at time , and the subscripts for , , and denote the assumption of for these quantities . to proceed , let and assume the following kraus form of for gaussian measurements : } \nonumber\\&\quad\times m_j(\delta z , t)\rho m_j^\dagger(\delta z , t ) , \nonumber\\ m_j(\delta z , t ) & \equiv \frac{1}{(2\pi q_j\delta t)^{1/4 } } \exp{\left(-\frac{\delta z^2}{4q_j\delta t}\right ) } \nonumber\\&\quad\times { \left[1+\frac{\delta z}{2q_j}c_j-\frac{\delta t}{8q_j}c_j^\dagger c_j + o(\delta t)\right]},\end{aligned}\ ] ] where is the noise variance of the inherent quantum - limited measurement , is the excess noise variance , is a quantum operator depending on the measurement , and denotes terms asymptotically smaller than . the map becomes , \nonumber\\ \tilde p(\delta y ) & \equiv \frac{1}{\sqrt{2\pi r\delta t } } \exp{\left(-\frac{\delta y^2}{2r\delta t}\right ) } , \quad r \equiv q_j + s_j.\end{aligned}\ ] ] i assume that the total noise variance is independent of the hypothesis to focus on tests of hidden models rather than the observation noise levels . then factors out of both the numerator and denominator of the likelihood ratio and cancels itself . taking the continuous time limit using it calculus with , the likelihood ratio becomes , with obeying the following stochastic differential equation : and being the lindblad generator originating from . equation ( [ dmz ] ) has the exact same mathematical form as the linear belavkin equation for an unnormalized filtering density operator , but beware that represents the state of the system only if is true ; i call an assumptive state . to put in a form more amenable to numerics , consider the stochastic differential equation for : where is an assumptive estimate ; it is the posterior mean of the observable only if is true . the form of eq . ( [ dtrf ] ) suggests that it can be solved by taking the logarithm of , i.e. , , resulting in with the integral being an it integral . becomes }. \label{formula}\end{aligned}\ ] ] this compact formula for the likelihood ratio is the quantum generalization of a similar result by duncan and kailath in classical detection theory . generalization to the case of vectoral observations with noise covariance matrix is trivial ; the result is simply eq . ( [ formula ] ) with replaced by and by . for continuous measurements with poissonian noise , a formula for can be derived similarly : } , \label{formula2 } \\ \mu_j & \equiv \frac{1}{{\operatorname{tr}}f_j}{\operatorname{tr}}(\eta_j c_j^\dagger c_j f_j ) , \label{estimate2 } \\ df_j & = dt\mathcal l_j f_j + ( dy-\alpha dt ) { \left(\frac{\eta_j}{\alpha } c_jf_jc_j^\dagger - f_j\right ) } \nonumber\\&\quad + \frac{dt}{2 } { \left(2c_jf_j c_j^\dagger - c_j^\dagger c_j f_j - f_jc_j^\dagger c_j\right ) } , \label{filter2}\end{aligned}\ ] ] where is the quantum efficiency , can be any positive number , and eqs . ( [ estimate2 ] ) and ( [ filter2 ] ) form a quantum filter for poissonian observations . equation ( [ formula2 ] ) generalizes a similar classical result by snyder . equations ( [ formula ] ) and ( [ formula2 ] ) show that continuous hypothesis testing can be done simply by comparing how the observation process is correlated with the observable estimated by each hypothesis , as schematically depicted in fig . 
[ estimator - correlator ] . ) and ( [ formula2]).,scaledwidth=40.0% ] since eqs . ( [ dmz ] ) and ( [ estimate ] ) or eqs . ( [ estimate2 ] ) and ( [ filter2 ] ) have the same form as belavkin filters , one can leverage established quantum filtering techniques to update the estimates and the likelihood ratio continuously with incoming observations . if has a wigner function that remains gaussian in time , the problem has an equivalent classical linear gaussian model conditioned on each hypothesis , and can be computed efficiently using the kalman - bucy filter , which gives the mean vector and covariance matrix of the wigner function . the classical model also enables one to use existing formulae of chernoff bounds for classical waveform detection to bound the error probabilities . it remains a technical challenge to compute the quantum filter for problems without a gaussian phase - space representation beyond few - level systems , but the quantum trajectory method should help cut the required computational resources by employing an ensemble of wavefunctions instead of a density matrix . error bounds for such nonclassical problems also remain an important open problem . as an illustration of the theory , consider the detection of a weak classical stochastic signal , such as a gravitational wave or a magnetic field , using a quantum sensor , with hypothesizing the presence of the signal and its absence . let be a vector of the state variables for the classical signal . one way to account for the dynamics of is to use the hybrid density operator formalism , which includes as auxiliary degrees of freedom in the system . the initial assumptive state becomes , with being the initial density operator for the quantum sensor and the initial probability density of . equation ( [ dmz ] ) for becomes with /\int dx { \operatorname{tr}}f_1 $ ] . should include the lindblad generator for the quantum sensor , the coupling of to the quantum sensor via an interaction hamiltonian , and also the forward kolmogorov generator that models the classical dynamics of . is an operator that depends on the actual measurement of the quantum sensor ; for cavity optomechanical force detection for example , is the cavity optical annihilation operator or can be approximated as the mechanical position operator if the intracavity optical dynamics can be adiabatically eliminated . for the null hypothesis , the classical degrees of freedom need not be included . is then , eq . ( [ dmz ] ) becomes and includes only the lindblad generator for the quantum sensor . in most current cases of interest in quantum sensing , the wigner functions for and remain approximately gaussian . kalman - bucy filters can then be used to solve eqs . ( [ detection ] ) and ( [ null ] ) for the assumptive estimates , to be correlated with the observation process according to eq . ( [ formula ] ) to produce , and existing formulae of chernoff bounds for classial waveform detection can be used to bound the error probabilities . ref . contains a simple example of such calculations . quantum smoothing can further improve the estimation of in the event of a likely detection . although smoothing is not needed here for the exact computation of , it may be useful for improving the approximation of for non - gaussian problems when the exact estimates are too expensive to compute . as a second example , consider the test of energy quantization in a harmonic oscillator . 
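before turning to that second example, the estimator-correlator structure for the gaussian waveform-detection case can be sketched numerically. the sketch below assumes the duncan-kailath form of the log-likelihood ratio, ln(lambda) = (1/R) * (ito integral of (mu_1 - mu_0) dy) - (1/(2R)) * (integral of (mu_1^2 - mu_0^2) dt), with mu_j the assumptive estimate under hypothesis j and R the total noise variance; it also uses a discrete-time kalman filter as a stand-in for the kalman-bucy filter of the gaussian case. the model matrices F, H, Q, Rm, x0, P0 are problem-specific placeholders, not quantities taken from the paper.

    import numpy as np

    def log_likelihood_ratio(dy, dt, mu1, mu0, R):
        # estimator-correlator form of ln(Lambda) for gaussian continuous measurements;
        # dy are the observation increments, mu1 and mu0 the assumptive estimates per hypothesis
        dy, mu1, mu0 = map(np.asarray, (dy, mu1, mu0))
        return np.sum((mu1 - mu0) * dy) / R - np.sum((mu1**2 - mu0**2) * dt) / (2.0 * R)

    def kalman_predicted_means(y, F, H, Q, Rm, x0, P0):
        # discrete-time kalman filter returning one-step predicted observation means,
        # usable as the assumptive estimates mu_1 when the conditional state stays gaussian
        x, P = np.array(x0, float), np.array(P0, float)
        mus = []
        for yk in y:
            mus.append(float(H @ x))                  # predicted observation mean under this hypothesis
            S = float(H @ P @ H) + Rm                 # innovation variance
            K = (P @ H) / S                           # kalman gain
            x = x + K * (yk - float(H @ x))           # measurement update
            P = P - np.outer(K, H @ P)
            x = F @ x                                 # time update
            P = F @ P @ F.T + Q
        return np.array(mus)

under the null hypothesis the estimate mu_0 would come from the same kind of filter run with the signal-free sensor model (often close to zero for a zero-mean sensor); the two estimate sequences and the observation increments then enter log_likelihood_ratio directly. with this machinery in place, we return to the second example, the energy-quantization test.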
to ensure the rigor of the test , imagine a classical physicist who wishes to challenge the quantum harmonic oscillator model by proposing a competing model based on classical mechanics . to devise a good classical model , he first examines quadrature measurements of a harmonic oscillator in a thermal bath . with the harmonic time dependence on the oscillator frequency removed in an interaction picture , the assumptive state for the quantum hypothesis obeys + \frac{dy}{2r}{\left(cf_1+f_1c\right ) } \nonumber\\&\quad + \frac{dt}{8q}{\left(2cf_1c - c^2f_1-f_1c^2\right ) } , \label{ho } \\ c & = q\cos\theta+p\sin\theta,\end{aligned}\ ] ] where is the annihilation operator , and are quadrature operators , is a quadrature operator with held fixed for each trial to eliminate any complicating measurement backaction effect , is the decay rate of the oscillator , and is a temperature - dependent parameter . this backaction - evading measurement scheme can be implemented approximately by double - sideband optical pumping in cavity optomechanics . an equivalent classical model for the quadrature measurements is where and are classical ornstein - uhlenbeck processes and , , and are uncorrelated classical wiener noises with and . one can make diagonal and embed it with a classical distribution to model classical statistics ; the equation for the classical assumptive state is + \frac{dy}{r}h g_0 . \label{g0}\end{aligned}\ ] ] which is a classical duncan - mortensen - zakai ( dmz ) equation . the assumptive estimate should be identical to the quantum one , as can be seen by transforming to a wigner function and neglecting the measurement backaction that does not affect the observations . given the quadrature observations then stays at 1 , confirming that the two models are indistinguishable . in a different experiment on the same oscillator , the energy of the oscillator is measured instead . let which can be implemented approximately by dispersive optomechanical coupling in cavity optomechanics . still obeys eq . ( [ ho ] ) , but with now given by eq . ( [ energy_c ] ) and different and . the measurements are again backaction - evading , as the backaction noise on the oscillator phase does not affect the energy observations . given the prior success of the classical model , the classical physicist decides to retain eqs . ( [ x ] ) and modifies only the observation as a function of and : the dmz equation given by eq . ( [ g0 ] ) , assuming continuous energy , should now produce an assumptive energy estimate different from the quantum one ; it is this difference that should make the likelihood ratio increase in favor of the quantum hypothesis with more observations , if quantum mechanics is correct . previous data analysis techniques that consider only the quantum estimate fail to take into account the probability that the observations can also be explained by a continuous - energy model and are therefore insufficient to demonstrate energy quantization conclusively . the non - gaussian nature of the problem means that bounds on the error probabilities may be difficult to compute analytically and one may have to resort to numerics , but one can also use as a bayesian statistic to quantify the strength of the evidence for one hypothesis against another . discussions with j. combes , c. caves , and a. chia are gratefully acknowledged . this material is based on work supported by the singapore national research foundation under nrf grant no . nrf - nrff2011 - 07 . 99 e. t. 
jaynes , _ probability theory : the logic of science _ ( cambridge university press , cambridge , england , 2003 ) ; j. m. bernardo and a. f. m. smith , _ bayesian theory _ ( wiley , new york , 2000 ) . h. l. van trees , _ detection , estimation , and modulation theory , part i _ ( wiley , new york , 2001 ) . j. s. bell , * 38 * 447 ( 1966 ) ; s. kochen and e. p. specker , j. math . mech . * 17 * , 59 ( 1967 ) ; j. s. bell , _ speakable and unspeakable in quantum mechanics _ ( cambridge university press , cambridge , england , 2004 ) ; a. zeilinger , * 71 * , s288 ( 1999 ) ; a. peres , _ quantum theory : concepts and methods _ ( kluwer , new york , 2002 ) . r. ruskov , a. n. korotkov , and a. mizel , * 96 * , 200404 ( 2006 ) ; a. palacios - laloy _ et al . _ , nature phys . * 6 * , 442 ( 2010 ) ; a. bednorz and w. belzig , * 105 * , 106803 ( 2010 ) . v. b. braginsky and f. ya . khalili , _ quantum measurement _ ( cambridge university press , cambridge , england , 1992 ) ; c. m. caves _ et al . _ , * 52 * , 341 ( 1980 ) ; r. schnabel , n. mavalvala , d. e. mcclelland , and p. k. lam , nature commun . * 1 * , 121 ( 2010 ) ; the ligo scientific collaboration , nature phys . * 7 * , 962 ( 2011 ) ; d. budker and m. romalis , nature phys . * 3 * , 227 ( 2007 ) ; c. a. muschik , h. krauter , k. hammerer , and e. s. polzik , quantum inf . process . * 10 * , 839 ( 2011 ) . c. w. helstrom , _ quantum detection and estimation theory _ ( academic press , new york , 1976 ) . s. lloyd , science * 321 * , 1463 ( 2008 ) ; s .- h . tan _ et al . _ , * 101 * , 253601 ( 2008 ) ; s. pirandola , * 106 * , 090504 ( 2011 ) . a. peres , fortschr . phys . * 48 * , 531 ( 2000 ) . k. c. schwab and m. l. roukes , phys . today * 58 * , 36 ( 2005 ) ; t. j. kippenberg and v. j. vahala , science * 321 * , 1172 ( 2008 ) ; m. aspelmeyer , s. grblacher , k. hammerer , and n. kiesel , * 27 * , a189 ( 2010 ) ; a. d. oconnell _ et al . _ , nature ( london ) * 464 * , 697 ( 2010 ) . j. d. thompson _ et al . _ , nature ( london ) * 452 * , 72 ( 2008 ) ; a. m. jayich _ et al . _ , new j. phys . * 10 * , 095008 ( 2008 ) ; j. c. sankey , c. yang , b. m. zwickl , a. m. jayich , and j. g. e. harris , nature phys . * 6 * , 707 ( 2010 ) . s. haroche and j. m. raimond , _ exploring the quantum : atoms , cavities , and photons _ ( oxford university press , oxford , 2006 ) ; s. gleyzes _ et al . _ , nature ( london ) * 446 * , 297 ( 2007 ) ; c. guerlin , _ et al . _ , nature ( london ) * 448 * , 889 ( 2007 ) ; b. r. johnson _ et al . _ , nature phys . * 6 * , 663 ( 2010 ) . d. h. santamore , a. c. doherty , and m. c. cross , * 70 * , 144301 ( 2004 ) ; j. gambetta _ et al . _ , * 77 * , 012112 ( 2008 ) ; f. helmer , m. mariantoni , e. solano , and f. marquardt , * 79 * , 052115 ( 2009 ) ; c. deng , j. m. gambetta , and a. lupacu , * 82 * , 220505(r ) ( 2010 ) . a. a. clerk , f. marquardt , and j. g. e. harris , * 104 * , 213603 ( 2010 ) . a. s. holevo , proc . 2nd japan - ussr symp . prob . theo . , kyoto , japan * 1 * , 22 ( 1972 ) . h. p. yuen , r. s. kennedy , and m. lax , ieee trans . inform . theory * it-21 * , 125 ( 1975 ) . a. chefles , contemp . phys . * 41 * , 401 ( 2000 ) . s. dolinar , mit rle quart . prog . rep . * 111 * , 115 ( 1973 ) ; r. l. cook , p. j. martin , and j. m. geremia , nature ( london ) * 446 * , 774 ( 2007 ) ; j. gambetta , w. a. braff , a. wallraff , s. m. girvin , and r. j. schoelkopf , * 76 * , 012325 ( 2007 ) ; k. jacobs , quantum inform . compu . * 7 * , 127 ( 2007 ) . b. a. chase and j. m. 
geremia , * 79 * , 022314 ( 2009 ) . h. m. wiseman and g. j. milburn , _ quantum measurement and control _ ( cambridge university press , cambridge , england , 2010 ) . m. tsang , * 102 * , 250403 ( 2009 ) ; * 80 * , 033840 ( 2009 ) ; * 81 * , 013824 ( 2010 ) ; v. petersen and k. mlmer , _ ibid . _ * 74 * , 043802 ( 2006 ) ; m. tsang and c. m. caves , * 105 * , 123601 ( 2010 ) ; m. tsang , h. m. wiseman , and c. m. caves , _ ibid . _ * 106 * , 090401 ( 2011 ) . v. p. belavkin , radiotekh . elektron . ( moscow ) * 25 * , 1445 ( 1980 ) ; theory probab . appl . * 38 * , 573 ( 1993 ) ; * 39 * , 363 ( 1994 ) ; l. bouten and r. van handel , e - print arxiv : math - ph/0508006 ; l. bouten , r. van handel , and m. r. james , siam j. control optim . * 46 * , 2199 ( 2007 ) . t. e. duncan , inform . control * 13 * , 62 ( 1968 ) ; t. kailath , ieee trans . inform . theory * it-15 * , 350 ( 1969 ) ; t. kailath and h. v. poor , _ ibid . _ * 44 * , 2230 ( 1998 ) . see supplementary material for a detailed derivation of eqs . ( [ formula2])-([filter2 ] ) and a simple example of the application of hypothesis testing theory to quantum optomechanical force detection . d. l. snyder , ieee trans . inform . theor . * it-18 * , 91 ( 1972 ) . h. l. van trees , _ detection , estimation , and modulation theory . part iii _ ( wiley , new york , 2001 ) . h. j. carmichael , _ statistical methods in quantum optics 1 _ ( springer - verlag , berlin , 1999 ) ; _ statistical methods in quantum optics 2 _ ( springer - verlag , berlin , 2008 ) ; a. barchielli and m. gregoratti , _ quantum trajectories and measurements in continuous time _ ( springer - verlag , berlin , 2009 ) ; r. dum , p. zoller , and h. ritsch , * 45 * , 4879 ( 1992 ) ; j. dalibard , y. castin , and k. mlmer , * 68 * , 580 ( 1992 ) ; k. mlmer , y. castin , and j. dalibard , * 10 * , 524 ( 1993 ) ; n. gisin , * 52 * , 1657 ( 1984 ) ; helvetica phys . acta * 62 * , 363 ( 1989 ) . p. warszawski , h. m. wiseman , and h. mabuchi , * 65 * , 023802 ( 2002 ) ; p. warszawski and h. m. wiseman , j. opt . b * 5 * , 1 ( 2003 ) ; * 5 * , 15 ( 2003 ) . l. bouten , r. van handel , and a. silberfarb , j. funct . anal . * 254 * , 3123 ( 2008 ) . j. c. crassidis and j. l. junkins , _ optimal estimation of dynamic systems _ ( chapman and hall , boca raton , 2004 ) . k. s. thorne , r. w. p. drever , c. m. caves , m. zimmermann , and v. d. sandberg , * 40 * , 667 ( 1978 ) ; j. b. hertzberg _ et al . _ , nature phys . * 6 * , 213 ( 2010 ) . t. e. duncan , ph.d . thesis , standord university , 1967 ; r. e. mortensen , ph.d . thesis , university of california , berkeley , 1966 ; m. zakai , z. wahrsch . verw . geb . , * 11 * , 230 ( 1969 ) .
mathematical models for scientific and engineering systems often involve with some uncertainties .we may roughly classify such uncertainties into two kinds .the first kind of uncertainties may be called _model uncertainty_. they involve with physical processes that are less known , not yet well understood , not well - observed or measured , and thus difficult to be represented in the mathematical models .the second kind of uncertainties may be called _simulation uncertainty_. this arises in numerical simulations of multiscale systems that display a wide range of spatial and temporal scales , with no clear scale separation . due to the limitations of computer power , at present and for the conceivable future, not all scales of variability can be explicitly simulated or resolved .although these unresolved scales may be very small or very fast , their long time impact on the resolved simulation may be delicate ( i.e. , may be negligible or may have significant effects , or in other words , uncertain ) .thus , to take the effects of unresolved scales on the resolved scales into account , representations or parameterizations of these effects are desirable . these uncertainties are sometimes also called _ unresolved scales _ , as they are not represented or not resolved in modeling or simulation .model uncertainties have been considered in , for example , and references therein .works relevant for parameterizing unresolved scales include , among others .in this paper we consider an issue of approximating model uncertainty or simulation uncertainty ( unresolved scales ) by stochastic processes , and then devise a stochastic scheme for such approximations .we first recall some basic facts about fractional brownian motion ( fbm ) in [ fbm ] .then we discuss model uncertainty and simulation uncertainty in [ modelerror ] and [ sles ] , respectively . finally , we present an example in [ example ] demonstrating our result .this example involves approximating subgrid scales via correlated noises , in the context of large eddy simulations of a partial differential equation .we discuss a model of colored noise in terms of fractional brownian motion ( fbm ) , including a special case which is white noise in terms of usual brownian motion .the fractional brownian motion , indexed by a so called hurst parameter , is a generalization of the more well - known process of the usual brownian motion .it is a centered gaussian process with stationary increments .however , the increments of the fractional brownian motion are not independent , except in the usual brownian motion case ( ) . for more details , see . + + definition of fractional brownian motion : for , a gaussian process , or , is a fractional brownian motion if it starts at zero , has mean zero = 0 ] for all t and s. the standard brownian motion is a fractional brownian motion with hurst parameter . + + some properties of fractional brownian motion : a fractional brownian motion has the following properties : + ( i ) it has stationary increments ; + ( ii ) when , it has independent increments ; + ( iii ) when , it is neither markovian , nor a semimartingale .+ we use the weierstrass - mandelbrot function to approximate the fractional brownian motion .the basic idea is to simulate fractional brownian motion by randomizing a representation due to weierstrass . 
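before describing that construction, note that sample paths consistent with the fbm covariance E[B_H(t) B_H(s)] = (1/2) * (t^{2H} + s^{2H} - |t - s|^{2H}) can also be generated directly by a cholesky factorization of the covariance matrix. the sketch below is this standard alternative, not the authors' method; the small jitter term is only for numerical stability.

    import numpy as np

    def fbm_cholesky(n, T, H, seed=None):
        # sample an fbm path on (0, T] at n points via cholesky factorization of the covariance
        rng = np.random.default_rng(seed)
        t = np.linspace(T / n, T, n)                  # avoid t = 0, where the covariance row vanishes
        cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                     - np.abs(t[:, None] - t[None, :])**(2 * H))
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
        return t, L @ rng.standard_normal(n)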
given the hurst parameter with , we define the function to approximate the fractional brownian motion : where is a constant , s are normally distributed random variables with mean and standard deviation , and the s are uniformly distributed random variables in the interval .the underlying theoretical foundation for this approximation can be found in .figures 1 and 2 show a sample path of the usual brownian motion ( i.e. , ) , and fractional brownian motion with hurst parameter , respectively . _ , width=384,height=288 ] , with ,width=384,height=288 ]we consider a spatially extended system modeled by a partial differential equation ( pde ) : where is a linear ( unbounded ) differential operator , and is a nonlinear function of with and , and satisfies a local lipschitz condition .in fact , may also depend on the gradient of .if this ( deterministic ) model is accurate , i.e. , its prediction on the field matches with the observational data on a certain period of time ] on both sides , we obtain = \int_0^t \sigma(x ) d{b}^h_t = \sigma(x ) b^h_t.\end{aligned}\ ] ] therefore , taking mean - square on both sides , )^2 & = & \sigma^2(x ) t^{2h}.\end{aligned}\ ] ] thus an estimator for is )^2 } \;\ ; , \label{ax2}\end{aligned}\ ] ] which can be computed numerically . by the stochastic parameterization ( [ noise2 ] ) on the sgs term , with determined from ( [ force2 ] ) and from ( [ ax2 ] ) , the les model ( [ les2 ] ) becomes a stochastic partial differential equation ( spde ) for the large eddy solution : with the appropriately filtered boundary condition and filtered initial condition .we present a specific example of stochastic modeling of simulation uncertainty of subgrid scales , in the context of large eddy simulations . we consider the following nonlinear partial differential equation with a memory term ( time - integral term ) : under appropriate initial condition and boundary conditions with constants , on a bounded domain . here is a positive constant .this model arises in mathematical modeling in ecology , heat conduction in certain materials and materials science . the time - integral term here represents a memory effect depending on the past history of the system state , and this memory effect decays polynomially fast in time .the large eddy solution is the true solution looked through a filter : i.e. , through convolution with a spatial filter , with spatial scale ( or filter size or cut - off size ) : in this paper , we use a gaussian filter as in , . we can write with the large eddy term and the fluctuating term .note that .so the sgs term involves nonlinear interactions of fluctuations and the large eddy flows .thus may be regarded as a function of and : .the leads to a possibility of approximating by a suitable stochastic process defined on a probability space , with , the sample space , and probability measure .this means that we treat data as random data as in , which take different realizations , e.g. , due to fluctuating observations or due to numerical simulation with initial and boundary conditions with small fluctuations . in fluid or geophysical fluid simulations, the sgs term may be highly fluctuating and time - correlated , and this term may be inferred from observational data , or from fine mesh simulations .this further suggests for parameterizing the subgrid scale term as a time - correlated or colored noisy term .the increments of fractional brownian motion are correlated in time and hence its generalized time derivative is used as a model for colored noise . 
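the exact constants of the randomized weierstrass-mandelbrot series are not recoverable from the text above, so the sketch below uses one common form of that construction, with normally distributed amplitudes, uniform phases and a geometric frequency ladder, purely as an illustration; the parameters lam, j_min and j_max are tuning choices of this sketch, not values from the paper, and the overall scaling constant is absorbed into the amplitudes.

    import numpy as np

    def fbm_weierstrass(t, H, lam=1.5, j_min=-20, j_max=40, seed=None):
        # W(t) = sum_j lam^{-jH} * xi_j * (cos(lam^j t + phi_j) - cos(phi_j)),
        # xi_j ~ N(0, 1), phi_j ~ U[0, 2*pi): a randomized weierstrass-type approximation of fbm
        rng = np.random.default_rng(seed)
        t = np.asarray(t, float)
        js = np.arange(j_min, j_max + 1)
        xi = rng.standard_normal(js.size)
        phi = rng.uniform(0.0, 2.0 * np.pi, js.size)
        terms = (lam**(-js * H))[:, None] * (
            np.cos(np.outer(lam**js, t) + phi[:, None]) - np.cos(phi)[:, None])
        return np.sum(xi[:, None] * terms, axis=0)

    # usage: t = np.linspace(0.0, 1.0, 1000); path = fbm_weierstrass(t, H=0.25)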
in the special case , we have the white noise .thus we parameterize the subgrid scale term , which is time - correlated , by colored noise as follows : where is the mean component of the subgrid scale term .moreover , the noise intensity is a non - negative deterministic function to be determined from fluctuating sgs data .the subgrid scale term may be inferred from observational data , or from fine mesh simulations as we do here .we represent the mean component in terms of the large eddy solution .the specific form for depends on the nature of the mean of .here we take , where coefficients s are determined via data fitting by minimizing ^ 2dx dt ] on both sides, we obtain = \int_0^t \sigma(x ) d{b}^h_t = \sigma(x ) b^h_t.\end{aligned}\ ] ] therefore , taking mean - square on both sides , )^2 & = & \sigma^2(x ) t^{2h}.\end{aligned}\ ] ] thus an estimator for is )^2 } \;\ ; , \label{ax}\end{aligned}\ ] ] which can be computed numerically . by the stochastic parameterization( [ noise ] ) on the sgs term , with determined from ( [ force ] ) and from ( [ ax ] ) , the les model ( [ les ] ) becomes a stochastic partial differential equation ( spde ) for the large eddy solution : with boundary conditions and filtered initial condition fine mesh simulations of the original system with memory ( [ heat ] ) are conducted to generate benchmark solutions or solution realizations , with initial conditions slightly perturbed ; see fig .these fine mesh solutions are used to generate the sgs term defined in ( [ ef ] ) at each time and space step . the filter size used in calculating taken as .the mean is calculated from ( [ force ] ) via cubic polynomial data fitting ( as discussed in the last section ) , and parameter function is calculated as in ( [ ax ] ) . the stochastic les model ( [ new ] )is solved by the same numerical code but on a coarser mesh .note that a four times coarser mesh simulation with no stochastic parameterization for the original system ( [ heat ] ) does not generate satisfactory results ; see fig .4 . the stochastic les model ( [ new ] )is then solved in the mesh four times coarser than the fine mesh used to solve the original equation ( [ heat ] ) .the stochastic parameterization leads to better resolution of the solution as shown in fig .5 . as in , it can be shown that when two stochastic parameterization terms are close in mean - square norm on finite time intervals , the solutions are also close in the same norm .l. arnold , hasselmann s program visited : the analysis of stochasticity in deterministic climate models . in j .-von storch and p. imkeller , editors , _ stochastic climate models_. pages 141158 , boston , 2001 .birkhuser .t. n. palmer , g. j. shutts , r. hagedorn , f. j. doblas - reyes , t. jung and m. leutbecher . representing model uncertainty in weather and claimte prediction .earth planet .sci . _ * 33 * ( 2005 ) , 163 - 193 .c. penland and p. sura , sensitivity of an ocean model to details " of stochastic forcing . in _ proc .ecmwf workshop on represenation of subscale processes using stochastic - dynamic models_. reading , england , 6 - 8 june 2005 .

model uncertainties or simulation uncertainties occur in the mathematical modeling of multiscale complex systems, since some mechanisms or scales are not represented (i.e., are "unresolved") due to a lack of understanding of these mechanisms or to limitations in computational power. the impact of these unresolved scales on the resolved scales needs to be parameterized or otherwise taken into account. a stochastic scheme is devised to take the effects of unresolved scales into account, in the context of solving nonlinear partial differential equations. an example is presented to demonstrate this strategy. * key words: * stochastic partial differential equations (spdes); stochastic modeling; impact of unresolved scales on resolved scales; model error; large eddy simulation (les); fractional brownian motion. * mathematics subject classifications (2000) *: 60h30, 60h35, 65c30, 65n35
crowd simulation has found its way into computer science, computer visualization, and the computer simulation of building construction and crowd management. with a continuously growing population around the world and an enormous evolution of the different modes of transportation in the last decade, many papers have appeared reflecting an increasing interest in modelling crowd and evacuation dynamics. the simulation of pedestrian flows has thus become an important research area. pedestrian models are based on macroscopic or microscopic behaviour. the evolution and design of any pedestrian simulation model requires a lot of information and data. a number of variables and attributes arise from empirical data collection and need to be considered to develop and calibrate a (microscopic) pedestrian simulation model. for this reason we used different tools and developed different methods to collect microscopic data and to analyse microscopic pedestrian flow. it is important to mention that pedestrian data collection, especially in dangerous situations, is still very much in its infancy. an aim of this study is to establish more clarity and understanding about microscopic pedestrian flow characteristics. manual, semi-manual and automatic image-processing data collection systems were developed. many published studies show that the microscopic speed obeys a normal distribution with a mean of 1.38 m/second and a standard deviation of 0.37 m/second. the acceleration distribution also resembles a normal distribution with an average of 0.68 m/square second. for the evolution and development of microscopic pedestrian simulation models, a lot of data was collected with the help of video recording; the tracking of moving entities in the pedestrian flow, using the coordinates of the head path, was established through image processing. a large trajectory dataset has been stored. for the observation of pedestrian flows in public places a sony camera was used. these observations were made in different places where the pilgrims perform their rituals. many variables can be gathered to describe the behaviour of pedestrians from different points of view. this paper describes how to obtain, from video recording and simple image processing, variables that can represent the movement of pedestrians (pilgrims). moreover, in this work we try to understand several parameters influencing pedestrian behaviour in riots or panic situations. for obtaining empirical data, both automatic and manual methods were used. we have analysed video recordings of the crowd movement during the tawaf at the mosque in mecca during the hajj on the 27th of november 2009. we have evaluated unique video recordings of the roughly 105 m x 154 m mataf area taken from the roof of the mosque, where up to 3 million muslims perform the tawaf and say rituals within 24 hours. both the microscopic video data collection and the microscopic pedestrian simulation model generate a database called the pedflow database.
the properties and characteristics that are capable of explaining microscopic pedestrian floware illustrated .a comparison between average instantaneous speed distributions describing the real world obtained from different methods , and how they can be used in the calibration and validation of the simulation tools , are explained .typically , manual counting was performed by tally sheet or mechanical or electronic count board to collect density and speed data for pedestrian .pedestrian behaviour studies are collected by manual observation or video recording in different public places like corridors side walks and cross walks .the effectiveness of the data ( pedestrian speed ) collected on any observed area is strongly related to the number of pedestrians in the flow .the relationship between speed , flow , and pedestrian density for a crowd population or human group has been published in many fundamental diagrams developed by fruin and others .though for many reasons the method has been used to detect and count vehicles in automatic way can not be used to detect pedestrians , since this system has been evaluated through pneumatic tube or inductance loops .as we can deduce from later work on this technology the possibility of applying this method to reproduce trajectory and motion prediction is still in a discussion phase .other approaches use a neural network framework recursively to predict pedestrian motion and trajectory .however the pedestrian trajectories in this system are calculated with incorrect simplifications .in particular , only the nearest neighbour trajectories are considered .the main shortcoming of such an estimation is that there is no uncertainty in this prediction , moreover a comparison of different path prediction shows this is still far from the reality in order to predict that all objects will follow the same set of paths exactly . a method which allowed people counting based on video texture synthesis and to reproduce motion in a novel way was introduced by heisele and woehler .the method works under the assumption that people can be segmented from the moving background by means of appearance or motion properties .the scene image is clustered based on the color and position ( r , g , b , x , y ) of pixel .the appearance of each pixel in a video frame is modelled as a mixture of gaussian distributions .a algorithm is used that matches a spherical crust template to the foreground regions of the depth map .matching is done by a time delay neural network for object recognition and motion analysis .a significant task in video intelligence systems is the extraction of information about a moving objects e.g. detecting a moving crowd with pedcount ( a pedestrian counter system using cctv ) was developed by tsuchikawa .it extracts the object using the one line path in the image by background subtraction to make a space - time ( x - t ) binary image .the direction of each travelling pedestrian is realized by the attitude of pedestrian region in the x - t image .they reported the need of background image reconstruction due to image illumination change . 
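a minimal per-pixel background model in the spirit of the mixture-of-gaussians approach mentioned above can be sketched as follows; here the mixture is reduced to a single running gaussian per pixel for brevity, and the learning rate and threshold are illustrative choices, not values from the cited systems.

    import numpy as np

    def update_background(frame, mean, var, alpha=0.02, k=2.5):
        # flag pixels that are unlikely under the background gaussian as foreground,
        # and update the background statistics only on background pixels
        frame = frame.astype(float)
        fg = np.abs(frame - mean) > k * np.sqrt(var)
        mean = np.where(fg, mean, (1.0 - alpha) * mean + alpha * frame)
        var = np.where(fg, var, (1.0 - alpha) * var + alpha * (frame - mean)**2)
        return fg, mean, var

illumination changes break such a simple model, which is why a variance-based discrimination between moving objects and illumination change, as described next, is needed.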
an algorithm to distinguish moving object from illumination change is explained based on the variance of the pixel value and frame difference .the electronic and digital revolution in video techniques during recent years has made it possible to gather detailed data concerning pedestrian behaviour , both in experiments and in real life situations .the big challenge is to develop a new efficient method of defining and measuring basic quantities like density , flow and speed .basic quantities of pedestrian dynamics are the density [ 1/m in an area and the velocity [ m / s ] of persons or a group of persons , and the flow through a door or across a specific line [ 1/s ]. the measurements also yield mean values of these quantities .the task is to improve the given methods such that they allow to go fairly close to the real data of the crowd quantities .the methods presented here are based on video tracking of the head from above .note that tracking of e.g. a shoulder or the chest might be even better , though more difficult to obtain .the density distribution knowledge in a very crowded area allows us to draw a so called density map to show us congestion directly as regions of high density .the relationship between the pedestrian density and the pedestrian maximum walking speed are formalized into a graph known as the fundamental diagram .since pedestrians move slower in a region of high density , the simulated particles should update their speed with the surrounding circumstances to maximize their rate of progress towards their goals .tawaf observations at the haram mosque in mecca were made during hajj 2009 by mr .faruk oksay .the mataf area has 10 entrances / exits .the flow of the tawaf is controlled .all pilgrims begin and end their tawaf at the same place ( see fig.[fig : haramentrances-1 ] ) .the number of pilgrims during this period is sufficient to observe the behaviour of high density crowd dynamics .figure [ fig : haramentrances-1 ] . shows the main gate doors , side entrances , stairs to the mataf open air of the haram .all observations took place on friday november 27th 2009 corresponding to 10th of dhu al - hijjah1430 hijri in the afternoon . during the total observation period of three hours , three prayers ( midday , asr and maghreb ( sunset - prayer ) ) were performed , where in this time the mataf area comes to a standstill ( see fig . [fig : prayer-4 ] ) .our video observations show that the pilgrims have the desire to be near the kaaba .therefore approximately 70 percent ( visually detected on video ) of the pilgrims perform their tawaf movement near the kaaba wall , which causes a high density in this area . in figure[ fig : prayer-4 ] , one can see all of the pilgrims perform the prayer ritual in the holy mosque in mecca .the tawaf around the kaaba is a periodic movement for the time between two prayers .the observed number of pilgrims performing their tawaf ritual at the mataf area increases slowly after every prayer until the mataf attains it s maximum capacity ( see fig .[ fig : mataf3 ] ) .figure [ fig : mataf3 ] shows a typical pedestrian movement in the mataf area over daytime . during prayer times individuals stand still and therefore movement equals approximately zero .the fluctuations in the velocity flow are created by the turbulence in the pedestrian flux .note that the average local density in a specific location in the mataf area exceeded 8 persons / m during the hajj periods ( see fig .[ fig : densitydistribution-5 ] and [ fig : density-1 ] ) . 
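the basic quantities defined above can be computed directly from tracked head coordinates once these are expressed in metres on the ground plane; a minimal sketch (the measurement area and sampling interval are placeholders):

    import numpy as np

    def local_density(positions, area_bounds):
        # persons per square metre inside a rectangular measurement area;
        # positions: (n, 2) array of head coordinates at one instant, in metres
        x0, x1, y0, y1 = area_bounds
        inside = ((positions[:, 0] >= x0) & (positions[:, 0] <= x1) &
                  (positions[:, 1] >= y0) & (positions[:, 1] <= y1))
        return inside.sum() / ((x1 - x0) * (y1 - y0))

    def mean_speed(track, dt):
        # mean speed of one head trajectory sampled every dt seconds; track: (n, 2) array in metres
        steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
        return steps.sum() / (dt * (len(track) - 1))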
our first goal is to identify new methods and create a test system capable of extracting pedestrian movement information from video , similar to that collected by our hd - cameras in the hajj-2009 , such that any movement can be analysed to spot suspicious activity .this task to collect pedestrian data and extract pedestrian motion from video sequences required an involvement and development of appropriate methods , followed by further analysis of this data to identify emergent motion or crossing trajectories .the secondary goal is to identify the limitations of the approach including the system and data requirements for the techniques to work more effectively .more specific , the project goals are : * develop a framework for video and image analysis , * develop an approach and relevant diagnostic software to collect movement data from video , * identify the requirements for such methods to work effectively , such as image quality , resolution and orientation , * identify how to interpret movement information , * interpret the movement data and examine abnormal behaviour , * design and produce a working implementation that demonstrates the above goals , * identify approaches that could further improve the system .there are different techniques developed to extract information describing the position of pedestrians in a location , but not all of them are appropriate for detecting and pursuing pedestrian movement under different and extremely weather conditions . in their published work , papageurgiou and poggio developed a system attempting to recognize human figures based on pixel similarities through a large training set of figures under various light and weather conditions . to identify the movement of the figures , the system analyses the similarity between matches of consecutive frames .this method works quite well when the training set is large , but requires a high computational efficiency which achieves processing rates of 10 hz .the study shows that accurate recognition can be done with coarse image data .another approach to estimate crowd density is based on texture analysis .velastin et al . assumed that crowds with high density possess texture properties .the proposed method , texture features were computed for the whole image and applied to crowd density estimation . in particular , all displayed textures , like wavelets and the gray level dependence matrix , were used to estimate crowd density .the results exhibit , how effective statistical analysis of texture display is compared to neural networks when measuring crowd density .unfortunately , this system examines only static images and can not cover crowd motion , but the techniques can be used to track pedestrian movements .other strategies based on image segmentation were pursued by heisele and woehler , where raw data is filtered to split the image into segments , which are then analysed .those images that match particular shapes are analysed further .this approach allows to distinguish different images with common color and luminescence .the required data on pedestrian behaviour ( e.g. density - effect , shock - waves - effect , ... ) in the haram can be done from our video recordings .all observed effects can be analysed by simply watching the recorded videos .but if we want to extract data like walking speeds from such observations we have to examine the videos frame by frame .this is very time consuming . as a result of this , andthe need for more efficient data , the idea arose to use an automatic detection system . 
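as a toy version of the texture-based density estimation cited above (low-density crowds show coarse texture, high-density crowds finer texture), the sketch below computes two simple per-cell texture statistics that could be fed to any classifier trained on labelled densities; it is not the self-organizing-network estimator of the cited work, and the cell size is an arbitrary choice.

    import numpy as np

    def texture_density_features(gray, cell=64):
        # per-cell mean gradient magnitude and grey-level variance as coarse density proxies
        gy, gx = np.gradient(gray.astype(float))
        edge = np.hypot(gx, gy)
        h, w = gray.shape
        feats = []
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                block = gray[i:i + cell, j:j + cell].astype(float)
                feats.append((edge[i:i + cell, j:j + cell].mean(), block.var()))
        return np.array(feats)

such simple statistics are, of course, far from a complete detection system.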
at that timeno sufficient system was available for the detection of human bodies , therefore some essential requirements were formulated . from the requirements we derived an idea to formulate an image processing system with the help of other programs , such as optical flow with opencv ( http://opencv.org/ ) and quest3d ( http://www.quest3d.com/ ) .the materials used for this test are videos recorded at an outdoor piazza of the haram mosque in mecca where people congregated at different times during one day , simulating a surveillance application .the data content had a wide range of crowd densities , from very low to very high .three different data - sets , labelled morning observation , afternoon observation and combined observation ( before and after the prayer times ) were used .each data set had 20 selected images with high resolution .examples of images are shown in figure [ fig : muster-1 ] and [ fig : grid-1 ] . in order to collect pedestrian data and to study pedestrian traffic flow operations on a platform in detail , observations were also made from a platform of the haram mosque in mecca . these observations concerned pilgrim walking speeds and density distributions on the mataf area and ( individual ) walking times as functions of the distance from the kaaba wall .the estimation of crowd density is an important criterion for the validation of our simulation tools .processing is done in three levels .* existing footage is loaded on a 3d program as a backplate . * from several provided 2d- architectural drawings we build a 3d model of the mosque . * a virtual camera has to be matched in position , rotation and focal length to the original camera so that the features of the 3d - model match the features of the filmed mosque . as the dimensions of the mosque are known , we then establish a grid of regular cells on the mataf area , each one of which has a size of 5mx5 m ( see fig . [fig : raster-1 ] ) . through image editing software , we start a manual counting process . this regular grid is used to observe the density behaviour over all of the mataf area , from the nearest range to the kaaba wall up to outside of the mataf and the accumulation process ( by the black stone and maquam ibrahim ) .the results of this investigation are shown in figures [ fig : densitydistribution-5 ] ( a ) , ( b ) , ( c ) and ( d ) and illustrate us the behaviour of the pilgrim density on the mataf area at different times during the day .5 m . ] with a new computer algorithm developed within this investigation , where the mataf area is divided in regular cells . 
the number of pedestrians in every cell as function of time is determined through repeating the counting process many times .the average value is identified as local density .the data extracted from the videos allowed us to determine not only densities in larger areas , but also local densities , speeds and flows .as an example the density distribution on the mataf area is shown in figure : [ fig : densitydistribution-5 ] .the data was obtained by semi - manual evaluation .[ [ dependence - of - the - density - distribution - on - the - mataf - as - function - of - time ] ] dependence of the density distribution on the mataf as function of time + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : densitydistribution-5 ] shows density decline curves for different distances from the kabaa in a specific time .the curves indicate that the local density amount vary strongly over the ( 0 < x < 40 m ) range .figure:[fig : density-1 ] shows the pedestrian density distribution on the mataf area as a function of the position and time .one clearly recognizes density waves , with maximum density package near the kaaba wall .there the average local density can reach a critical value of 7 to 8 persons / m .the congested area increases the local density to a critical and dangerous amount . as a consequencethe pedestrians begin to push to increase their personal space and create shock - waves propagating through the crowd , which can be seen as density waves , or density packages .the density map illustrates how the pedestrian density decreases from the inside to outside of the mataf area , ( see fig.[fig : densitymap-1 ] ) . as we have mentioned that in the mataf area pedestrians move in the restricted space , the layout is gradually painted in different colors .the color of every point of the space corresponds to the current density in this particular area .the density map is constantly repainted according to the actual values : when the density changes in some point , the color changes dynamically to reflect this change . in case of zero density the area is not painted at all ( see fig . [fig : densitymap-1 ] ( a ) , ( b ) , ( c ) and ( d ) ). during the rush hour in a hajj period the local density in the mataf area reaches the maximum as we can see in the following figures [ fig : density-1 ] ( d ) and [ fig : densitymap-1 ] ( d ) .the local density can reach 8 to 9 persons / m in a specific time during the day .the maximal density concentrates near the kaaba wall .[ [ densities - over - time - and - space ] ] densities over time and space + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we observe the density behaviour on the mataf area at different times during the day , before and after the prayer , and we compare this density with the simulation density results .the maximum registered density was 7 to 8 persons / m and this represents a high crowd density .the results of the estimation based on the statistical method , presented in figures [ fig : densitydistribution-5],[fig : density-1 ] and [ fig : densitymap-1 ] , reached a mean of 92 percent correct estimations .it is possible to verify that the results were quite good for all evaluated images except for the one made up of high density crowd images , which reached only 84 percent correct estimations . 
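counting heads into the 5 m x 5 m cells and dividing by the cell area gives the local density map described above; a minimal sketch, with positions assumed to be in metres on the mataf floor:

    import numpy as np

    def density_map(positions, extent, cell=5.0):
        # local density on a regular grid of cell x cell metre squares;
        # positions: (n, 2) array of pedestrian coordinates, extent: (xmin, xmax, ymin, ymax)
        x_edges = np.arange(extent[0], extent[1] + cell, cell)
        y_edges = np.arange(extent[2], extent[3] + cell, cell)
        counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                      bins=[x_edges, y_edges])
        return counts / cell**2                      # persons per square metre in each cell

averaging such maps over repeated counts at different times yields the time-dependent local density curves discussed here.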
in the mataf area , near the black stone , the pilgrim density reached over 9 persons / m .for this reason it is very difficult to recognize and track every head and as a result , a 100 percent correct estimation would be very difficult .all statistical results illustrating the density distribution at the mataf area at different time intervals are demonstrated in the figure [ fig : densitydistribution-6 ] .this part of the dissertation considers the role of automatic estimations of crowd density and their importance for the automatic monitoring of areas where crowds are expected to be present .a new technique is proposed which is able to estimate densities ranging from very low to very high concentrations of people .this technique is based on the differences of texture muster on the images of crowds .images of low density crowds exhibits rough textures , while images with high densities tend to present finer textures .the image pixels are classified in different texture classes , and statistics of such classes are used to estimate the number of people . the texture classification and the crowd density estimationare based on self - organizing neural networks .results obtained estimating the number of people in a specific area of the haram mosque in mecca are presented in figure [ fig : colordensitydistribution-1 ] ) . in the latter paragraphs we focus on crowd density estimation for several reasons .according to the crowd disasters study by helbing and johansson , one of the most important aspects to keep a crowd safe is to predict and identify areas with high density crowds preventing large crowd pressures to be built up .areas where crowds are likely to build up should be identified prior to the event or operation of the venue .this is important as crowds usually exist in certain areas or at particular times of the day .places where crowd density rises up over time are likely to congest and need careful observations to ensure the crowd safety .basically , crowd density surveillance and estimation can be a good solution for management and controlling the crowds safety .the results of the estimations obtained during the tests allow us to consider both methods successfully .while the statistical method reached quite good estimation rates ( around 92 percent ) for most groups , the spectral method illustrated small deviations between the best and the worst estimations , reaching on average almost the same rates of correct estimation obtained by the statistical method .as speeds are hard to observe , walking times were measured , from which walking speeds were derived .in addition to walking times and pedestrian densities other variables needed to be considered to complete the input of the simulation model ( such as the number of in and out going pilgrims and the configuration of the structure during the rush hour at the hajj period ) .the observables are the walking time , velocities and the corresponding densities of the pilgrims performing their tawaf and say .the movements of the pilgrims going in and out of the haram give us data to calculate the flux related to the tawaf .the distribution of both in and out going pilgrims over the haram can be derived from this data . the second type of observation concerns individual walking times . 
in order to measure the pilgrims walking times in and out of the haram , pilgrims were recorded from the moment they started walking from one spot to another , either on the piazza or going up the stairs .the start and duration of activities , such as tawaf or say , were measured also .finally , locations of origin , destination and possible activities of the pilgrims were registered .to do this , the piazza is divided into small areas with a length of 5 meters .we also recorded the movements of the pilgrims at specific moments , such as prayer times when the number of pedestrians increases dramatically .therefore , cumulative flow curves can be constructed , out of which densities can be derived .these curves can be compared with the reference curves of predtetschenski - milinski .data was collected on a specific subject group of pedestrians who appeared to be 40 years of age or older . on the roof of the mataf area we selected our tracking subjects , consisting of adult men , women and people in wheelchairs .the following individuals were specifically not considered : * children under 13 years of age , * pedestrians carrying children , heavy bags , or suitcases , * pedestrians holding hands or assisting others across the mataf , * pedestrians using a quad pod cane , walker , two canes , or crutches . to accurately quantify the normal walking speeds of the various subject groups , pedestrians who exhibited any of the following behaviourwere also not considered : * crossing of the mataf path diagonally , * stopping or resting in the mataf area , * entering the roadway running ( anything faster than a fast walk ) , the pedestrian sex ( male or female ) of each individual in the mataf area was recorded , as well as whether he or she was walking alone or in a group .the group size was also noted when applicable .a group was defined by two or more pilgrims walking the mataf trajectory at about the same time , regardless of whether or not they were apparently friends or associates . in the mataf area, the pedestrian groups can reach 30 pilgrims walking together in the pedestrian stream .in addition , subjects paths were monitored to determine when they started and ended their tawaf .being inside the mataf was defined as being within or on the painted tawaf walking lines .other pedestrian behaviour was recorded when if occurred : * confusion ( hesitation , sudden change in direction of travel or change of point of interest ) exhibited before walking , * confusion exhibited after entering the mataf trajectory , * cane use , * following the lead of other pedestrians , * stopping in the walking path during the tawaf movement , * difficulty going into mataf , * difficulty going out of the mataf .several methods were developed to check the accuracy and performance of walking speed estimation abilities of the observers .first , the walking speed was measured at the same time by three observers , then correlations between the estimates of all observers were determined . in particular , the walking velocity of one pilgrim was measured by the three observers and the mean value was taken .the results of these verification procedures are discussed after the next section . 
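the cumulative flow curves mentioned above can be built directly from the recorded instants at which pedestrians cross a reference line (for example an entrance); the flow is then the time derivative of the cumulative count. the crossing instants in this sketch are hypothetical.

    import numpy as np

    def cumulative_flow(crossing_times_s, t_grid):
        # cumulative count N(t) of pedestrians that have crossed the reference line by time t
        times = np.sort(np.asarray(crossing_times_s, float))
        return np.searchsorted(times, t_grid, side="right")

    t = np.arange(0.0, 600.0, 1.0)
    N = cumulative_flow([3.2, 4.8, 5.1, 9.7, 12.0], t)   # hypothetical crossing instants
    flow = np.gradient(N, t)                             # flow in persons per second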
from our video recordingswe choose places between two minarets as references , ( see fig .[ fig : minaret-1 ] ) .as the dimensions of the mosque were known , we then established a grid of regular cells covering all of the mataf area , each one having a size of 5mx5 m ( see fig .[ fig : grid-1 ] ) .the distance between the two minarets is known .pedestrian crossing times were measured with a digital timer and an electronic stopwatch was implemented and synchronized with the timer of the video recorder .the watch was started as the subject stepped off the first minaret and stopped when the subject stepped out on the opposite minaret after crossing all the distance between the two minarets .are measured .since the distance between the two minarets is known from the architectural plan of the haram , the average of the local pedestrian velocities can be determined . ] from the roof of the mosque every pedestrian can be identified . to establish the ability of the field observers to identify the fitness level or the age of pedestrians with high accuracya simple verification procedure was performed .the age estimation and the level of fitness of the pedestrians was based on their walking speed .it is a physio - medical fact that older pedestrians walk more slowly than younger ones ( this is easily supported by field data ) , however , the published or already existing data on walking speeds and start - up times ( i.e. the time from the beginning of a tawaf movement until the pedestrian steps off the mataf ) have many shortcomings .here we consider the complicated movement of the tawaf and the human error rate of the observer .the walking speed on the mataf area can be affected by many factors , one of the relevant factors is the age of the pedestrian .this demonstrates that the observations were quite good at identifying older pedestrians or pedestrians with fitness deficiency or physical health problems .a digital stopwatch was integrated with the video recording sophisticated for the measurements of pedestrian crossing times .the crossing times of the same pilgrims were measured during five rounds of the tawaf and the average value was determined . 5 m . with the help of the regular cells and the distance between two minarets in the haram ( see fig .[ fig : minaret-1 ] ) the ( individual ) walking times are determined and the average of the local speeds is calculated .the average walking speed for male pedestrians is 1.37 m / s , female 1.22 m / s and for people moving on wheelchairs 1.534 m / s . 
]this research also examined the impact of the building layout on the pedestrian speed distribution and the pedestrian density of pilgrims performing the tawaf movement around the kaaba .the set of data of pedestrian walking speeds which were obtained through analysing video recording using a set of statistical techniques are displayed in figures [ fig : velocitydismen-1 ] ( a ) , ( b ) and ( c ) .the results revealed that walking speed seems to be following a normal distribution no matter of male , female , older or younger .the average speed of young people is dramatically larger than that of older people , and the average speed of male is slightly larger than that of female .the width of the obtained curves is related to the different standard deviations .the mean computed walking speed represents the speed that 85 percent of pedestrians did exceed .a total of 250 pedestrians were observed .included were 100 male pedestrians of about 60 years of age , 100 women pedestrians and 50 wheelchair pedestrians .this data describes all of the pedestrians observed : those walking in the center of the stream and those walking by the edge of the mataf trajectory .as is subsequently described , those who were walking by the edge of the mataf tended to walk more quickly . all observed pedestrians moved in a rotational motion around the kaaba counter - clockwise ( tawaf ) , in compliance with the pilgrim stream .the mean walking speed for male pedestrians was 1.37 m / s and 1.22 m / s for female pedestrians . in conjunction with pilgrims old , the mean walking speed for younger pedestrians was 1.48 m / s and 1.20 m / s for older male pedestrians .the results revealed that the average walking speed for young women are 1.32 m / s and 1.12 m / s for old women .this means * young male pedestrians had the fastest mean walking speeds [ 1.48 m / s ] and older females had the slowest [ 1.12 m / s ] . the differences between young men and young women [ 0.16m / s ] and between older men and older women [ 0.1 m / s ] , this result shows a little deviation that can be traced back to the fitness level of pedestrian or other factors , in the normal condition are approximately the same .the mean walking speed for the younger pedestrians ranged from 1.37 to 1.57 m / s across all conditions , with an overall mean speed of 1.48 m / s .the means for the older pedestrians range from 0.97 m / s to 1.26 m / s , with an overall mean speed of 1.18 m / s . for design purposes a mean speed of 1.33 m/ s appeared appropriate ; * locations by the edge of the mataf had faster walking speeds because such locations has a lower pedestrian density .it is clear that the pedestrians near the kaaba had a short walk path but in this places densities of 7 to 8 persons/ m can be exceeded , making the movement of pilgrims very slow and turbulent ; * places situated further away from the kaaba wall also tended to be associated with faster walking speeds .it is known from other fundamental diagrams , that pedestrians tend to walk faster along a free walkway . 
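a small sketch tying together the two measurements described above: individual walking speeds obtained from crossing times over the known inter-minaret distance, and a per-group normal fit of the resulting speed samples. the distance value and the sample crossing times below are illustrative placeholders, not the measured values from the architectural plan or the observations.

    import numpy as np

    def walking_speeds(crossing_times_s, distance_m):
        # individual walking speeds from measured crossing times over a stretch of known length
        return distance_m / np.asarray(crossing_times_s, float)

    def fit_normal(speeds):
        # the reported speed distributions are normal, so the fit is just sample mean and std
        s = np.asarray(speeds, float)
        return s.mean(), s.std(ddof=1)

    speeds = walking_speeds([30.2, 31.5, 28.9, 33.0], distance_m=42.0)   # illustrative numbers
    mu, sd = fit_normal(speeds)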
as might be expected the walking speeds associated with various factors .the motion of a single individual at any given time and the direction and speed result in a long list of possible ( and very likely conflicting ) forces and circumstances .the data taken show that each of the locations and surrounding factors have a significant effect on the behaviour and walking speed of the pilgrims on the mataf area , not forgetting that the age of the pedestrians play a significant role on the tawaf movement and density peaks and jams are caused by pilgrims of age 70 and more . for approximately one half of the location, the factors examined there also showed an important correlation between pedestrian age , the location and the mean walking speed of the pilgrims .this funding is consistent with results published by knoblauch .the walking speed of pilgrims shows statistically significant variations across a variety of sites , times and environmental conditions ( pedestrian density on the mataf area ) . on the roof of the mosquethe pilgrim density is low and every pedestrian can walk with his desired velocity .however , the mean walking speed data is explicit by clustered for both pedestrians sex , men and women , independent of the age of the pilgrims are considered . thereexist numerous methods that track the movement of single individuals by inspecting their orientation and limb positions .this section highlights a real - time system for pedestrian tracking from sequences of high resolution images acquired by a stationary ( high definition ) camera .the objective was to estimate pedestrian velocities as a function of the local density . with this systemthe spatio - temporal coordinates of each pedestrian during the tawaf ritual were established .processing was done through the following steps : * existing footage was loaded onto a 3d program as a backplate . * from several provided 2d- architectural drawings , a 3d model of the mosque was built . *a virtual camera was matched in position , rotation and focal length to the original camera so that the features of the 3d - model matched the features positioned on the filmed mosque .* individual features were identified by eye , contrast is the criterion * we do know that the pilgrims walk on a plane , and after matching the camera we also obtained the height of the plane in 3d - space from our 3d model . *a point object was placed at the position of a selected pedestrian . during the animation we set multiple animation - keys ( approx every 25 to 50 frames ( equals 1 to 2 seconds ) ) for the position , so that the position of the point and the pedestrian overlay nearly all the time .* by evolving the point with time we obtained the distance travelled , by measuring the distance from frame to frame .we also knew the time elapsed from the speed per frame , and hence the speed could be calculated . from figures [ fig : walkingspeed-1 ] and [ fig : walkingspeed-3 ]we see that the edge of the mataf moves faster than the center , this phenomenon being known as the edge effect . the edge effect occurs when the edges of a crowd move faster than the center of the crowd . the density becomes higher and higher as one moves from the edge of the mataf towards the center .this phenomenon is explained by the fact that all pilgrims want to be near the kaaba wall . as a result, we find the density near the kaaba to be the maximum density .this data can be used in validating of simulation tools . 
the mean walking speed for a group of pedestrians moving in the pilgrim stream around the kaaba was 1.0816 m/s at the edge of the mataf and 0.3267 m/s for the same groups moving inside the mataf . these findings agree well with the statistical results discussed in a previous section . an essential check is to compare the mean values and variances of the walking speeds between the observations and the simulation results . a distinction will be made between walking speeds inside and outside of the mataf platforms . we made a comparison between our plots derived from the video observation and the fundamental diagrams of ( cf . [ fig : comparison-1 ] ) :
* walking speeds :
* * on the edge of the mataf ( free flow speed ) , where the pedestrian density is lower than 3 persons/m .
* * in the centre of the mataf .
* * inside the mataf near the kaaba wall , where the pedestrian density attains extreme levels ( 8 - 9 persons/m ) .
[ caption of fig : comparison-1 : average of the local speeds as a function of the local density . our own data are shown as red points ; the blue points correspond to the data obtained by ( pm ) . the difference in velocity at lower densities can be explained by the fitness level of pedestrians . ]
all well-known fundamental diagrams predict the same behaviour and have the same properties : speed decreases with increasing density . the discussion above indicates that there are many possible reasons and causes for the speed reduction . for example , there is a linear relationship between speed and the inverse of the density for pedestrians moving along a straight way . however , the pedestrian walking speed can be affected by internal and external factors ( such as the amount of pedestrian inflow and outflow as well as the configuration of the infrastructure ) , not to forget the physiology of the human body . it is found that individuals walk faster in outdoor facilities than in corridors . according to predtechenskii and milinskii ( pm ) the average walking speed depends on the walking facility . weidmann likewise confirmed a linear relationship between the step length of walking pedestrians and the inverse of the density : a small step length means a low pedestrian velocity , caused by the reduction of the available space with increasing density . the discussion above shows that there are many possible factors influencing the fundamental diagram .
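the empirical fundamental diagram used in this comparison can be built by binning paired ( density , speed ) observations and averaging the speeds per density bin . the following sketch is our own illustration ; the bin edges and the synthetic data are assumptions , not the observed mataf data .

```python
# a minimal sketch for building an empirical fundamental diagram: local speed
# observations are binned by local density and averaged per bin.
import numpy as np

def fundamental_diagram(density, speed, bins):
    """density, speed: 1d arrays of paired observations; bins: density bin edges."""
    density = np.asarray(density, dtype=float)
    speed = np.asarray(speed, dtype=float)
    centers, means = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (density >= lo) & (density < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            means.append(speed[mask].mean())
    return np.array(centers), np.array(means)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rho = rng.uniform(0.5, 8.0, 2000)  # hypothetical local densities [persons per m^2]
    v = np.clip(1.4 * (1.0 - rho / 9.0) + rng.normal(0, 0.05, rho.size), 0.05, None)
    centers, means = fundamental_diagram(rho, v, bins=np.arange(0.5, 8.5, 0.5))
    for c, m in zip(centers, means):
        print(f"density {c:4.2f} -> mean speed {m:4.2f} m/s")
```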
to identify these factors , it is necessary to exclude as many influences of the measurement methodology and short-range fluctuations from the data as possible . figure [ fig : comparison-1 ] shows the average local speed as a function of the local density half an hour after the mid-day prayer ( t = t ) . our own data are shown as red points ; the blue points correspond to the predtechenskii - milinskii fundamental diagram . the analysis of the mataf data moreover indicates that the reduction of the available navigation space is the main cause of the speed reduction with increasing density in pedestrian movement . the small deviation in pedestrian walking speed at lower density can be explained by the fitness level of the pedestrians . in the literature there is a large number of approaches to the detection and tracking of moving objects from video images . spatio-temporal analysis has , in the past , been used to recognize walking persons , where subspaces in the video are treated as spatio-temporal volumes . application of a fourier transform to these data can then identify data relating to movement across the volume . this approach allowed pedestrian trajectories to be reconstructed from video with high precision , taking advantage of the methods and of highly developed computational technology . a common approach to detect movement is to produce comparison images ( an image representing the differences between two images ) , since this is computationally efficient . these comparison images can then be processed further to estimate movement vectors that describe the motion of drop-shaped objects captured in the respective images . murakami and wada demonstrate another method , which forgoes the difference frame and instead compares the properties of drops identified in consecutive frames : a drop that is close to the position of a drop in a previous frame , and shares similar dimensions , is likely to correspond to the same object . motion vectors are also used to find drop segmentations , which are subsequently merged or separated for the purpose of analysis . the same approach is applied to a 2d image to determine movement in 3d space . extrapolating the movement of pedestrians in 3d space from a 2d image allows for a far greater understanding of the interactions between entities , but does require exceptional calibration of the equipment for complete accuracy . the murakami and wada approach can be used to analyse low-quality video streams thanks to the frame-differencing algorithm and some trigonometry . determining 3d motion does require precise knowledge of the angle and position of the camera , in addition to the basic topology of the scene being analysed , but 2d paths are easy to identify without these details ( see fig . [ fig : pilgrimspaths-2 ] ) .
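a minimal frame-differencing sketch of the comparison-image idea discussed above is given below . it is our own illustration and not the system used in this work ; it assumes opencv ( cv2 ) is available , and the thresholds , kernel sizes and the input file name are illustrative .

```python
# a simplified frame-differencing sketch of the comparison-image approach
# (illustrative only; thresholds, kernel sizes and file name are assumptions).
import cv2

def moving_blobs(prev_frame, frame, thresh=25, min_area=50):
    """return bounding boxes of 'drops' that changed between two frames."""
    g0 = cv2.GaussianBlur(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    g1 = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(g0, g1)                       # the comparison image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    # index [-2] keeps compatibility with both opencv 3.x and 4.x return values
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    cap = cv2.VideoCapture("mataf_sequence.mp4")     # hypothetical input file
    ok, prev = cap.read()
    while ok:
        ok, cur = cap.read()
        if not ok:
            break
        print(moving_blobs(prev, cur))
        prev = cur
```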
in figure [ fig : pilgrimspaths-2 ] we show the paths of individuals within the crowd . one clearly recognizes that the movement around the kaaba is not a circular movement . the tracking of a single individual in the pilgrim stream indicates some oscillation around the main path of the individual , caused by the physical repulsive and attractive forces acting on the individual . physical forces become important when an individual comes into physical contact with another individual or an obstacle . when a local density of 6 persons per square metre is exceeded , free movement is impeded and the local flow decreases , causing the outflow to drop significantly below the inflow . this causes a higher and higher compression in the crowd , until the local densities become critical in specific places on the mataf platform . in the mataf everything is dense and we have a compact state : the pilgrims have body contact in all directions and no influence on their own movement ; they float in the stream . this forms structures and turbulences in the flow , which can be observed well in our video recordings , as can the density and the velocity . the observed hajj rituals , especially the mataf , showed some critical points in the motion of the pilgrims to which we had not paid much attention before , for example the edge effect , the density effect , the shock-wave effect etc . ; phenomena like these restrain the motion and are very important to consider . our video analysis shows that the pedestrian density decreases with the distance from the kaaba wall , cf . figures [ fig : densitydistribution-5 ] , [ fig : density-1 ] , and [ fig : densitymap-1 ] . this matches the real behaviour of pilgrims during the mataf ritual ( all pilgrims want to be near the kaaba wall ) . our video analysis of the mataf area indicates that , even at extreme densities , the average local speeds and flows stay limited . the extremely high local density causes forward and backward moving shock-waves , which could be clearly observed in our video . we can see a kind of oscillation in the pilgrims' paths around the kaaba ; this oscillation is caused by shock-waves and is affected by the repulsive forces between the pedestrians in high-density crowds ( see fig . [ fig : pilgrimspaths-2 ] ) . one of the significant challenges in the planning , design and management of public facilities subject to high-density crowd dynamics and pedestrian traffic is the shortcoming in the empirical data . data on crowd behaviour collected with different techniques ( image processing ) and the analysis of ordered image sequences obtained from video recordings are increasingly desirable in the design of facilities and in long-term site management . we have investigated the efficiency of a number of techniques developed for crowd density estimation , movement estimation , and the detection of critical places and events using image processing . in the above sections and within this investigation we have presented techniques for background generation and calibration to improve the previously developed simulation model . even though extracting information about human characteristics from video recordings may still be in its infancy , it is important to mention that the field of human motion analysis is large and has a history traced back to the work of hoffman and flinchbaugh . in the field of pedestrian detection techniques , and more broadly in computer vision , many open problems have accumulated .
in human motion analysis , and also in the problem of the detection of moving objects , other problems remain , namely to recognize , categorize , or analyse the long-term pattern of motion . an inspection of the literature of the last decade indicates increasing interest in event detection , video tracking and object recognition , because of the clear application of these technologies to problems in surveillance . recently many methods have been developed to extract information about moving objects , such as speed and density . almost all of these systems require complex intermediate processes , such as reference points on the tracked objects or image segmentation . one limitation of the current system is that detection failures in these intermediate steps will lead to failure of the entire system . improving the algorithm so that it can reproduce traffic flow and help in microscopic pedestrian data collection is essential . moreover , automatic video data collection will greatly enhance the performance of such a system at higher pedestrian traffic densities . i would like to express my sincerest thanks and gratitude to prof . g. wunner for a critical reading of the manuscript and for his important comments and suggestions to improve it . many thanks to dr . h. cartarius for his support during the writing of this work .
in this paper we present a number of methods ( manual , semi-automatic and automatic ) for tracking individual targets in high-density crowd scenes where thousands of people are gathered . the necessary data about the motion of individuals , and a lot of other physical information , can be extracted from consecutive image sequences in different ways , including optical flow and block motion estimation . one of the well-known methods for tracking moving objects is the block matching method ; this way of estimating subject motion requires the specification of a comparison window which determines the scale of the estimate . in this work we present a real-time method for pedestrian recognition and tracking in sequences of high resolution images obtained by a stationary ( high definition ) camera located at different places in the haram mosque in mecca . the objective is to estimate pedestrian velocities as a function of the local density . the resulting data from tracking moving pedestrians based on video sequences are presented in the following section . through the evaluated system the spatio-temporal coordinates of each pedestrian during the tawaf ritual are established . the pilgrim velocities as a function of the local densities in the mataf area ( haram mosque , mecca ) are illustrated and precisely documented . tracking in such places , where the pedestrian density reaches 7 to 8 persons/m , is extremely challenging due to the small number of pixels on the target , the appearance ambiguity resulting from the dense packing , and severe inter-object occlusions . the tracking method outlined in this paper overcomes these challenges by using a virtual camera which is matched in position , rotation and focal length to the original camera in such a way that the features of the 3d model match the feature positions of the filmed mosque . in this model an individual feature has to be identified by eye , where contrast is a criterion . we know that the pilgrims walk on a plane , and after matching the camera we also have the height of the plane in 3d space from our 3d model . a point object is placed at the position of a selected pedestrian . during the animation we set multiple animation keys ( approximately every 25 to 50 frames , which equals 1 to 2 seconds ) for the position , such that the point and the pedestrian overlay nearly all the time . by combining all these variables with the available appearance information , we are able to track individual targets in high-density crowds . keywords : pedestrian dynamics , crowd management , crowd control , object tracking .
general - purpose quantum computers hold the promise of achieving quantum speed - ups in many problems of practical importance , unmatched by any known classical methods . while the prospect of such speed - ups is exciting , a growing realization is the extreme difficulty of achieving the levels of precision and control required for building truly scalable , fault - tolerant quantum hardware . as an intermediate step towards this goal ,several recent proposals have suggested the development of special - purpose quantum devices which achieve so - called quantum supremacy " in certain tasks . instead of solving general computational problems , these devices instead sample from probability distributions widely believed to be impossible to simulate efficiently using classical means .the recent explosion of proposals for such classically intractable sampling devices has begun to be matched by actual demonstrations of sampling in the laboratory , although so far still at small enough scales to allow for exact classical simulation .an important question regarding such proposals is how far , and in what manner , we can reduce the resources required to exhibit and certify a genuine quantum advantage in sampling .the boson sampling protocol shows that such quantum advantage can be achieved using simple linear optical devices and single - photon detectors . however , there are many challenges facing a realistic implementation of boson sampling , including the parallel generation of many single photons , the precise timing constraints on these photons , and the robust and accurate arrangement of the required beam splitters and phase shifters .an alternative proposal which circumvents this bottleneck is the family of instantaneous quantum polynomial - time ( iqp ) protocols , where sampling distributions arise from single - qubit measurements on the output of low - depth commuting quantum circuits . if a quantum device can prepare sampling distributions associated with any unitary within a circuit family , then that process would be classically intractable under reasonable conjectures from complexity theory .furthermore , the commuting nature of these quantum circuits means that they can potentially be engineered to run in constant time , maximally avoiding the threat of environmental noise and decoherence .however , a practical issue which arises here is the extreme difficulty of engineering the arbitrary long - range interactions needed for such a constant time implementation .while these long - range interactions can be simulated by bringing distant qubits together using gates before applying local entangling operations , this process would introduce a new bottleneck , the growing time required to shuttle qubits between local interaction regions . in the absence of quantum error correction , the growing influence of decoherencewould quickly degrade the quality of our sampling distributions , making this straightforward implementation likely untenable for practical demonstrations of quantum supremacy . 
in this paper ,we show how nonadaptive measurement - based quantum computation ( mqc ) can be used to sample from the distributions associated with iqp circuits , while at the same time verifying the classical intractability of this sampling process .our protocol uses a fixed resource state preparable by a constant - depth local circuit , which is then nonadaptively measured at each site in the pauli , , or bases .the setting of nonadaptive mqc allows us to replace the time complexity present in local iqp circuits ( with gates ) by a spatial overhead in our resource state , which results in a protocol with constant runtime and local interactions .the cost of this nonadaptivity is a fundamental randomness in the distributions prepared by our protocol , arising from random mqc byproduct operators .this leads each sample in our protocol to be obtained with high probability from a different sampling distribution every time .surprisingly , we show that this inherent randomness has no impact on the hardness of our protocol , which remains classically intractable under the same assumptions as in . what s more , we show that these random byproduct operators actually simplify our implementation relative to a direct circuit - based counterpart , revealing an inherent advantage of mqc for quantum sampling protocols .we further show that by simply changing the single - qubit pauli measurements used to obtain sampling statistics , we can rigorously verify the classical intractability of our sampling .our verification scheme is inspired by the ground state certification protocol of , but uses the special form of our iqp sampling distributions to replace the nonlocal operations required for general hamiltonian measurements with measurements of single - qubit pauli operators .this lets us switch between sampling and measurement by a simple change in single - qubit measurement bases , allowing our procedure to achieve a robust demonstration of quantum supremacy capable of efficiently detecting any errors which could potentially harm the correctness of our sampling distributions .our protocol is closely related to that of , as it constitutes a faithful translation of their circuit - based iqp sampling into the context of mqc .however , we show that this translation itself contains several surprises , ultimately revolving around the nontrivial interface of mqc byproduct operators with classically intractable sampling . at first glance , our protocol has much in common with , which also use nonadaptive mqc to perform classically intractable sampling and verification . upon further investigation however ,the different protocols are seen to utilize completely different mechanisms for demonstrating quantum supremacy . while using a more involved resource state than the ising - like states of , the design of our protocol allows for a convenient duality between sampling and verification , in which sampling and verification are both implemented using only single - qubit measurements on our output sampling state . in section [ sec : background ] , we review the relevant theory behind iqp sampling , verification , and mqc . in section [ sec : protocol ]we present our protocol for preparing , sampling from , and verifying different classically intractable sampling distributions using pauli measurements on a model resource state . in section [ sec : outlook ]we comment on the features unique to our protocol , and outline future directions for our work . 
a brief comparison of our proposal to other proposals within the rapidly growing field of classically intractable sampling can be found in appendix [ sec : comparison ] , with detailed proofs of the classical intractability and verification of our sampling protocol found in appendices [ sec : preparation ] , [ sec : sampling ] and [ sec : verification ] .in the iqp sampling protocols of , a sampling state is first prepared using an -qubit diagonal unitary circuit , and is then measured everywhere in the pauli basis to obtain a random outcome . in the above , denotes the eigenstate of , the single - qubit hadamard operator , a bit string of length , and the corresponding basis product state .if is chosen from an appropriate family of diagonal unitaries , then shows that the act of sampling from the distribution is impossible to perform in polynomial time using a classical computer , assuming the widely conjectured non - collapse of the polynomial hierarchy of complexity theory .more generally , we use the phrase classically intractable sampling to mean any sampling protocol which shares this property of being impossible to simulate classically ( given the non - collapse of the polynomial hierarchy ) , possibly in the presence of some allowable error and under the assumed truth of additional mathematical conjectures .we now choose the -qubit unitary gates above to be parameterized by -bit binary functions , where denotes the finite field of binary numbers .the functions set the eigenvalues of as where .when applied to , this results in the sampling state we can alternately describe as the unique state satisfying the ( nonlocal ) stabilizer relations for , where and the polynomial is equal to the difference because addition in is modulo 2 , it is easy to verify that is always independent of the value of .we now restrict our binary functions to be cubic polynomials , so that can be written in the form for some binary coefficients , , and .these are generated by linear , quadratic , and cubic monomials , whose associated diagonal unitary gates are , ( controlled- ) , and ( controlled - controlled- ) . in the following , any references to polynomials will be understood to refer specifically to binary polynomials .we will use , , and to denote homogeneous polynomials , for which the only nonzero coefficients are of the form , , or , respectively .similarly , and will denote polynomials for which all or all , respectively. it will be convenient in the following to interpret -bit vectors as linear polynomials of variables , which act as this is useful in giving the probability of different sampling outcomes , as the probability of obtaining any given when is measured in the product basis is refers here to the square of , the ( signed ) difference between the fraction of inputs yielding and . is known to be # p - hard to compute for arbitrary cubic polynomials , and we will see that this hardness underlies the classical intractability of our sampling protocol . it is shown in that estimating the quantity up to multiplicative error , so that for arbitrary cubic polynomials , is # p - hard , mirroring the difficulty of computing .this hardness leads to a similar finding as in , that exactly sampling from the cubic polynomial distributions defined in eq .( [ eq : distribution ] ) is classically intractable . 
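to make the preceding definitions concrete , the following brute-force sketch ( exponential in the number of qubits , so only for small examples , and not code from the paper ) computes the iqp output distribution of a given binary cubic polynomial : the state prepared by the diagonal unitary acting on the uniform superposition is measured in the x basis , and the probability of an outcome z is the square of the normalized signed sum of ( -1 )^{ f(x) + z.x } over all inputs x . the coefficient sets and the three-qubit example polynomial are illustrative assumptions .

```python
# brute-force iqp output distribution for a binary cubic polynomial f
# (exponential cost; for small illustrative examples only).
import itertools

def eval_poly(x, linear, quadratic, cubic):
    """evaluate a binary cubic polynomial (given by coefficient index sets) at bit tuple x."""
    v = sum(x[i] for i in linear)
    v += sum(x[i] * x[j] for (i, j) in quadratic)
    v += sum(x[i] * x[j] * x[k] for (i, j, k) in cubic)
    return v % 2

def iqp_distribution(n, linear=(), quadratic=(), cubic=()):
    outcomes = list(itertools.product((0, 1), repeat=n))
    probs = {}
    for z in outcomes:
        amp = 0.0
        for x in outcomes:
            phase = (eval_poly(x, linear, quadratic, cubic)
                     + sum(zi * xi for zi, xi in zip(z, x))) % 2
            amp += (-1) ** phase
        probs[z] = (amp / 2 ** n) ** 2   # squared normalized gap
    return probs

if __name__ == "__main__":
    # illustrative 3-qubit example: f(x) = x0 x1 x2 + x0 x1 (one ccz and one cz gate)
    dist = iqp_distribution(3, quadratic=[(0, 1)], cubic=[(0, 1, 2)])
    assert abs(sum(dist.values()) - 1.0) < 1e-12
    for z, p in sorted(dist.items()):
        print(z, round(p, 4))
```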
in particular , assuming the existence of a classical randomized algorithm which can efficiently sample from any of the distributions lets a technique called stockmeyer approximate counting be used to estimate the probabilities up to multiplicative error , and thus to solve arbitrary # p problems .while stockmeyer counting is an unphysical process which can not be implemented with realistic classical or quantum computers , it can be carried out at a finite level of the polynomial hierarchy , and the hardness of # p problems for this hierarchy then leads to its collapse .details of this process can be found in appendix [ sec : sampling ] . on the other hand ,we have seen that these distributions appear naturally as the output distributions of the iqp sampling protocol described above , which allows us to interpret a concrete implementation of this protocol as a provable demonstration of quantum supremacy " . while straightforward and conceptually compelling , a major limitation of the above result is the impossibility of verifying that any realistic quantum protocol is sampling from _ exactly _ the ideal distribution . in order to demonstrate quantum supremacy in a more realistic setting , an alternate proof is given in which shows the classical intractability of sampling from any distribution which is variationally close to .variationally close means here that the statistical distance between and is bounded by a constant , so that in a value of was shown to be sufficient for classically intractable sampling , which in appendix [ sec : sampling ] we show can be relaxed to ( although both values rely on the particular value of appearing in conjecture [ conj ] below ) .this result is appealing from a practical standpoint , as the quantity can be efficiently estimated in experiments involving quantum sampling distributions . on the other hand , the above average - case " sampling result relies upon one additional complexity theoretic conjecture : [ conj ] let be an arbitrary cubic polynomial of the form given in eq .( [ eq : cubic_poly ] ) .then it is # p - hard to efficiently calculate an estimate of for which , on at least of polynomials .intuitively , this conjecture states that even when our estimates are allowed to fail with some finite probability , corresponding to realistic errors in our sampling distributions , the problem of estimating on the remaining instances is still # p - hard .while this reliance on an additional unproven conjecture is nt desirable , an analogous conjecture is required for every known average - case classically intractable sampling result , and thus is nt any special demerit of .the techniques of can be used to efficiently verify the condition when arises from measurements on experimentally prepared quantum sampling states , which approximate our intended .given , we can perform measurements of the nonlocal hermitian stabilizers defined in eq .( [ eq : hami ] ) , which will always yield the outcome in the ideal case where . 
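as a small side illustration of the variational-distance criterion mentioned above ( a helper we add for exposition , not part of the paper ) , the statistical distance between an ideal distribution and an empirical histogram of samples can be computed as follows ; note that for large systems this quantity cannot be estimated efficiently from samples alone , which is one motivation for the stabilizer-based verification discussed next .

```python
# l1 (total variation, up to a factor 2) distance between an ideal distribution
# and an empirical histogram of samples; illustrative helper only.
from collections import Counter

def l1_distance(ideal, samples):
    """ideal: dict outcome -> probability; samples: list of observed outcomes."""
    counts = Counter(samples)
    total = len(samples)
    support = set(ideal) | set(counts)
    return sum(abs(ideal.get(z, 0.0) - counts[z] / total) for z in support)

if __name__ == "__main__":
    import random
    ideal = {"00": 0.5, "01": 0.25, "10": 0.25, "11": 0.0}   # toy distribution
    outcomes, weights = zip(*ideal.items())
    samples = random.choices(outcomes, weights=weights, k=10000)
    print(round(l1_distance(ideal, samples), 3))
```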
in more general cases ,a sufficiently accurate empirical estimate of these observables can be converted into a bound on the statistical distance between the distributions and .if the average is sufficiently close to so as to guarantee , then we can confidently conclude that our quantum protocol is performing classically intractable sampling .we will soon show that the nonlocal measurements of can actually be replaced with single - qubit and measurements , which allows this verification to be done entirely in the setting of mqc .mqc is a means of carrying out computation using only single - qubit measurements on a fixed many - body resource state . in this framework, the choice of measurements made on local regions of our resource state determines logical operations which are applied to encoded logical qubits , while simultaneously teleporting these qubits to adjacent unmeasured sites .the randomness of quantum measurement leads the outcomes of these measurements to determine a so - called byproduct operator , which acts as a random correction to the overall logical operation .for example , in figure [ fig : gadgets]a we show the standard protocol for teleporting one logical qubit within the mqc quantum wire known as the 1d cluster state . given two successive measurements with outcomes and , the resultant logical operation is , showing the intended logical unitary to be the identity and the byproduct operator to be a random pauli . in figure [ fig : gadgets]b we show a gadget for performing the two - qubit operation on logical qubits , for which the byproduct operator is a random two - qubit pauli operator . in both of these examples ,the collection of operators appearing as byproducts for arbitrary measurement outcomes form a closed group ( up to global phase ) of finite size , referred to as a byproduct group .an mqc protocol is said to be adaptive if the choice of measurement in some region of our resource state depends on the outcome of measurements made in another region .adaptation can be seen as a means of ensuring that the byproduct group associated with a large computation remains finite ( for example , contained within the -qubit pauli group ) , whereas the use of nonadaptive mqc with arbitrary single - qubit measurements will generally lead to a byproduct group of unbounded size . on the other hand ,nonadaptive mqc computations can always be implemented in constant time by performing all measurements simultaneously , a serious advantage in the absence of quantum error correction . within the usual scheme for universal mqc using resource states built from gates ,nonadaptive single - qubit pauli measurements are associated with byproduct groups formed from pauli operators , and implement logical operations contained within the clifford group .the clifford group is defined as those unitaries which preserve the pauli group under conjugation , so that is a product of pauli operators whenever is .the evolution of pauli eigenstates under the clifford group is known to be efficiently simulable using classical means , which means that non - clifford operations are necessary for demonstrating quantum supremacy . 
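the byproduct structure described above can be checked directly on a toy example . the following numpy sketch ( our own illustration , using standard mqc conventions rather than code from the paper ) prepares the length-2 cluster wire of figure [ fig : gadgets ] ( a ) , measures the first two qubits in the x basis , and confirms that the third qubit carries the input state up to the byproduct x^{s2} z^{s1} , with all four outcome pairs occurring with probability 1/4 .

```python
# toy check of 1d cluster-wire teleportation and its random pauli byproduct.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def cz(n, a, b):
    """controlled-z between qubits a and b (0-indexed, qubit 0 most significant)."""
    d = np.ones(2 ** n, dtype=complex)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[a] and bits[b]:
            d[idx] = -1.0
    return np.diag(d)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# |psi> |+> |+>, then cz on (0,1) and (1,2) builds the cluster wire
state = cz(3, 1, 2) @ cz(3, 0, 1) @ np.kron(np.kron(psi, plus), plus)

for s1, v1 in enumerate((plus, minus)):          # outcome of x measurement on qubit 0
    for s2, v2 in enumerate((plus, minus)):      # outcome of x measurement on qubit 1
        proj = np.kron(np.kron(np.outer(v1, v1.conj()), np.outer(v2, v2.conj())), I)
        post = proj @ state
        prob = np.vdot(post, post).real
        out = np.einsum("i,j,ijk->k", v1.conj(), v2.conj(), post.reshape(2, 2, 2))
        out /= np.linalg.norm(out)
        expected = np.linalg.matrix_power(X, s2) @ np.linalg.matrix_power(Z, s1) @ psi
        overlap = abs(np.vdot(expected, out))
        print(f"s1={s1} s2={s2}  p={prob:.2f}  |<expected|out>|={overlap:.6f}")
```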
[ caption of fig : gadgets : ... per measurement ( not shown ) ; mathematically , this leads our measurement outcomes to occur uniformly randomly . ( a ) 1d cluster state wire of length 2 , where solid lines indicate formation unitaries . measuring on two sites implements the identity , with a uniformly random pauli byproduct group . ( b ) planar mqc gadget for implementing nonplanar wire crossings . measuring on 6 sites implements , with a byproduct group of uniformly random two-qubit pauli operators . ( c ) non-clifford gadget for conditional , where blue triangles indicate gates used to form the gadget . measuring on 3 non-logical control sites ( red ) gives on sites , , and , whereas measuring on these sites instead gives the identity . in both cases the teleportation is trivial ( output and input sites coincide ) , while the byproduct group is a product of uniformly random s between and , and between and . ]
in figure [ fig : gadgets]c , we give an example of an mqc gadget which implements a non-clifford gate when nonadaptive pauli measurements are applied . this gadget , which will be utilized in our classically intractable sampling protocol below , is itself formed from non-clifford gates , and has a byproduct group containing non-pauli gates . a similar gadget was shown in to enable universal mqc using only pauli measurements , but with adaptation of measurement bases so as to avoid a byproduct group of unbounded size . in our mqc sampling protocol below , we will show that restricting our logical operations to those generating sub-universal quantum computation will allow us to avoid this use of adaptation , while still maintaining a byproduct group of finite size . in fact , we will find that this non-pauli byproduct group actually leads to a simplification in our protocol relative to circuit-based counterparts . our mqc implementation of the classically intractable sampling protocol of uses nonadaptive pauli measurements to prepare , sample from , and verify the -qubit sampling states described above , for arbitrary cubic polynomials . our protocol uses a 2d resource state which is capable of preparing any sampling state using only single-qubit pauli measurements . is constructed from the teleportation , , and gadgets described in section [ sec : mqc ] , which are configured to implement any of the iqp circuits associated with arbitrary homogeneous cubic polynomials . the choice of is determined by the choice of pauli measurement basis applied to each gadget in . by virtue of the byproducts arising from our nonadaptive mqc implementation , our output sampling states end up being random , where is a sum of the intended , along with random quadratic and linear polynomials and . owing to this randomness in , we are unable to deterministically prepare any fixed sampling state .
despite this fundamental indeterminism, we will show how the act of sampling from randomly prepared with measurements at the final stage of our protocol remains classically intractable , even in the presence of realistic noise which leads our output sampling distributions to be some imperfect .we state the classical intractability of our protocol , and the precise conditions which guarantee this , as theorem [ thm : one ] .[ thm : one ] assume the validity of conjecture [ conj ] and the non - collapse of the polynomial hierarchy .if the distributions arising from our mqc sampling protocol are close on average to the distributions defined in eq .( [ eq : distribution ] ) , meaning that the average norm over all meets the experimental threshold , then our protocol is impossible to efficiently simulate using a classical computer , i.e. is classically intractable .our protocol for classically intractable sampling is divided into two stages : preparation of the random sampling state and sampling / verification measurements on ( see figure [ fig : protocol ] ) . in the preparation stage , we use single - qubit measurements of pauli , , and on with outcomes to prepare the -qubit state associated with a -dependent polynomial .these measurements are chosen to implement the unitary by means of a depth quantum circuit built from local and gates .the gates in this ideal circuit are applied conditionally as , depending on the coefficients of , with teleportation and gates used before each application to move qubits , , and into the same region .the application of these conditional s is structured within three nested levels of iteration , which together apply all three - body terms in the lexicographic order of the triples , where .loop i , the lowest level of iteration , involves fixing qubits and in a designated interaction region , then successively cycling the remaining qubits through this region . is applied in turn to each triple , until all triples with fixed and have been processed in this manner .loop ii , the next level of iteration , involves successively replacing qubit by qubit , then repeating loop i for all qubits until all triples with fixed have been processed .finally , loop iii involves successively replacing qubit by qubit , in the process shifting the location of the interaction region , and repeating loop ii for all qubits until has been applied to all triples of qubits .the resulting unitary operation is clearly .while the simple circuit described above is only capable of producing sampling states associated with homogeneous cubic , our mqc implementation utilizes random byproduct operators to implement the remaining quadratic and linear terms required for the preparation of arbitrary .this reveals a simplification within nonadaptive mqc compared to a direct circuit - based counterpart , which would require additional and gates to implement for arbitrary .each of the conditional operations is implemented using the gadget shown in figure [ fig : gadgets]c , which is measured in if and otherwise . for either choice of measurement, the non - clifford nature of these gadgets leads the resultant byproduct operators to consist of non - pauli gates , which generate random quadratic terms in the output . 
because our logic gates and byproduct operators are made up of and the diagonal , , and gates , which together form a closed ( non - universal ) gate set under multiplication , the byproduct group associated with our computationwill always remain finite .the gadgets used in our protocol are embedded in regular intervals in , and are then connected together using 1d cluster wires and gadgets , which simulate the movement of qubits utilized in our ideal quantum circuit described above .these cluster wires and gadgets are always measured in , which leads to a product of random pauli and byproduct operators .the byproducts eventually end up generating random linear terms in the output state , while the byproducts can be commuted backwards in our circuit , to eventually be annihilated on the initial which our logical quantum circuit is applied to .this commutation of byproduct operators induces conditional and byproduct operators arising from prior gadgets , which results in additional randomness in the overall byproduct group . despite this seeming complexity in the distribution of byproduct operators , we prove in appendix [ sec : preparation ]that the random outcomes of preparation measurements on lead the random quadratic and linear terms in the polynomial associated with to be uniformly random , simplifying our analysis . in the second stage of our protocolwe apply a final series of single - qubit pauli measurements to our output state which , while ideally equal to , will realistically be some mixed state .the choice of single - qubit measurement bases depends on whether we are implementing sampling or verification , which can be chosen randomly with probability . during sampling, we simply measure all qubits in the basis to generate a sample from the distribution , exactly as described in section [ sec : intractable ] .although the randomness in the associated with means that we will almost certainly obtain each sample from a different distribution , our mqc sampling protocol remains classically intractable nonetheless . to prove this classical intractability, we can treat the overall process of preparing a random and then sampling an outcome as itself a sampling process with probability .given this description , and our knowledge of the complete randomness of the byproduct contributions , stockmeyer approximate counting can then be used to estimate as a conditional probability which is directly proportional to .this suffices to proves theorem [ thm : one ] using the same arguments as in other classically intractable sampling proposals , the details of which are given in appendix [ sec : sampling ] .if we choose to perform verification instead of sampling , then we measure all qubits in the basis , except for a random qubit which is measured in .the outcome of this measurement is then fed into a parity function , where is defined in eq .( [ eq : partial ] ) .this process results in an output value of 0 or 1 , which we show in appendix [ sec : verification ] gives the same information as a measurement of the nonlocal stabilizer described in eq .( [ eq : hami ] ) , with outcome .because of our ability to characterize the closeness of to using measurements of , this means that we can interpret as a successful verification measurement , and as a deviation of from our intended . 
by obtaining many samples of for random , , and , the resultant estimate of lets us guarantee the classical intractability of our mqc sampling protocol to any desired statistical significance using only rounds of verification measurements , as stated in theorem [ thm : two ] .[ thm : two ] suppose that the empirical average of our parity function after verification measurements satisfies , for the appearing in theorem [ thm : one ]. then we can conclude with probability that our sampling distributions satisfy the assumptions of theorem [ thm : one ] , and thus generate classically intractable sampling .we give a detailed proof of theorem [ thm : two ] in appendix [ sec : verification ] .we should mention that another potential means of verifying the classical intractability of our sampling protocol would have been to directly measure the local stabilizers of our resource state , analogous to the technique used in .the idea behind this verification scheme is that , if we guarantee our mqc resource state to be the ideal , then performing our prescribed pauli measurements should always generate the ideal sampling states .unfortunately , this resource state verification scheme does nt detect errors occurring during preparation measurements , so that even when given an ideal mqc resource state , measurement imperfections during state preparation will still lead to logical errors which harm our output sampling state . in order for this verification scheme to rigorously guarantee the classical intractability of sampling in our setting , the single - qubit error rates for measurement must be less than , whereas our verification technique only needs errors rates of .since this latter rate is the maximum allowed for any kind of sampling to maintain a constant variational error , this shows our verification scheme to be optimal with regards to its soundness under measurement imperfections .we have demonstrated the use of mqc to perform classically intractable sampling and verification in a unified manner , with identical resource requirements for each task .this shows that verifying the hardness of a quantum sampling protocol does nt need to be any harder than the actual sampling , and in certain architectures comes essentially for free .this contrasts sharply with many existing quantum supremacy proposals , for which verifying the non - classical nature of sampling is significantly harder than the sampling itself , likely requiring exponential computational resources to ensure correctness . by using nonadaptive mqc to drive our protocol ,we have furthermore allowed both sampling and verification to be carried out in constant time , which minimizes the effect of environmental decoherence , and potentially allows us to avoid the use of quantum error correction . as an outlook , we expect that a hybrid mqc sampling platform combining the simple physical implementation of or with the convenient theoretical analysis and flexibility available here would represent an extremely appealing framework for implementing classically intractable sampling .in particular , a sampling protocol using nonadaptive mqc with non - clifford gadgets embedded in a 2d brickwork - type lattice could potentially demonstrate quantum supremacy in constant time using only qubits , and with entirely local interactions .such a protocol would implement the sparse " iqp circuits appearing in , which require only two - body interactions . 
while this can be implemented in our framework using a 2d lattice of qubits which generalizes our , the possibility of reducing resource requirements further , potentially to qubits , would require using local complementation operations on graph states .as these operations can quickly generate long - range entanglement using only local basis measurements , we consider such capabilities to represent a unique feature of mqc which are well - suited to reproducing the long - range , low - depth quantum circuits often utilized for quantum sampling .this work was supported in part by national science foundation grants phy-1314955 , phy-1521016 , and phy-1620651 . 99 p.w .shor , _ polynomial - time algorithms for prime factorization and discrete logarithms on a quantum computer _ , siam j. sci .* 26 * , 1484 ( 1997 ) .grover , _ quantum mechanics helps in searching for a needle in a haystack _ , phys .lett . * 79 * , 325 ( 1997 ) .d. deutsch and r. jozsa , _ rapid solution of problems by quantum computation _ ,london a * 439 * , 553 ( 1992 ) .terhal , d.p .divincenzo , _ adaptive quantum computation , constant depth quantum circuits and arthur - merlin games _ , quant .* 4 * , 134145 ( 2004 ) .bremner , r. jozsa , d.j .shepherd , _ classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy _ , proc .a * 467 * , 459472 ( 2011 ) .s. aaronson and a. arkhipov , _ the computational complexity of linear optics _ , theory of computing , 9(4):143252 , ( 2013 ) .t. morimae , k. fujii , and j.f .fitzsimons , _ on the hardness of classically simulating the one clean qubit model _ , phys .lett . * 112 * , 130502 ( 2014 ) .bremner , a. montanaro , and d.j .shepherd , _ average - case complexity versus approximate simulation of commuting quantum computations _ , phys .* 117 * , 080501 ( 2016 ) .s. rahimi - keshari , t.c .ralph , c.m .caves , _ sufficient conditions for efficient classical simulation of quantum optics _ , phys .x * 6 * , 021039 ( 2016 ) .e. farhi and a.w .harrow , _ quantum supremacy through the quantum approximate optimization algorithm _ , arxiv:1602.07674 , ( 2016 ) .rohde , d.w .berry , k.r .motes , j.p .dowling , _ a quantum optics argument for the # p - hardness of a class of multidimensional integrals _ , arxiv:1607.04960 , ( 2016 ) ._ , _ characterizing quantum supremacy in near - term devices _ , arxiv:1608.00263 , ( 2016 ) .x. gao , s .-wang , and l .- m .duan , _ quantum supremacy for simulating a translation - invariant ising spin model _ , phys .lett . * 118 * , 040502 ( 2017 ) .k. fujii , _ noise threshold of quantum supremacy _ , arxiv:1610.03632 , ( 2016 ) .m. bremner , a. montanaro , and d. shepherd , _ achieving quantum supremacy with sparse and noisy commuting quantum computations _ , arxiv:1610.01808 , ( 2016 ) .b. fefferman , m. foss - feig , and a.v .gorshkov , _ exact sampling hardness of ising spin models _ , arxiv:1701.03167 , ( 2017 ) .f. shahandeh , a.p .lund , t.c .ralph , _ quantum correlations in nonlocal bosonsampling _ , arxiv:1702.02156 , ( 2017 ) .lund , m.j .bremner , t.c .ralph , _ quantum sampling problems , bosonsampling and quantum supremacy _ , arxiv:1702.03061 , ( 2017 ) .j. bermejo - vega , d. hangleiter , m. schwarz , r. raussendorf , and j. eisert , _architectures for quantum simulation showing quantum supremacy _ , arxiv:1703.00466 , ( 2017 ) .a. deshpande , b. fefferman , m. foss - feig , and a.v .gorshkov , _ complexity of sampling as an order parameter _ , arxiv:1703.05332 , ( 2017 ) .t. kapourniotis and a. 
datta , _ nonadaptive fault - tolerant verification of quantum supremacy with noise _ , arxiv:1703.09568 , ( 2017 ) ._ , _ boson sampling on a photonic chip _ ,science * 339 * , 798801 ( 2013 ) .m. tillmann , b. daki , r. heilmann , s. nolte , a. szameit , and p. walther , _ experimental boson sampling _ , nature photon .* 7 * , 540544 ( 2013 ) .crespi , a. _ et al ._ , _ integrated multimode interferometers with arbitrary designs for photonic boson sampling _ , nature photon .* 7 * , 545549 ( 2013 ) .broome , a. fedrizzi , s. rahimi - keshari , j. dove , s. aaronson , t. ralph , and a.g .white , _ photonic boson sampling in a tunable circuit _, science * 339 * , 794798 ( 2013 ) ._ , _ multi - photon boson - sampling machines beating early classical computers _ ,arxiv:1612.06956 , ( 2016 ) .r. raussendorf and h.j .briegel , _ a one - way quantum computer _ ,lett . * 86 * , 5188 ( 2001 ) .r. jozsa , _ an introduction to measurement based quantum computation _ , arxiv : quant - ph/0508124 , ( 2005 ) .briegel , d.e .browne , w. dr , r. raussendorf , and m. van den nest , _ measurement - based quantum computation _ , nature physics * 5 * , 1926 ( 2009 ) .d. hangleiter , m. kliesch , m. schwarz , and j. eisert , _ direct certification of a class of quantum simulations _ , arxiv:1602.00703 , ( 2016 ) .meyer and l.j .stockmeyer , _ the equivalence problem for regular expressions with squaring requires exponential space _ , in proceedings of the 13th ieee symposium on switching and automata theory , pp .125129 ( 1972 ) .stockmeyer , _ the polynomial - time hierarchy _ , theor .sci . * 3 * , 122 ( 1977 ) .a. ehrenfeucht and m. karpinski , _ the computational complexity of ( xor , and)-counting problems _ , technical report 8543-cs , ( 1990 ) .stockmeyer , _ on approximation algorithms for # p _ , siam j. comput . *14 * , 849861 ( 1985 ) .the proof actually allows for the existence of some multiplicative error , in the form of distributions which satisfy for all outcomes , with a fixed polynomial .while sampling from such a distribution is still classically intractable , this is unsatisfactory from a practical standpoint .for example , if any outcome satisfies , then we must have the probability be exactly 0 as well .this is clearly impossible to verify for any experimental distribution , leading exact classically intractable sampling results to have a more strained relationship with experimental realities than their average - case counterparts .d. gottesman , _ the heisenberg representation of quantum computers _, talk at international conference on group theoretic methods in physics ( 1998 ) , arxiv : quant - ph/9807006 .j. miller and a. miyake , _ hierarchy of universal entanglement in 2d measurement - based quantum computation _ , npj quantum information * 2 * , 16036 ( 2016 ) .d. gottesman and i. l. chuang , _ demonstrating the viability of universal quantum computation using teleportation and single - qubit operations _ , nature * 402 * , 390 ( 1999 ) .s. aaronson and l .- j .chen , _ complexity - theoretic foundations of quantum supremacy experiments _ ,arxiv:1612.05903 , ( 2016 ) .m. rossi , m. huber , d. bru , and c. macchiavello , _ quantum hypergraph states _ , new j. phys . * 15 * , 113022 ( 2013 ) .o. ghne , m. cuquet , f.e.s .steinhoff , t. moroder , m. rossi , d. bru , b. kraus , and c. macchiavello , _ entanglement and nonclassical properties of hypergraph states _ , j. phys .a * 47 * , 335303 ( 2014 ) . 
for comparison , the familiar complexity classes p and npare respectively contained in the zero and first levels of the polynomial hierarchy ( ph ) .while the randomized complexity class bpp has only been proven to lie in the second level of the ph , a proof of the widely conjectured p = bpp would place it in the zero ( lowest ) level as well . as a corollary , provingp = bpp would allow stockmeyer counting to be implemented in the second level of the ph , causing the hypothetical collapse invoked in classically intractable sampling results to occur at the second level of the ph , rather than the third .s. toda , _ pp is as hard as the polynomial - time hierarchy _, siam j. comput . *20 * , 865877 ( 1991 ) . on the other hand ,the effect of the measurements used in our verification scheme on the measured state is different from that of direct measurements of .for example , performing a genuine quantum nondemolition measurement of on the sampling state would leave it unchanged , whereas our scheme always collapses it to a tensor product of single - qubit and eigenstates .since we only care about measurement statistics and not the post - measurement state , this has no impact on our protocol .t. morimae , y. takeuchi , and m. hayashi , _ verified measurement - based quantum computing with hypergraph states _ , arxiv:1701.05688 , ( 2017 ) .we now discuss the relationship of our work to previous proposals for classically intractable sampling with qubits , the class of boson sampling protocols having a largely different flavor with regards to theoretical underpinnings and experimental implementations .as mentioned before , our work is most closely related to that of , as it implements their circuit - based iqp sampling in the context of mqc .we have seen that this translation has several practical advantages , mainly that it allows us to use constant depth quantum circuits generated by local interactions to perform classically intractable sampling in constant time .this translation also reveals the role of mqc byproduct operators in simplifying our protocol , with an associated randomness which ends up having no impact on the classical intractability of sampling .furthermore , the convenient verification scheme utilized in our protocol can be applied equally well in any classically intractable sampling implementation using the iqp sampling states associated with conjecture [ conj ] , revealing an inherent practical advantage of sampling from this class of states .this advantage more generally applies to any protocol which samples from output distributions defined by so - called hypergraph states .although our work does nt make use of the alternate conjecture 2 of , concerning the average - case hardness of estimating fully - connected ising partition functions , our techniques can be easily generalized to define a similar mqc sampling protocol which relies upon conjecture 2 . in this alternate protocol, our gadget would be replaced by gadgets for the non - clifford and gates , and our byproduct group would contain not only , but also gates . 
in terms of the clifford hierarchy of unitary operations ,the pattern which emerges here is that using gadgets which implement operations at the third level of the clifford hierarchy leads to a random byproduct group formed from clifford gates at the second level of the clifford hierarchy .just as with our protocol , this would eliminate the need to apply any clifford gates by hand " , reducing the physical resources needed for sampling .our work also has many similarities to the mqc sampling protocol of , which similarly runs in constant time using a fixed brickwork " resource state preparable by a constant depth quantum circuit , and also allows for verification . in our protocol, the average - case hardness of sampling relies on conjecture [ conj ] , while the average - case hardness in relies upon a conjecture regarding the estimation of output probabilities of random quantum circuits , argued in to be a stronger assumption . on the other hand, this latter conjecture is very similar to that used in . on a different note ,the simple byproduct group appearing in our protocol , which is necessary for our preparation measurements to always implement iqp circuits , allows us to carry out verification using only single - qubit measurements on our output sampling states .in contrast , the more general unitaries implemented in would likely preclude any simple verification schemes based on the ideas of .here we study the preparation stage of our mqc protocol , and show that the polynomials associated with our random output states contains uniformly random quadratic and linear coefficients , so that every and is an independent binary random variable with equal probability .we show this by first characterizing the distribution of preparation outcomes , where , then using this to characterize the distribution of byproduct polynomials " arising in our protocol . we show that is uniformly random , a fact which holds true in the presence of arbitrary noise with spatial correlations of a bounded distance .this result will be used in our proofs of sampling and verification in appendices [ sec : sampling ] and [ sec : verification ] .we calculate using the born rule , which in our ideal setting says that given -dependent preparation measurements on , the probability of obtaining an outcome ( where denotes the appropriate single - qubit eigenbases ) is the expression here denotes not a scalar , but a partial inner product on , consisting of an -qubit state which is nt measured until the sampling and verification stage of our protocol .consequently , eq . 
( [ eq : t_prob ] ) says that is equal to the squared norm of this state . although we would expect this output state to be the sampling state , a careful calculation of the inner products arising in our protocol reveals an additional scalar factor per preparation measurement , as remarked in figure [ fig : gadgets ] . this shows that , where , which then proves the preparation measurement outcomes to be distributed as . we note that this independence of measurement outcomes is a generic feature of mqc state preparation protocols , as the implementation of norm-preserving unitary operations in every preparation measurement necessarily forces eq . ( [ eq : t_prob ] ) to take a constant value for all , corresponding to every preparation outcome being uncorrelated and uniformly random . we now use the uniform randomness of preparation measurement outcomes to prove the uniform randomness of byproduct polynomials , which depend on as . these global byproducts arise from the local byproduct operators associated with random outcomes in each of the mqc gadgets shown in figure [ fig : gadgets ] , which are then commuted through our computation to contribute linear and quadratic terms to . each quadratic and linear coefficient in can thus be expressed as a sum ( mod 2 ) of many different measurement outcomes , and the complete randomness of each measurement outcome means that every byproduct coefficient which contains even a single random contribution is itself completely random . every quadratic coefficient contains contributions from at least one random outcome , with the one exception of . because our gadgets only apply byproduct operators between nearest-neighbour logical qubits , and since qubits 1 and are never adjacent to each other in the circuit diagram of figure [ fig : protocol ] , it remains possible that will always be 0 . a simple fix is to vary the ordering among each triple of qubits entering a non-clifford gadget using gadgets , so that all qubits are adjacent to all other qubits equally often . in this case , every quadratic coefficient in will receive random contributions from outcomes arising in gadgets , and every linear coefficient will receive contributions from outcomes arising in 1d cluster wires and gadgets . this proves that the distribution of byproduct operators is uniformly random as , where . the above analysis , which counts the number of measurement outcomes contributing to each coefficient of , is unnecessary in an idealized setting , but is useful in the presence of realistic noise and experimental imperfections . we can generally characterize this behaviour as a trace-preserving quantum operation which maps our mqc resource state to some imperfect state . our measurement statistics in this setting are again set by the born rule , but now as
\begin{align}
\mathrm{tr}\left[\,\mathcal{E}\big(|\psi_\mathrm{prep}\rangle\langle\psi_\mathrm{prep}|\big)\,|\mathbf{t}_{\mathbf{a}}\rangle\langle\mathbf{t}_{\mathbf{a}}|\,\right]
= \mathrm{tr}\left[\,|\psi_\mathrm{prep}\rangle\langle\psi_\mathrm{prep}|\,\mathcal{E}^\dagger\big(|\mathbf{t}_{\mathbf{a}}\rangle\langle\mathbf{t}_{\mathbf{a}}|\big)\,\right],
\end{align}
where represents the quantum operation which is adjoint to .
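the counting argument above can be illustrated with a toy computation ( ours , not from the paper ) : a mod-2 sum of measurement outcomes is exactly uniform as soon as at least one contribution entering the sum is an independent fair coin , regardless of how biased the remaining contributions are . the bias value and number of contributions below are arbitrary illustrative choices .

```python
# toy check of the byproduct randomization argument: xor-ing one fair coin into
# an otherwise biased mod-2 sum yields an exactly uniform coefficient.
import random

def byproduct_coefficient(n_biased, p_biased=0.9, rng=random):
    biased = [1 if rng.random() < p_biased else 0 for _ in range(n_biased)]
    fair = rng.randint(0, 1)          # one genuinely uniform measurement outcome
    return (sum(biased) + fair) % 2

if __name__ == "__main__":
    trials = 100000
    ones = sum(byproduct_coefficient(5) for _ in range(trials))
    print(f"empirical prob of coefficient = 1: {ones / trials:.3f}")  # close to 0.5
```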
while may modify our measurement projectors so as to displace or correlate the probabilities of local outcomes , we noted above that the coefficients of byproduct polynomials are determined by at least different such measurement outcomes , any one of which is capable of completely randomizing the probability of that coefficient .consequently , in order for noise to alter the distribution of byproduct operators , the operator must induce correlations between at least different measurement outcomes in our system . in the presence of any noise with a finite correlation length , this is clearly impossible , which proves the uniform randomness of byproduct operators to be a robust property of our mqc protocol .here we give a detailed proof of the classical intractability of our mqc sampling protocol under constant variational noise in the output sampling distributions .we first discuss the general idea behind average - case classically intractable sampling protocols , so as to make clear what precisely needs to be demonstrated in our proof .we then describe the use of classical post - processing on our measurement records to implement coarse - graining " in the description of our protocol .this coarse - graining lets us simplify the analysis of failure probabilities required in our proof , and eventually lets us prove theorem [ thm : one ] , with its associated variational error threshold of .we note a certain duality between the proof given here and the proof of theorem [ thm : two ] given in appendix [ sec : verification ] , with the former using a guaranteed bound on as a starting point and the latter deriving such a bound on as an end result .any proof of classical intractability of quantum sampling requires adopting somewhat of a dual viewpoint . on the one hand, we recognize that our sampling procedure is an intrinsically quantum task , but at the same time assume that the sampling distributions arising from this quantum process can be exactly replicated using a probabilistic classical algorithm .this assumption , analogous to the assumption of a hidden variable model describing our quantum process , is made in order to derive a ( widely conjectured ) contradiction , the collapse of the polynomial hierarchy of complexity theory .even though the probabilities of individual sampling outcomes are exponentially small and would require exponential time to estimate empirically , if they arise from a classical sampling process , then the technique of stockmeyer approximate counting can be used to estimate these probabilities up to multiplicative error . 
in particular ,stockmeyer counting can be used to output an estimate which is related to our probability of interest by , for any desired polynomial .the use of an average - case complexity conjecture , like conjecture [ conj ] in our paper , is then required to connect the ability to estimate such probabilities in the presence of noise to the ability to solve # p - hard problems , from which a collapse of the polynomial hierarchy follows .stockmeyer counting is an unphysical process which can not be carried out efficiently using classical or quantum devices , but can be implemented with a hypothetical alternating turing machine " capable of efficiently solving problems in the third level of the polynomial hierarchy .furthermore , stockmeyer counting involves manipulations on a register of binary random variables underlying our random outcomes , and consequently can only estimate probabilities arising as outcomes of classical randomized algorithms .nonetheless , if we assume the existence of an efficient classical algorithm for exactly sampling from the distribution , stockmeyer sampling would then permit a device existing in the third level of the polynomial hierarchy to estimate any up to multiplicative error , and thus solve any problem in # p. because solving arbitrary problems in # p is known by toda s theorem to allow one to efficiently solve all problems in the hierarchy , assuming the existence of this efficient classical algorithm for sampling from distributions would necessarily collapse the polynomial hierarchy to its third level , a contradiction .hence , this proves the task of sampling from arbitrary to be classically intractable . a necessary ingredient in any _ average - case _ classically intractable sampling result is a mathematical problem whose estimation remains # p - hard even when our estimates have some finite probability of failing to be multiplicatively close to their actual value .in our setting , this problem is furnished by conjecture [ conj ] , which says that estimating up to multiplicative error is # p - hard , even when a fraction of our estimates fail to lie within this multiplicative bound . evidence in support of conjecture [ conj ]is given in .this failure probability ends up determining the allowed deviation of our quantum sampling distributions from their ideal .if this deviation is sufficiently small , as measured by the variational distance between and , the assumed computational hardness of estimating then guarantees that our quantum sampling task will be classically intractable .consequently , our main goal in this proof is to analyze the deviations in our distributions arising from deviations in our experimental states from their ideal , and to find sufficient conditions to guarantee that the failure probability in estimating using stockmeyer sampling on is below our threshold .we now introduce the idea of coarse - grained sampling distributions , which indeed we have already implicitly made use of in the description of our sampling protocol . in section [ sec :protocol ] , we described different preparation outcomes as giving rise to different ideal sampling states via the correspondence .this means that whenever different preparation outcomes generate the same byproduct polynomials , the resultant sampling states will be identical . 
in reality though , it is entirely possible that these preparation outcomes will generate different sampling states , leading our description of a single sampling state to represent a coarse - graining over equivalent preparation outcomes .in particular , if denotes the probability of obtaining a preparation outcome arising from our -dependent pauli measurements on , then we find to be given by represents a normalization factor which gives the total probability on input of obtaining any outcome associated with the byproduct polynomial .while the above coarse - graining might appear trivial , we will now show how this can be used to effectively mix the inequivalent states and when and differ only in their linear coefficients .if we describe our overall sampling process at this stage as first preparing a random state with , which is then sampled to obtain an basis outcome of , then we would record this in an experiment as yielding the outcome in some outcome space . from the layout of our sampling protocol ,the probability of this outcome is clearly . because of the degeneracy for all outcomes , we say that any such outcome samples from the polynomial .these exponentially many outcomes are precisely the ones which can be used to obtain an estimate of via stockmeyer counting , and we will choose our coarse - graining to eliminate this degeneracy , so that each is determined by a unique sampling outcome from a unique output sampling state .we note that this coarse - graining was used implicitly in , although interpreted there as an obfuscation " of output probabilities . in appendix[ sec : preparation ] we showed that the distribution of byproduct polynomials is uniformly random as , where . given this robust characterization of , we will use to indicate the conditional probability of obtaining any outcome which samples from , given that the quadratic portion of our byproduct polynomial is .this leads to be we use to indicate the basis outcome string corresponding to the linear terms of . in the above, we have also defined to be the state where indicates a product of operators . in the idealsetting where each , the result of applying to is to simply remove the linear components of , leaving the state . in this idealizedsetting , the result of averaging over all and applying the correction in each case is to leave the state , which contains only cubic and quadratic terms . while we ca nt literally implement these unitary corrections within the setting of mqc, we can simulate their action through classical postprocessing on our measurement outcomes .in particular , whenever we obtain an outcome of in our sampling experiment , we instead record this as a coarse - grained outcome lying in a simpler outcome space .this is equivalent to recording only the polynomial sampled by our experiment , and forgetting the relative contributions to from mqc byproduct operators and from sampling outcomes .the equivalence of this coarse - graining in our measurement records with the action of active unitary corrections arises from the equality used to derive eq .( [ eq : prob_1 ] ) . given this coarse - grained description of our experiment, we would like to bound the failure probability of obtaining an estimate which differs from the true by more than a multiplicative factor of . 
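the classical postprocessing just described amounts to a few lines of bookkeeping; the sketch below is schematic (the record format and names are ours), and its only point is that the recorded label depends on the linear byproduct string and the measured outcome only through their bitwise xor.

```python
import numpy as np

def coarse_grain(records):
    """Collapse raw records (s, v) into counts over the coarse-grained label v XOR s.

    s : linear byproduct coefficients inferred from the preparation outcomes
    v : measured sampling outcome string
    Keeping only v XOR s mimics the (physically unimplemented) unitary
    correction by postprocessing, so each label samples a fixed polynomial.
    """
    counts = {}
    for s, v in records:
        label = tuple(np.bitwise_xor(s, v))
        counts[label] = counts.get(label, 0) + 1
    return counts

rng = np.random.default_rng(1)
records = [(rng.integers(0, 2, 4), rng.integers(0, 2, 4)) for _ in range(1000)]
print(list(coarse_grain(records).items())[:3])
```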
by requiring this probability to be less than the appearing in conjecture [ conj ] , we will arrive at concrete conditions on our coarse - grained output states in order for our mqc protocol to implement classically intractable sampling .while the stockmeyer counting used to obtain from our sampling probabilities technically introduces its own multiplicative error in this estimate , because this error can be reduced in our ( hypothetical ) stockmeyer counting algorithm to any inverse polynomial while still retaining a polynomial runtime , we will ignore this error in the following and simply set .we first use markov s inequality to bound the probability of our estimate failing to lie within an arbitrary constant distance of , , over arbitrary polynomials .we will later convert this into a failure probability for obtaining an estimate of outside of our allowed multiplicative error . since the approximate and exact values of can both be interpreted as probabilities in different distributions , and , we find that the distance between these values , averaged over with fixed , is proportional to the variational distance between these distributions as defining to be the variational distance between these distributions , markov s inequality then tells us that for any and for with a fixed , having this bound in hand , we now give an anticoncentration bound on the probability that , which lets us convert the above bound into a statement about the failure probability .we utilize a particular form of cantelli s inequality stating that for any non - negative random variable and constant in , this agrees with the more well - known paley - zygmund inequality at , but otherwise gives a more stringent upper bound .setting , , and using the result from , this lets us restrict the probability of being less than as we now define to be the average variational distance between distributions and , averaged over all . 
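to make the role of the variational distance concrete, the toy computation below (ours, with made-up distributions) compares the fraction of uniformly chosen outcomes whose "classical" probability misses the exact one by more than an additive threshold against the markov-type bound expressed through the total variational distance:

```python
import numpy as np

rng = np.random.default_rng(2)
n_out = 2 ** 12                                        # number of outcomes

p = rng.dirichlet(np.ones(n_out))                      # stand-in exact distribution
q = 0.95 * p + 0.05 * rng.dirichlet(np.ones(n_out))    # nearby classical distribution

eta = 0.5 * np.abs(p - q).sum()                        # variational distance
beta = 3.0                                             # threshold in units of 1/n_out

empirical = np.mean(np.abs(q - p) > beta / n_out)
markov = 2.0 * eta / beta     # Markov: mean of |q-p| over uniform outcomes is 2*eta/n_out
print(f"eta = {eta:.4f}; P(|q-p| > beta/n_out) = {empirical:.4f} <= {markov:.4f}")
```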
combining eq .( [ eq : markov ] ) with the average of eq .( [ eq : pz ] ) over all , this results in a bound on the multiplicative failure probability of which holds for every .we now require the failure probability to be at most , in line with conjecture [ conj ] , and numerically optimize over to find the largest allowed value of for which this can be achieved .this yields a maximum of , which has a rational lower bound of .this completes our proof of theorem [ thm : one ] .here we prove that the verification scheme occurring in the last stage of our mqc protocol does indeed guarantee the classically intractable of our sampling process .we first show that the local and measurements made on our sampling states during verification correspond to exact measurements of the nonlocal stabilizers , via the parity functions .this allows us to estimate the average with respect to random , which allows us to bound the average variational distance using results from .if our empirical estimate of remains sufficiently low , an application of hffding s inequality lets us show that verification measurements are sufficient to conclude that with any fixed statistical significance , proving theorem [ thm : two ] .we first briefly review our verification procedure .after preparation of a random , we choose with 50% probability to perform either sampling or verification measurements on .if verification is chosen , we further choose a random qubit of which is measured in , while all other qubits are measured in .we denote the measurement outcome string by , ignoring the fact that is associated with a different measurement basis .we then use our knowledge of the polynomial associated with to compute a parity function of , , where is the polynomial difference .it is easy to show that is independent of the value of .we show here that the process of measuring using single - qubit pauli measurements and then computing is exactly equivalent to measuring the nonlocal stabilizer as , where indicates the outcome corresponding to .both processes yield binary random variables as their output , and in order to prove that their probability distributions are identical , we can prove that both measurement schemes are associated with identical hermitian observables .while measurements of are clearly associated with the hermitian operator itself , it is nt immediately clear how we should interpret the measurements of as measuring any particular hermitian operator .the answer comes by recognizing that our relevant measurement statistics during verification consist only of the binary values , and forgets the specific outcomes which produced them .translating these outcomes into equivalent outcomes shows the expectation value of on to be \\ & = { \mathrm{tr}}\left[\rho_f \left ( x_i \sum_{{\mathbf{v } } } ( -1)^{\partial_{i}f({\mathbf{v } } ) } { \ensuremath{|{\mathbf{v}}\rangle\langle{\mathbf{v}}|}}\right)\right ] \\ & = { \mathrm{tr}}\left(\rho_f { h_f^{(i)}}\right).\end{aligned}\ ] ] in the last equality , we have used the definition of in eq .( [ eq : hami ] ) , while in the second to last equality we used .this reveals that the expectation value of is equal to that of on , and since we made no assumptions about , this shows that our verification scheme is exactly equivalent to measuring . as a concrete example , suppose we are working with the 3-qubit sampling state and wish to measure the stabilizer . 
in this case, we would perform our verification by measuring on qubit 1 , on qubits 2 and 3 , and then computing the polynomial .this process , which can be thought of as obtaining classical values and plugging them in to the stabilizer itself , would indicate a success when and , or when and at least one of or holds true . given the ability to measure arbitrary using single - qubit and measurements, we now note that the average over randomly chosen sites is equal to 1 on a given only when is the ideal sampling state .more generally , the techniques of show that this average can be used to bound the closeness of to , as measured by the fidelity . for our purposes , it will be more convenient to work with the square of this quantity , . when , can not be orthogonal to , and must have a fidelity squared of at least .if we average both sides of this equality over polynomials with random , then we find that the average fidelity squared of output states relative to their intended is bounded by the average as with eq .( [ eq : fidelity_bound ] ) in hand , we can now bound the average variational distance between the sampling distributions arising from and .we utilize the fact that the quantum 1-norm distance gives an upper bound on the variational distance of any output sampling distributions , where with the operator absolute value .we also use a well - known bound on the 1-norm distance , , which together yield in the above , we used the two bounds mentioned , as well as jensen s inequality for the concave function in eq .( [ eq : bound_two ] ) . using the relationship between the average of stabilizers and parity functions , , thisfinally lets us show that in order to verify that , it is sufficient for our parity function average to be below this gives the bound appearing in theorem [ thm : two ] .although any empirical estimate of obtained from finitely many measurements of is nt guaranteed to accurately reflect its true value , we can bound the closeness of this estimate with high probability using the uniformly random distribution of byproduct operators proved in appendix [ sec : preparation ] .in particular , this tells us that for any fixed , the average over output random byproducts is unbiased towards any fixed , and thus is an accurate indicator of the uniform closeness of sampling states .this lets us treat as a simple binary random variable , and use hffding s inequality to bound the probability of this estimate deviating too far from the true value of . hffding s inequality says that if we obtain an estimate of a binary random variable using independent samples , the probability of the true average lying above by more than is in our case , we choose to be our random parity function , and to be the difference between our specified tolerance , and the more numerically precise tolerance for classically intractable sampling derived in appendix [ sec : sampling ] , . setting , this gives a failure probability of converting this into a success probability then completes our proof of theorem [ thm : two ] .a final remark is given to our means of measuring the highly nonlocal , non - pauli stabilizers through single - qubit pauli measurements .this technique can actually be generalized to measure the stabilizers of any sampling state formed by starting with and applying an iqp circuit composed of , , , , and any higher multiply - controlled gates . 
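the equivalence used in this example is easy to confirm numerically. the polynomial below is a hypothetical stand-in (not the state of the figure); the check verifies that the operator built from an x on one qubit and the diagonal phase (-1) raised to the discrete derivative of f has expectation value 1 on the ideal sampling state, which is what guarantees that the parity computed from single-qubit outcomes reproduces a stabilizer measurement.

```python
import itertools
import numpy as np

n = 3
def f(v):                                    # hypothetical cubic polynomial over F_2
    v1, v2, v3 = v
    return (v1 * v2 * v3 + v1 * v2 + v3) % 2

basis = list(itertools.product([0, 1], repeat=n))
# ideal sampling state: amplitudes (-1)^f(v) / 2^(n/2)
psi = np.array([(-1) ** f(v) for v in basis], dtype=float) / 2 ** (n / 2)

def stabilizer_expectation(i):
    """<psi| X_i (-1)^(d_i f(Z)) |psi>, evaluated by direct summation."""
    total = 0.0
    for a, v in enumerate(basis):
        w = list(v)
        w[i] ^= 1                            # X_i flips bit i
        b = basis.index(tuple(w))
        d_if = (f(v) + f(tuple(w))) % 2      # discrete derivative of f at site i
        total += psi[b] * (-1) ** d_if * psi[a]
    return total

print([round(stabilizer_expectation(i), 6) for i in range(n)])   # [1.0, 1.0, 1.0]
```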
as these states include all hypergraph states as special instances, our verification scheme consequently extends that of , which requires the measured hypergraph stabilizers to be supported on a constant number of qubits. generalizing further, the necessary and sufficient condition for a local measurement scheme to exactly replicate measurements of a nonlocal operator in this manner is that it can be diagonalized in a basis which is a tensor product of single-qubit eigenbases. while this allows many different multi-qubit operators to be measured using only single-qubit measurements, a simple counterexample is given by the hermitian operator , which cannot be measured in this manner owing to its unique eigenstate being the entangled .
while quantum speed-up in solving certain decision problems on a fault-tolerant universal quantum computer has been promised, a timely research question is how far the resource requirements can be reduced to demonstrate a provable advantage on quantum devices without quantum error correction, which is otherwise crucial for prolonging the coherence time of qubits. we propose a model device made of multiple locally interacting qubits, designed such that simultaneous single-qubit measurements on it output probability distributions whose average-case sampling is classically intractable, under assumptions similar to those used for the sampling of non-interacting bosons and instantaneous quantum circuits. notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two novel features. (i) verifying the classical intractability of our sampling requires the same resources as sampling itself, demanding only a change in the pauli measurement bases. (ii) our implementation involves no adaptation of measurement bases, so that output probability distributions are generated in constant time, independent of the system size. thus, it could in principle be implemented without quantum error correction.
the computational power of quantum computers has threatened classical cryptosystems . for example ,public key cryptosystems , such as rivest - shamir - adleman public key cryptosystem , can be broken by quantum computers to be able to perform the fast factorization .on the other hand , quantum mechanical phenomena provide us a new kind of cryptosystems , called quantum key distribution ( qkd ) , from which we can in principle obtain perfectly random and secure key strings .the first quantum cryptographic protocol was presented by bennett and brassard and their protocol bore the acronym bb84 . in 1991 ,ekert proposed a qkd protocol using entangled particles .it was modified by bennett , brassard , and mermin .let us call the modified version the einstein - podolsky - rosen ( epr ) protocol .the epr protocol is a qkd between two persons using an epr pair of spin particles in the state . using the greenberger - horne - zeilinger ( ghz ) state the secret sharing protocolwas presented by hillery , bu and berthiaume . in this protocol, alice distributes the information on a key to bob and charlie . andthe key can be restored only when their information are collected by them . in this paper , applying the secret sharing protocol , we generalize the epr protocol on noiseless channels by the properties of several cat states and then obtain qkd protocols between group and group . in each group the information of a secret key is distributed to all members . after the process for recovery of the key, the two groups get the secret key . and the protocols require each member s approval and cooperation .furthermore , when some members try to affect the shared bit adversely , if the shared key does not have the correct correlation ( or anti - correlation ) then it should be revealed to others in the test step .any external eavesdropper should also be detected even if several members assist the eavesdropper .this paper is organized as follows : in section 2 , we investigate some properties of several cat states .the qkd protocol between two groups and its modification are presented in section 3 .we analyze the security for the protocol in section 4 .let us begin with reviewing cat states .the t - particle cat state is defined as a entangled state of the type whereby stands for the binary variable in , and .furthermore , equation ( [ eq : cat ] ) becomes one of the bell states when and one of the ghz states when . from now on , we use the following several cat states : we define , , , and . for , we notice the states in equation ( [ cat state1 ] ) and ( [ cat state2 ] ) have the following relations : when is a group of t persons , assume that , for one of the above four cat states , each person takes its one particle and measure in the - or -direction .firstly we let be the number of members modulo 4 who measure in the -direction , , and the sum of the measurement outcome of all members modulo 2 .then the following results are obtained .* suppose is even .then is for , and it is for , where ( mod 2 ) for any .* suppose is odd .then is for , and it is for . also , it is noticed that if the above suppositions of are not satisfied , becomes 0 or 1 with probability i.e. it has no rules . using an induction on the proof of such facts is given . 
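before the inductive proof given next, the stated correlation rules can be confirmed directly for small particle numbers. the brute-force check below is our own (using a standard statevector for the cat state (|0...0> + |1...1>)/sqrt(2)); the expectation value of a product of x and y measurements is +1 or -1 exactly when the mod-2 sum of outcomes is pinned to 0 or 1, and it vanishes when no rule applies.

```python
from functools import reduce
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def cat(t):
    v = np.zeros(2 ** t, dtype=complex)
    v[0] = v[-1] = 1 / np.sqrt(2)             # (|0...0> + |1...1>)/sqrt(2)
    return v

def parity_expectation(bases):
    """<cat| P_1 x ... x P_t |cat> with P_i in {X, Y}.
    +1 / -1 : the mod-2 sum of the outcomes is fixed to 0 / 1;
     0      : the outcomes obey no parity rule (odd number of y measurements)."""
    op = reduce(np.kron, [X if b == 'x' else Y for b in bases])
    psi = cat(len(bases))
    return float(np.real(np.vdot(psi, op @ psi)))

for bases in [('x', 'x', 'x'), ('y', 'y', 'x'), ('y', 'x', 'x'), ('y',) * 4]:
    print(bases, round(parity_expectation(bases), 6))
# ('x','x','x') -> 1.0   ('y','y','x') -> -1.0   ('y','x','x') -> 0.0   four y's -> 1.0
```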
to begin with , for is trivial .assume that these statements are true for .the cat state is considered .let be even .equation ( [ equ : decomposition ] ) implies if any one member takes measurement in the -direction and obtains 0 then and will be even , where g is the group of all members except that member .from equation ( [ equ : catphi ] ) and . thus .otherwise , by ( [ equ : catphi ] ) and .thus . on the other hand , for the casethat the member takes a measurement in the -direction , the proof is similar to the above case .hence , we hold that . that is , the previous assumption holds for . for other cat states ,all of the proofs are similar .now , we consider two parties , and , that consist of members and members , respectively . applying the previously described properties of the cat states , we obtain the table [ table : exencoding ] . [ cols="^,^,^,^,^,^ " , ] if and , it follows from the condition that the number of required test bits is not less than 270 .however , for fixed and the fewer number of test bits are required .the change of the error rate according to the number of test bits is exemplified in tables [ table : pro1,6,6 ] for the case that and the probability in the modified protocol has a little difference from the primary protocol because there are no discarded bits and is determined by and . by removing the strategy to use we can easily get the following error rates of the modified protocol . in the case , in the case , as in the case of the first protocol, we analyze just the case .the error rate in equation ( [ eq : pro2ab ] ) has no connections with the values of and , and depends just on and .for , we require only that satisfies . for , it becomes . in the modified protocol, we can notice that two groups require remarkably smaller test bits than the first protocol , and furthermore errors can be detected from the test step even in the cases or , and , while errors can not be detected for the case in the first protocol .we remark that if just a collector can take the information on or only one member plays the collector for all shared bits then a fewer test bits would be required .we assume that eve intercepts particles travelling from to , and call this state ` the intercepted state ' .she chooses an -particle cat state and resends particles of this cat state to ( ) .we refer to the remainder ( )-particle state as ` the remainder state ' . before announcement of ( or ) the measurement of the intercepted state ( or the remainder state ) can not give her the information on ( or ) .so she should measure on the intercepted state and the remainder state , after and are announced .even though she measures in the way , she should have the information on to obtain , since is completely determined by and . in order to take information on , she needs any conspirator in . on the other hand , she wants to change or into the values she desires to prevent errors from occurring .thus she needs collectors assistance in or . without any conspirators the test induces errors with probability in the first protocol and probability in the modified protocol , respectively .for these facts eve s strategy makes at least the following error rate in the first protocol . in the case , in the case , in the modified protocol the error rates are similar to the first protocol , because eve can not have a different strategy . 
from these equationswe can know that this strategy is not optimal to eve .applying the properties of cat states and the secret sharing , we proposed two generalized qkd protocols between two groups and showed that the protocols are secure against an external eavesdropper using the intercept / resend strategy .the importance of these protocols is that any member in the two groups can not obtain the secret key strings without cooperation , that is , the secret key strings can be obtained only under all member s approval . *acknowledgments * s.c . acknowledges the support from ministry of planning and budget and thanks s.lee for discussions .d.p.c . acknowledges the support from korea research foundation ( krf-2000 - 0150dp0031 ) .
in this paper, we investigate properties of some multi-particle entangled states and, applying these properties together with secret sharing, present a new type of quantum key distribution protocol that generalizes quantum key distribution between two persons to key distribution between two groups. in these protocols a group can retrieve the secure key string only if all of its members cooperate with one another. we also show that the protocols are secure against an external eavesdropper using the intercept/resend strategy.
the control of chaos in nonlinear systems has been an active field of research due to its potential applications in many practical situations where chaotic behaviour is not desirable . since the time of grebogi - ott - yorke methods were proposed to control chaotic dynamics to desired periodic states or to stabilize unstable fixed points of the system .recently these techniques have been extended to control spatio - temporal chaos in spatially extended systems .they are useful in many applications to control dynamics in plasma devices and chemical reactions and to reduce intensity fluctuations in laser systems .we note that control is important for emergence of regulated and sustainable phenomena in biological systems , for example to ensure stability of signal - off state in cell signaling networks which is desirable to prevent autoactivation .also in general , chaotic oscillations can degrade the performance of engineered systems and hence effective and simple control strategies have immense relevance in such cases .the dynamics of spatially extended systems can be modeled in a very simple but effective manner by using coupled map lattices ( cml ) introduced by kaneko .this approach forms an efficient method to coarse grain the local dynamics in such systems and has been used to understand complex natural phenomena .the dynamical states in such systems are extremely rich and varied , including spatiotemporal chaos , regular and irregular patterns , travelling waves , spiral waves etc. . in the case of coupled map lattices , methods for control and synchronization reported earlierare mostly based on nonlinear feedback control , constant and feedback pinning , etc . .recently coupled chaotic maps have been shown to stabilize to a homogeneous steady state due to time delay in coupling and in the presence of random delays .similarly , using multiple delay feedback , unstable steady states are shown to stabilize in chaotic electronic circuits . under threshold activated coupling at selected pinning sites , chaotic neuronal maps are reported to stabilize to regular periodic patterns . in one way coupled map lattice decentralized delayed feedbackcan introduce control of chaos .we note that most of the control schemes like feedback and pinning , the control units are often derived from the dynamics of the system and as such must be designed specific to each system . also delay feedback makes the system higher dimensional from the analysis point of view and requires careful choice of delay and its implementation for achieving control .however in many applications for the practical implementation of the control scheme , it is desirable to have a general scheme requiring minimum information about the system to be controlled . in this paper, we introduce a coupling scheme , which can be applied externally to the system and does not require a priori information about the system. 
as such it is very general and effective and can be implemented easily for control of spatio - temporal dynamics and patterns on coupled discrete systems .we note that in the context of continuous systems , one of the methods recently reported to induce amplitude death or steady state in coupled systems is coupling to an external damped system referred to as environmental coupling or indirect coupling .this method has been successfully implemented using electronic circuits and applied for controlling dynamics of single systems and systems with bistability as linear augmentation .these methods in general involve a single external system to control the dynamics of coupled continuous systems .the present paper extends this particular method in two ways : the method proposed is for controlling dynamics in discrete systems and the external control system is a spatially extended system .we find that the spatial extension has its own advantages and relevance as a method for suppressing dynamics .the control system in our scheme is designed as an external lattice of damped discrete systems such that without feedback from the system , this control lattice stabilizes to a global fixed point . when system and control lattices are put in a feedback loop ,the mutual dynamics works to control their dynamics to a homogeneous steady state or periodic state .we illustrate this using logistic map and henon map as site dynamics .the analysis is developed starting with a single unit of the interacting system and the control , which is then extended to connected rings of systems , interacting 2-dimensional lattices and random networks .this ` bottom up approach ' is chosen , not only because it gives clarity in describing the mechanism but also because the cases of even single system or rings are not studied or reported so far for discrete systems .we analyze the stability of the coupled system and control lattices using the theory of circulant matrices and routh - hurwitz s criterion for discrete systems .thus we obtain regions in relevant parameter planes that correspond to effective control .we also report results from detailed numerical simulations that are found to agree with that from the stability analysis . in particular cases ,we obtain control even when the units in the system lattice are uncoupled .so also , uncoupled units in the control lattice can control the coupled system lattice .moreover by tuning the parameters of the control lattice , we can achieve control to regular periodic patterns on the system lattice .recently , chimera states in a 1-d lattice with nonlocal couplings have been realized using liquid crystal light modulator .we show how our scheme can control the chimera states in this system .the extended and external control system introduced here can model an external medium effectively in controlling the dynamics of real world systems . as an example, we show how the dynamical patterns and undesirable excitations produced by coupled neurons can be controlled due to interaction with the extra cellular medium . in the endwe extend the scheme to control the dynamics of discrete time systems on a random network . 
in this case, the stability of the controlled state is analyzed using master stability function method and supported by direct numerical simulations .in this section we introduce our scheme for controlling a 1-d cml of size by coupling with an equivalent lattice of damped systems .the dynamics at the node of the system lattice constructed using the discrete dynamical system , , with time index , is given by : here is the state vector of the discrete system , is the nonlinear differentiable function . also , represents the strength of diffusive coupling and is a matrix , entries of which decide the variables of the system to be used in the coupling .the dynamics of a damped discrete map in the external lattice is represented as : where k is a real number with .thus for positive values of , this map is analogous to over - damped oscillator while for negative values of it is analogous to damped oscillator .the dynamics at the node of the 1-d cml of size of these maps with diffusive coupling can be represented by the following equation : we have , and represents the strength of diffusive coupling among the nodes of the external lattice .we consider periodic boundary conditions for both the lattices .when the system lattice is coupled to the lattice of external systems , node to node , with feedback type of coupling , then the resulting system can be represented as : here is the matrix which determines the components of the state vector which take part in the coupling with external system . in the next sectionwe start with the analysis of control for one unit of this coupled system. then we study the case of 1-d cml and 2-d cml and extend the scheme to random networks .in all these cases , for direct numerical simulations , the typical discrete systems considered are the logistic map and the henon map .the dynamics of a single unit of the model introduced in eq.([coupled1dcml ] ) is given by a discrete system coupled to an external system in a feedback loop as : to proceed further we consider logistic map as system dynamics to get : the parameter whose value determines the intrinsic nature of dynamics of logistic map is chosen as , so that the map is in the chaotic regime .time series obtained from eq.([singleunitlogistic ] ) numerically is plotted in fig .[ logisticts ] for two different values of coupling strength with .the control is switched on at and we find that depending on the coupling strength , the chaotic dynamics can be controlled to periodic or steady state . of the system in eq.([2dlogistic ] ) for 2 different values of feedback coupling strength with and .initially the logistic map is in the chaotic regime and the coupling with external system is switched on at time .(a ) for , chaotic dynamics is controlled to periodic dynamics .( b ) for , the controlled state is a fixed point .] we find that there are fixed points for the system in eq.([singleunitlogistic ] ) given by the following equations : and to analyze the stability of these fixed points , we consider jacobian of the system in ( [ 2dlogistic ] ) evaluated at the fixed point . the characteristic equation for this matrix is given by : where the corresponding fixed point will be stable if the absolute values of all the eigenvalues given by eq.([characteristiclogistic ] ) are less than .using routh - hurwitz s conditions for discrete systems , this would happen when the following conditions are satisfied : using these conditions , we find that the fixed point is unstable . 
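these conditions are straightforward to evaluate numerically. the sketch below assumes the single unit takes the additive feedback form x' = a x (1 - x) + eps1 z, z' = k z + eps2 x (our reading of the equations above; all parameter values are illustrative, not those of the figures). it confirms that the trivial fixed point stays unstable, evaluates the jacobian at the non-trivial fixed point, and scans the (eps1, eps2) plane for the region of spectral radius below one, which reproduces the steady-state region of fig. [ logisticplanes ] qualitatively.

```python
import numpy as np

def spectral_radius(a, k, e1, e2, trivial=False):
    """Largest |eigenvalue| of the Jacobian of
    x' = a*x*(1 - x) + e1*z ,  z' = k*z + e2*x   (assumed single-unit form)."""
    if trivial:
        x_star = 0.0                          # fixed point at the origin
    else:
        c = e1 * e2 / (1.0 - k)               # effective self-feedback through z
        x_star = (a - 1.0 + c) / a            # solves x* = a x*(1 - x*) + c x*
    jac = np.array([[a * (1.0 - 2.0 * x_star), e1],
                    [e2,                       k]])
    return np.abs(np.linalg.eigvals(jac)).max()

a, k = 4.0, 0.4                               # chaotic logistic map, damped control map
print("trivial fixed point    :", spectral_radius(a, k, -0.5, 1.0, trivial=True))  # > 1
print("non-trivial fixed point:", spectral_radius(a, k, -0.5, 1.0))                # < 1

eps = np.linspace(-1.5, 1.5, 201)
stable = np.array([[spectral_radius(a, k, e1, e2) < 1.0 for e1 in eps] for e2 in eps])
print("stable fraction of the scanned (eps1, eps2) plane:", stable.mean())
```

note that in this reduced form the eigenvalues depend on the feedback strengths only through the product eps1*eps2, so the stable region produced by the scan is symmetric under exchanging eps1 and eps2 and under flipping both signs, in line with the symmetry noted for fig. [ logisticplanes ](a).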
the other fixed point given by eq.([logisticfixednontrivial ] ) is stable for a range of parameter values and these ranges can be obtained using the analysis described above .the numerical and analytical results thus obtained are shown in fig .[ logisticplanes ] . from fig .[ logisticplanes](a ) it is clear that the steady state region ( red ) is symmetric about the line and the line .[ logisticplanes](b ) shows the region of steady state behavior in - plane where . in both these plots ,black boundary is obtained analytically and green region , the region in which system is oscillatory , is obtained numerically .it is clear that curves in black on the boundaries of the region that correspond to steady state , obtained analytically using eq.([stabilityconditions2d ] ) , agree well with the results of numerical analysis .( color online ) parameter planes for a chaotic logistic map coupled to an external map : ( a ) - plane for , ( b ) - plane where . here, the region where the fixed point of coupled system is stable is shown in red ( dark gray ) , in the green ( light gray ) region system is oscillatory while in the white region system is unstable .the black boundary surrounding the red region , in both the plots , is obtained analytically . ]we observe that the transition to steady state behaviour for this system , for a given value of as is varied , happens through reverse doubling bifurcations .this would mean that by tuning , it is possible to control the system to any periodic cycle or fixed point .as a second example we consider the henon map as the site dynamics .it is a 2-d map given by : then the dynamics of the coupled system is given by : the intrinsic dynamics of henon map is chaotic with parameters and .it is clear from time series given in fig .[ henonts ] that the coupled system in eq.([3dhenon ] ) , can be controlled to periodic state or fixed point state by varying the parameters , and .time series , of the system in eq.([3dhenon ] ) for different values of coupling strength with and .initially henon map is in the chaotic regime and the coupling with external system is switched on at time . ( a ) , chaotic dynamics is controlled to periodic dynamics of period two , ( b ) , the system is controlled to a fixed point . ] the -d dynamical system in ( [ 3dhenon ] ) has a pair of fixed points given by : \nonumber\\ y^{\ast}&=&\frac{b}{2a}\left[-c\pm \sqrt{c^{2}+4a}\right]\\ z^{\ast}&=&\frac{\varepsilon}{2a(1-k)}\left[-c\pm \sqrt{c^{2}+4a}\right]\nonumber\end{aligned}\ ] ] where , the stability of these fixed points is decided by the jacobian matrix evaluated at that fixed point : the characteristic polynomial equation for the eigenvalue , in this case , is given by : where similar to the case of logistic map , the absolute values of all eigenvalues given by eq.([characteristichenon ] ) are less than if the following conditions are satisfied : here also , one of the solutions in eq.([3dhenonfixedpoints ] ) ( with negative sign ) is unstable while the other is stable for regions in parameter planes - and - as shown in fig . [ henonplanes ] .the color codes and details are as given in fig .[ logisticplanes ] . 
for any particular value of parameter ,the nature of transition in this case is through a sequence of reverse period doubling bifurcations similar to the case of logistic map .( color online ) parameter planes for a coupled system of henon map and external map .( a)- plane for , ( b)- plane .color code is same as that for fig .[ logisticplanes ] ] in this section , we consider control of dynamics on a ring of discrete systems . for this , as mentioned in section ii , we construct a ring of external maps which are coupled to each other diffusively and then couple it to the ring of discrete systems in one - to - one fashion with feedback type of coupling . with logistic map as on - site dynamics , the dynamics at the site of the coupled system is given by : where numerically , we find that the system in eq.([coupledlogisticrings ] ) can be controlled to a periodic state and a state of fixed point or amplitude death by adjusting the coupling strengths and . fig .[ space - timelogisticring ] shows two such cases , where in ( a ) lattice is controlled to a temporal -cycle and in ( b ) temporal fixed point state .( color online ) space - time plots for a ring of logistic maps coupled to an external ring .the coupling is switched on at time .parameter values of the system are : , , , .( a ) for ; , the dynamics is controlled to a -cycle state and ( b ) for ; , the dynamics is controlled to a fixed point state .the color code is as per the value of ] to analyze the stability of the controlled and synchronized fixed point , we consider the system in eq.([coupledlogisticrings ] ) as a 1-d lattice of single units considered in section iii coupled diffusively through x and z. because of the synchronized nature of the fixed point and periodic boundary conditions , jacobian of this system is block circulant matrix as given below . for the case of logistic maps , and are matrices and denotes zero matrix . in explicit form , these matrices are given by : \end{aligned}\ ] ] and \end{aligned}\ ] ] following the analysis given in for eigenvalues of the general block circulant matrix as given below : we construct blocks each of order as follows : where is a complex root of unity : with + for our problem , and .thus we get , where thus , in this case , is given by : where , now we make use of the fact that eigenvalues of jacobian when evaluated at the synchronized fixed point are the same as eigenvalues of these matrices since the jacobian is block circulant matrix .this means that , to check the stability of the fixed point of the two coupled rings , instead of directly calculating eigenvalues of jacobian matrix of size , we can calculate eigenvalues of all matrices , each of which is matrix , which is a much simpler task .the characteristic equation for is given by : where in this case , the corresponding fixed point with coordinates given by eq.([logisticfixedtrivial ] ) or eq.([logisticfixednontrivial ] ) is stable when the conditions given in eq.([stabilityconditions2d ] ) are satisfied for each .these conditions then give us the regions in parameter space where the fixed point state of the whole system is stable i.e. 
the regions where the control of the original dynamics to the steady state is possible .similar to the case of single unit , the fixed point with coordinates given by eq.([logisticfixedtrivial ] ) turns out to be unstable for all values of control parameters .the other fixed point is stable for a range of parameter values .these amplitude death regions obtained numerically in - and - planes for this system of coupled rings are shown in fig .[ logisticringplanes ] . in each of these plots ,the amplitude death region is shown with red color while the black boundary around that region is obtained analytically . from fig .[ logisticringplanes](b ) , it is clear that the amplitude death can happen even when the logistic maps are not coupled to each other directly ( corresponding to ) or when external maps are not coupled to each other ( corresponding to ) .( color online ) parameter planes for a ring of logistic maps coupled to an external ring : ( a ) - plane when and , ( b ) - plane when and .color code is same as that for fig .[ logisticplanes ] ] as our next example , we illustrate the control scheme for a ring of henon maps .the coupled system in this case can be represented as follows : where here also the system can be controlled to a synchronized fixed point and because of the periodic boundary conditions and because of the synchronized nature of fixed point , jacobian becomes block circulant as given in eq.([jacobian ] ) . in this case , and are matrices and represents zero matrix . in explicit form , and are given by : and let then , the characteristic equation for is given by : where using these parameters , we define and as in the case of single unit in eq.([stabilityconditions3d ] ) .this allows us to check stability for each of the matrices which in turn gives us regions in parameter space where the control of the dynamics is possible for a ring of henon maps . in fig .[ epskhenonring ] we show these regions in - parameter plane , for and for .( color online ) - parameter planes for a ring of henon maps coupled to an external ring : ( a ) when , , ( b ) when , .color code is same as that for fig .[ logisticplanes ] ] as an application of our scheme , we present the control of chimera states produced on a 1-d lattice with non - local couplings .the chimera states have been observed in many spatially extended systems and coupled oscillators with non - local couplings .these states , which correspond to the coexistence of spatial regions with synchronized and incoherent behavior , may not be desirable in many systems like power grids .here we specifically consider the chimera states experimentally realized using liquid - crystal spatial light modulator .this system corresponds to a 1-d lattice with each node diffusively coupled to of its nearest neighbours . in this casethe phase of each node is the dynamical variable whereas a physically important quantity is the output intensity i , related to phase as : the coupled map lattice is then represented by the following set of equations : \right\}\end{aligned}\ ] ] for the coupling radius and as shown in , the spatial profile of intensity has two distinct synchronized domains which are separated by small incoherent domains confirming the existence of chimera states. 
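a simple diagnostic for the coherent and incoherent domains just described (our own construction, independent of the experimental details) is the local roughness of the spatial profile: it is close to zero inside a synchronized domain and of order one inside an incoherent one, so thresholding it marks out the chimera structure before and after control is applied.

```python
import numpy as np

def local_roughness(profile, radius=5):
    """Mean squared deviation of each node from its neighbourhood average
    on a ring (periodic boundaries)."""
    n = len(profile)
    rough = np.empty(n)
    for i in range(n):
        idx = [(i + d) % n for d in range(-radius, radius + 1)]
        window = profile[idx]
        rough[i] = np.mean((window - window.mean()) ** 2)
    return rough

# toy profile: two smooth domains separated by two noisy (incoherent) stripes
rng = np.random.default_rng(3)
profile = np.concatenate([np.full(80, 0.2), rng.random(20),
                          np.full(80, 0.8), rng.random(20)])
print("fraction of nodes flagged incoherent:",
      float((local_roughness(profile) > 0.02).mean()))
```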
the coupling with external lattice in this case can be as : \right\}+\varepsilon_{1}z_{i}(n)\nonumber\\ z_{i}(n+1 ) & = & kz_{i}(n ) + d_{e}(z_{i-1}(n)+z_{i+1}(n)-2z_{i}(n ) ) + \varepsilon_{2}\phi_{i}(n ) \end{aligned}\ ] ] we find that with this control scheme , spatial as well as temporal control of chimera state can be achieved .we plot numerically the chimera states obtained numerically from eq.([slmcml ] ) and the controlled state from eq.([slm_control ] ) in fig .[ slm](a ) . fig .[ slm](b ) shows time series of a typical node indicating a control to steady state .( a ) chimera state developed in a 1d coupled map lattice of size represented by eq.([slmcml ] ) shown with red color . after the control is applied , chimera state is replaced by a spatially synchronized state as shown in black .( b ) the time series of a typical node of the cml .when chimera state is present , individual node is in 2-cycle state . at time control is switched on with , which decreases the intensity fluctuations though 2-cycle is still present .further increase in coupling with at time gets rid of all fluctuations and the intensity goes to a constant value spatially and temporally . ]now we consider controlling the dynamics on 2-d lattice of discrete systems by coupling to an external lattice as follows : here , is the matrix which determines the components of state vector which take part in the coupling with an external system . in fig .[ logisticlatticets ] , we show the numerically obtained time series from a typical node of a lattice of logistic maps coupled to an external lattice for different coupling strengths . in all three cases , coupling with an external latticeis switched on at time .the time series of a typical node from a lattice of logistic maps coupled to an external lattice when .parameter values are : , and .in all the cases coupling with external lattice is switched on at time .( a ) for , the chaotic dynamics is controlled to 4-cycle state .( b ) for , it is a 2-cycle state .( c ) for , the dynamics is quenched to fixed point state or amplitude death state . ] in fig .[ logisticlatticest ] , we show space - time plots of 2-d lattice of logistic maps with and without coupling with an external lattice . here the time series from nodes along one of the main diagonals of the lattice are plotted on the y - axis and time on the x - axis .the coupling strength with the external lattice is adjusted so that lattice of logistic maps is controlled to a 2-cycle state temporally .( color online ) ( a ) space - time plot of patterns formed on 2-d lattice of logistic maps when the individual maps are in chaotic regime , and ( b ) when this lattice is coupled to the external lattice with , , , the dynamics is controlled to a 2-cycle state temporally .also , spatially pattern on the lattice becomes regular .color coding in the plot is according the values of .,height=226 ] the stability of the synchronized steady state of coupled 2-d lattices can be analyzed in exactly similar fashion to the coupled rings discussed in detail in previous section by re - indexing lattice site with a running index defined as .then jacobian for the whole system becomes block circulant matrix with each block s order being .the intersection of regions satisfying the stability conditions for all the matrices will then give the region of amplitude death in the corresponding parameter planes .the results thus obtained are qualitatively similar to the case of the ring and therefore not included here . 
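a one-step update of the two-dimensional scheme can be sketched directly with periodic boundaries. the form below is our reading of the equations above (diffusive coupling written as a discrete laplacian, with illustrative parameter values); starting close to the homogeneous fixed point, the shrinking spatial spread illustrates the linear stability of the controlled state rather than the full transient from chaos, which depends on the chosen coupling strengths.

```python
import numpy as np

def laplacian(u):
    """Five-point discrete Laplacian with periodic boundaries."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def step(x, z, a=4.0, k=0.4, eps=0.05, d_e=0.05, e1=-0.5, e2=1.0):
    """One synchronous update of the system lattice x and control lattice z, assuming
    x' = a x (1 - x) + eps*Lap(x) + e1 z   and   z' = k z + d_e*Lap(z) + e2 x."""
    x_new = a * x * (1.0 - x) + eps * laplacian(x) + e1 * z
    z_new = k * z + d_e * laplacian(z) + e2 * x
    return x_new, z_new

# homogeneous fixed point of the assumed model, perturbed by weak noise
a, k, e1, e2 = 4.0, 0.4, -0.5, 1.0
x_star = (a - 1.0 + e1 * e2 / (1.0 - k)) / a
z_star = e2 * x_star / (1.0 - k)

rng = np.random.default_rng(4)
x = x_star + 0.01 * rng.standard_normal((64, 64))
z = z_star + 0.01 * rng.standard_normal((64, 64))
for _ in range(300):
    x, z = step(x, z)
print("spatial spread of x after 300 steps:", float(x.max() - x.min()))  # shrinks toward 0
```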
in this sectionwe show how the present scheme can provide a phenomenological model for the role of external medium in controlling the dynamics of coupled neurons .it is established that neurons can cause fluctuations in the extra cellular medium which when fed back to the neurons can affect their activity via ephaptic coupling . in this context , we take the 2-d control lattice as the discrete approximation to the medium and the dynamics of individual neuron , to be the rulkov map model given by the following equations : here , represents the membrane potential of the neuron and is related to gating variables . , and are intrinsic parameters of the map .we take and for , there exists a stable fixed point for the map given by and . in the range to , the map shows periodic behaviour .we model the system of coupled neurons as a 2-d lattice which is in a feedback loop with the external lattice as given by : for a realistic modeling of the system , we take the values of the parameter distributed uniformly and randomly between and among all the maps and the extra cellular medium as the external lattice of same size with a spread in values distributed randomly in ] .when coupling with the external lattice is switched on , the patterns get suppressed giving steady and synchronized states . in fig .[ pattern](b ) we show the space time plot for the lattice with and , and and the coupling is switched on at time .the horizontal axis represents the systems on the main diagonal of the lattice and vertical axis is time .as is clear from the plot , the external lattice suppresses the pattern on the lattice .we note that dynamical spatio - temporal patterns can arise in the neuronal lattice because of defect neurons that are different from others .we illustrate this by considering a lattice of neurons in which all neurons have in the range to except one neuron which is in the excited state with .the patterns produced are shown in fig .[ defect](a ) .such patterns of propagating excitation waves can be pathological unless suppressed .coupling with the external medium can effectively suppress them .[ defect](b),(c),(d ) display the snapshots of the lattice after coupling with the external lattice .now we generalize the control scheme given above to control the dynamics on random networks of discrete systems . in this case, we construct external network with the same topology as the system network and then we couple these networks in one to one fashion with feedback type of coupling .this coupled system of two networks , when the coupling between the nodes of the same network is of diffusive type , can be represented by : in this equation , intrinsic -dimensional dynamics on the nodes of the network is given by and the same function is used to couple nodes in the network diffusively . 
also , is the entry of the adjacency matrix of the network , is the matrix which determines the components of state vector to be used in the coupling with the other nodes and is the matrix which determines the components of state vector to be used in the coupling with nodes of the external network .we perform numerical simulations for a network of nodes for different random topologies which include three realizations of erds - rnyi network with average degree equal to and two realizations of barab ' asi - albert scale - free network .the dynamics on the nodes is of logistic map in all these simulations with .we find that for all topologies , as the coupling strength is increased both networks reach synchronized fixed point of the single unit given in eq.([singleunitlogistic ] ) .this synchronized steady sate or fixed point can be considered as amplitude death state of the network and to analyze its stability , we invoke the master stability analysis for this fixed point .since the external network has exactly the same topology as the original network and since nodes of two networks are coupled in one to one fashion , this system of two coupled networks can be considered as a single network of the single units considered in section iii .then if represents component of perturbation to the synchronized fixed point of the network at node , we expand this perturbation as : where is component of eigenvector of laplacian matrix of the network . following the analysis given in then get : {\bm c}^{r}(n)\ ] ] for logistic map as nodal dynamics and for diffusive coupling among the nodes , matrices and are given by : and this gives , \ ] ] then for the synchronized fixed point state of the system of coupled networks to be stable , for every eigenvalue of the laplacian matrix of the network , absolute values of all the eigenvalues of the matrix should be less than .we now consider in as a parameter and for a range of , find eigenvalue with largest absolute value and use this eigenvalue as master stability function ( msf ) .the results are shown in fig .[ msf ] where for different values of , the master stability curves are shown .we see that all the curves are continuous over the whole range of .the region of the curve for which is the stable region for that curve .if all the eigenvalues of laplacian matrix for a given topology fall inside this region , then the state of synchronized fixed point is stable . since the smallest eigenvalue of laplacian matrix for unweighted and undirected network is always , we need to worry about the largest eigenvalue only .hence if both and are inside this region then fixed point for the network is stable .the red curve ( thick continuous ) in fig . [ msf ] is for which is the minimum coupling strength required for any topology for fixed point of the whole network to be stable . for any coupling strength less than this , absolute value of is greater than and hence fixed point of the network can not be stable . to illustrate this ,we plot the master stability curve for shown in the figure as dotted blue curve .the third curve ( thin continuous ) is for . 
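the master-stability computation described above can be sketched in a few lines. we stress that the variational block used here is our own reconstruction — we assume the network coupling acts diffusively through the map output f(x) while the feedback to the external maps is the one-to-one linear term only — so the numbers are indicative of the procedure rather than of the exact curves in fig. [ msf ].

```python
import numpy as np

def msf(gamma, a=4.0, k=0.4, eps=0.1, e1=-0.5, e2=1.0):
    """Spectral radius of the assumed variational block for Laplacian eigenvalue gamma:
        dx' = a (1 - 2 x*) (1 - eps*gamma) dx + e1 dz
        dz' = e2 dx + k dz
    evaluated at the synchronized non-trivial fixed point x*."""
    c = e1 * e2 / (1.0 - k)
    x_star = (a - 1.0 + c) / a
    block = np.array([[a * (1.0 - 2.0 * x_star) * (1.0 - eps * gamma), e1],
                      [e2,                                             k]])
    return np.abs(np.linalg.eigvals(block)).max()

# Laplacian spectrum of one Erdos-Renyi realization (N = 50, edge probability 0.2)
rng = np.random.default_rng(5)
A = np.triu((rng.random((50, 50)) < 0.2).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
gammas = np.sort(np.linalg.eigvalsh(L))

print("largest Laplacian eigenvalue:", round(gammas[-1], 3))
print("MSF at gamma_2 and gamma_max:", round(msf(gammas[1]), 3), round(msf(gammas[-1]), 3))
# the controlled fixed point is stable when the MSF is below 1 at every Laplacian eigenvalue
```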
in the figure we mark largest eigenvalues of laplacian matrices of different networks for which numerical simulations are performed .( color online ) master stability functions for a network of logistic maps coupled to an external network for three values of coupling strengths .the red curve ( thick curve ) is for critical value of which is the minimum coupling strength necessary for the fixed point state of the network to be stable when all other parameters are kept constant .the blue curve ( dotted ) is for and it is clear from the figure that for all topologies , fixed point state is unstable at this coupling strength .the black curve ( thin continuous ) is for . for a network of nodes , with different random topologies ( erds - rnyi topologies and scale - free topologies ) ,the largest eigenvalue is marked on the graph . ] to support the master stability analysis , we present numerical results for one particular realization of erds - rnyi network .we calculate an index to characterize the synchronized fixed point by taking the difference between global maximum and global minimum of the time series of at each node of the network after neglecting the transients and averaging this quantity over all the nodes . for parameter values corresponding to the controlled state of amplitude death, this index has value . in fig .[ adindexlogisticnetwork ] , we show variation of with and it is clear that goes to at indicating that amplitude death happens as predicted by master stability analysis .the index obtained numerically as a function of coupling strength for a system of logistic network coupled to an external network .the topology of both networks in this case is of erds - rnyi type . at approximately ,control to steady state happens . ]the suppression of unwanted or undesirable excitations or chaotic oscillations in systems is essential in a variety of fields and therefore mechanisms for achieving this suppression in connected systems are of great relevance . in the specific context of neurons , it is known that exaggerated oscillatory synchronization or propagation of some excitations can lead to pathological situations and hence methods for their suppression are very important .methods to avoid oscillations by design of system controls such as power system stabilizers are important in engineering too . in this paper we present a novel scheme where suppression of dynamics can be induced by coupling the system with an external system .our study is mostly on extended systems like rings and lattices in interaction with similar systems with different dynamics .however the method is quite general and is applicable to other such situations also where control and stabilization to steady states are desirable . 
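the index used above is simple to compute from any stored trajectory; a minimal implementation (ours) for a record of shape (time, nodes), with the transient discarded, is given below.

```python
import numpy as np

def amplitude_death_index(traj, transient=1000):
    """Average over nodes of (global max - global min) of each node's time series.

    traj : array of shape (n_steps, n_nodes) holding x_i(n).
    The index is ~0 once every node has settled to a steady value and stays
    of order one while the nodes keep oscillating.
    """
    tail = np.asarray(traj)[transient:]
    return float(np.mean(tail.max(axis=0) - tail.min(axis=0)))

# toy check: an oscillating record versus a frozen one
t = np.arange(4000)[:, None]
oscillating = 0.5 + 0.4 * np.sin(0.3 * t + np.arange(10))
frozen = np.full((4000, 10), 0.64)
print(amplitude_death_index(oscillating), amplitude_death_index(frozen))   # ~0.8 and 0.0
```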
even in the case of single systems , as illustrated here for henon and logistic maps , this method can be thought of as a control mechanism for targeting the system to a steady state .we develop the general stability analysis for the coupled systems using the theory of maps and circulant matrices and identify the regions where suppression is possible in the different cases considered .the results from direct numerical simulations carried out using two standard discrete systems , logistic amp and henon map , agree well with that from the stability analysis .our method will work even with a single external system as control ; however , extended or coupled external system enhances the region of effective suppression in the parameter planes .our study indicates that by tuning the parameters of the external control system , we can regulate the dynamics of the original system to desired periodic behavior with consequent spatial order .we also report how the present coupling scheme can be extended to suppress dynamics on random networks by connecting the system network with an external similar network in a feedback loop . in this contextthe stability analysis using master stability function , gives the critical strength of feedback coupling required for suppression .our analysis leads to the interesting result that control to steady state occurs even when the systems are not coupled among themselves but coupled individually to the connected external systems .so also , individual external systems , which are not connected among themselves but each one connected to one node in the system , can suppress the dynamics of the connected nodes in the system .this facilitates flexibility in the design of control through connected or isolated external systems depending on the requirement .the scheme of external and extended control , introduced here , has relevance in understanding and modeling controlled dynamics in real systems due to interaction with external medium .we illustrate this in the context of coupled neurons where the suppression of dynamical patterns is achieved by coupling with the extra cellular medium .we also indicate how the same scheme can be applied to control chimera states in non - locally coupled systems . until recently ,cml was considered as an idealized theoretical model for studies related to spatio - temporal dynamics .however the recent works on a physically realizable cml system using liquid - crystal spatial light modulator and based on digital phaselocked loop have opened up many interesting possibilities in studying their control schemes also experimentally . in this contextwe note that the scheme presented in the paper is suitable for a physical realization using optical instruments or electronic hardware .one of the authors ( s.m.s . ) would like to thank university grants commission , new delhi , india for financial support .00 ott e , grebogi c , yorke ja. controlling chaos .lett . 1990 ; 64:1196 - 1199 .boccaletti s , grebogi c , lai y - c , mancini h , maza d. the control of chaos : theory and applications .phys . rep .2000 ; 329:103 - 197 .schll e , schuster hg , editors .handbook of chaos control .wiley - vch ; 2007 .klinger t , schrder c , block d , greiner f , piel a , bonhomme g , et al .chaos control and taming of turbulence in plasma devices .plasmas 2001 ; 8:1961 - 1968 .crdoba a , lemos mc , jimnez - morales f. periodical forcing for the control of chaos in a chemical reaction .2006 ; 124:014707 .roy r , murphy tw jr , maier td , gills z , hunt er . 
dynamical control of a chaotic laser : experimental stabilization of a globally coupled system .1992 ; 68:1259 .wei m - d , lun j - c .amplitude death in coupled chaotic solid - state lasers with cavity - configuration - dependent instabilities .phys . lett . 2007 ; 91:061121 .kondor d , vattay g. dynamics and structure in cell signaling networks : off - state stability and dynamically positive cycles .plos one 2013 ; 8(3):e57653 .kaneko k. spatiotemporal chaos in one- and two - dimensional coupled map lattices .physica d 1989 ; 37:60 - 82 .kaneko k. overview of coupled map lattices .chaos 1992 ; 2:279 - 282 .kaneko k , editor .theory and applications of coupled map lattices .wiley ; 1993 .fang j , ali mk .nonlinear feedback control of spatiotemporal chaos in coupled map lattices .discrete dynamics in nature and society 1997 ; 1:283 - 305 .parmananda p , hildebrand m , eiswirth m. controlling turbulence in coupled map lattice systems using feedback techniques .phys . rev .e 1997 ; 56:239 - 244 .parekh n , parthasarathy s , sinha s. global and local control of spatiotemporal chaos in coupled map lattices .1998 ; 81:1401 - 1404 .gang h , zhilin q. controlling spatiotemporal chaos in coupled map lattice systems .1994 ; 72:68 - 71 .konishi k , kokame h. time - delay - induced amplitude death in chaotic map lattices and its avoiding control .physics letters a 2007 : 366:585 - 590.s masoller c , marti ac .random delays and the synchronization of chaotic maps .2005 ; 94:134102 .ahlborn a , parlitz u. stabilizing unstable steady states using multiple delay feedback control .2004 ; 93:264101 .shrimali md .pinning control of threshold coupled chaotic neuronal maps .chaos 2009 ; 19:033105 .konishi k , kokame h. decentralized delayed - feedback control of a one - way coupled ring map lattice .physica d 1999 ; 127:1 - 12 .resmi v , ambika g , amritkar re .general mechanism for amplitude death in coupled systems .e 2011 ; 84:046212 .resmi v , ambika g , amritkar re , rangarajan g. amplitude death in complex networks induced by environment .e 2012 ; 85:046211 .sharma a , sharma pr , shrimali md .amplitude death in nonlinear oscillators with indirect coupling .lett . a 2012 ; 376:1562 - 1566 .banerjee t , biswas d. synchronization in hyperchaotic time - delayed electronic oscillators coupled indirectly via a common environment .nonlinear dyn 2013 ; 73:2025 - 2048 .sharma pr , sharma a , shrimali md , prasad a. targeting fixed - point solutions in nonlinear oscillators through linear augmentation .e 2011 ; 83:067201 .sharma pr , shrimali md , prasad a , feudel u. controlling bistability by linear augmentation .lett . a 2013 ; 377:2329 - 2332 .hagerstrom am , murphy te , roy r , hvel p , omelchenko i , schll e. experimental observation of chimeras in coupled - map lattices . nature physics 2012 ; 8:658 - 661 . sonis m. critical bifurcation surfaces of 3d discrete dynamics .discrete dynamics in nature and society 2000 ; 4:333 - 343 .eigenvectors of block circulant and alternating circulant matrices .inf . math .sci . 2005 ; 8:123 - 142 .dudkowski d , maistrenko y , kapitaniak t. different types of chimera states : an interplay between spatial and dynamical chaos .e 2014 ; 90:032920 .panaggio mj , abrams dm .chimera states : coexistence of coherence and incoherence in networks of coupled oscillators .arxiv:1403.6204v2 [ nlin.cd ] 2014 .yao n , huang z - g , lai y - c , zheng z - g .robustness of chimera states in complex dynamical systems .sci . 
rep .2013 ; 3:3522 .anastassiou ca , perin r , markram h , koch c. ephaptic coupling of cortical neurons .nature neuroscience 2011 ; 14:217223 .rulkov nf .modeling of spiking - bursting neural behavior using two - dimensional map .e 2002 ; 65:041922 .pecora lm , carroll tl .master stability functions for synchronized coupled systems .phys . rev .1998 ; 80:2109 - 2112 .newman mej . networks .oxford university press ; 2010 .banerjee t , paul b , sarkar bc .spatiotemporal dynamics of a digital phase - locked loop based coupled map lattice system .chaos 2014 ; 24:013116 .
We present a new coupling scheme to control spatio-temporal patterns and chimeras on 1-d and 2-d lattices and random networks of discrete dynamical systems. The scheme involves coupling with an external lattice or network of damped systems. When the system network and the external network are set in a feedback loop, the system network can be controlled to a homogeneous steady state or a synchronized periodic state, with suppression of the chaotic dynamics of the individual units. The control scheme has the advantage that its design does not require any prior information about the system dynamics or its parameters, and it works effectively for a range of parameters of the control network. We analyze the stability of the controlled steady state, or amplitude death state, of lattices using the theory of circulant matrices and the Routh-Hurwitz criterion for discrete systems, and this helps to isolate regions of effective control in the relevant parameter planes. The conditions thus obtained are found to agree well with those obtained from direct numerical simulations in the specific context of lattices with the logistic map and the Hénon map as on-site dynamics. We show how chimera states developed in an experimentally realizable 2-d lattice can be controlled using this scheme. We propose that this mechanism can provide a phenomenological model for the control of spatio-temporal patterns in coupled neurons due to non-synaptic coupling with the extracellular medium. We extend the control scheme to regulate dynamics on random networks and adapt the master stability function method to analyze the stability of the controlled state for various topologies and coupling strengths.
Keywords: control of dynamics, amplitude death, chimera, coupled map lattice, random network. PACS: 05.45.Xt, 05.45.Gg, 05.45.Ra
In a previous article (see ), by introducing the ``similarity'' relation and applying it to the restricted three-body problem, the ``similar'' equations of motion were obtained. These equations were connected with the classical equations of motion by some coordinate transformation relations (see equations (17) in ). In that paper, similar parameters and physical quantities and similar initial conditions were also defined, and some trajectories of the test particles in the physical and, respectively, planes were plotted. Denoting by and the components of the binary system (whose masses are and ), the equations of motion of the test particle (in the frame of the restricted three-body problem) in the coordinate system are (see equations (11)-(13) in ): where . In the similar coordinate system the equations of motion of the test particle are (see equations (14)-(16) in ): where . It can easily be verified that the equations of coordinate transformation are: . One observes that equations (1)-(3) and (5)-(7) have singularities in , , and . These situations correspond to a collision of the test particle with or , in a straight line. The collision is due to the nature of the Newtonian gravitational force ( ). If the test particle approaches one of the primaries very closely ( ), such an event produces a large gravitational force ( ) and sharp bends of the orbit. These singularities can be removed by regularization. (Remark: the purpose of regularization is to obtain regular equations of motion, not regular solutions.) Euler seems to have been the first (in 1765) to propose regularizing transformations, when studying the motion of three bodies on a straight line (see ). The regularization method has become popular in recent years (see ) for long-term studies of the motion of celestial bodies. These problems have a special merit in astronomy, because with their help we can study more efficiently the equations of motion with singularities. At collision the equations of motion possess singularities. The problem of singularities plays an important role under computational, physical and conceptual aspects (see ). The singularities occurring at collisions can be eliminated by a proper choice of the independent variable. The basic idea of the regularization procedure is to compensate for the infinite increase of the velocity at collision. For this reason a new independent variable, the fictitious time, is adopted. The corresponding equations of motion are regularized by two transformations: the time transformation and the coordinate transformation. The most important part of the regularization is the time transformation, in which a new fictitious time is used in order to slow the motion near the singularities. The regularization can be local or global. If a local regularization is done, then the time and coordinate transformations eliminate only one of the two singularities. An example of a local regularization is Birkhoff's transformation (see ). A global regularization eliminates both singularities at once (see ). Because our singularities are given in terms of , , , , in this paper a global regularization will be done. In order to do this, we need to replace the Cartesian equations (1)-(3) and (5)-(7) with the corresponding canonical equations of motion. The canonical coordinates are formed by the generalized coordinates and the generalized momenta. The Hamiltonian, defined by the equation , becomes (see p.
266, for the generalized momenta , when the coordinates system rotates ) : here \;,\ ] ] with here the generalized coordinates and the generalized momenta were : then , the canonical equations have , in the coordinate system , the explicit forme : it is easy to verify that using the relations ( 14 ) , the explicit canonical equations become the cartesian equations ( 1)-(3 ) . in order to write the canonical equations in the similarcoordinate system , we have in view the theoretical considerations from the article .the index _ s _ refers to similar quantities .then , the similar hamiltonian will be : ( see p. 266 ) : where \;,\ ] ] with here the generalized coordinates and the generalized momenta were : then , the canonical equations have , in the coordinate system , the explicit forme : it is easy to verify that using the relations ( 27 ) , ( 28 ) , ( 29 ) , the explicit canonical equations ( 30 ) , ( 31 ) , ( 32 ) , become the cartesian equations ( 5)-(7 ) . _remark _ : from equations ( 13 ) and ( 24 ) it is easy to observe that and ( see also figure 1 in ) .the equations of motion ( 19)-(21 ) and ( 30)-(32 ) have singularities in and , respectively in and . we shall remove these singularities by regularization .several regularizing methods are known ( see stiefel et al . 1971 ) . in this paperwe shall use the levi - civita s method , applied when the bodies are moving on a plane .the two steps performed in the process of regularization of the restricted problem are the introduction of new coordinates and the transformation of time .the combination of the coordinate ( dependent variable ) transformation and the time ( independent variable ) transformation have an analytical importance and increase the numerical accuracy . for simplicity we shall consider that the third body moves into the orbital plane .for the regularization of the equations of motion in the coordinate system , we shall introduce new variables and , conected with the coordinates and by the relations of levi - civita ( see ) : let introduce the generating function ( see , p.196 ) : a twice continuously differentiable function . here and are harmonic conjugated functions , with the property the generating equations are with as new generalized momenta , or explicitly where let introduce the following notation : where represents the transpose of matrix . 
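Before completing the canonical transformation, it may help to see concretely why regularization is needed in practice. The sketch below integrates the unregularized planar circular restricted three-body problem in the standard barycentric rotating frame, written with the Earth-Moon mass ratio q = 0.0123 used in the numerical section below and the mass weights 1/(1+q) and q/(1+q) that appear in the Hamiltonians. The barycentric frame, the initial condition, and the fixed-step RK4 integrator are illustrative choices, not the paper's primary-centred formulation; the point is only that the right-hand side contains terms proportional to 1/r1^3 and 1/r2^3, so the accelerations blow up at close approaches, which is what the coordinate and time transformations introduced next are designed to remove.

```python
import numpy as np

Q = 0.0123                                   # Earth-Moon mass ratio
MU1, MU2 = 1.0 / (1.0 + Q), Q / (1.0 + Q)    # mass weights 1/(1+q), q/(1+q)

def rhs(state):
    """Planar circular restricted three-body problem in the standard
    barycentric rotating frame (illustrative form; the paper centres its
    frames on the primaries).  Primaries sit at (-MU2, 0) and (MU1, 0)."""
    x, y, vx, vy = state
    r1 = np.hypot(x + MU2, y)                # distance to the massive star
    r2 = np.hypot(x - MU1, y)                # distance to the small star
    ax = x + 2.0 * vy - MU1 * (x + MU2) / r1**3 - MU2 * (x - MU1) / r2**3
    ay = y - 2.0 * vx - MU1 * y / r1**3 - MU2 * y / r2**3
    return np.array([vx, vy, ax, ay]), min(r1, r2)

def rk4_step(state, h):
    k1, _ = rhs(state)
    k2, _ = rhs(state + 0.5 * h * k1)
    k3, _ = rhs(state + 0.5 * h * k2)
    k4, _ = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Fixed-step integration: near a primary the acceleration grows like 1/r^2,
# so the step error explodes; this is the practical motivation for a
# fictitious time that slows the motion near the singularities.
state = np.array([0.5, 0.0, 0.0, 1.0])       # arbitrary illustrative start
h, r_min, a_max = 1e-3, np.inf, 0.0
for _ in range(20000):
    deriv, r = rhs(state)
    r_min = min(r_min, r)
    a_max = max(a_max, np.hypot(deriv[2], deriv[3]))
    state = rk4_step(state, h)
print(f"closest approach to a primary: {r_min:.4f}")
print(f"largest acceleration encountered: {a_max:.1f}")
```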
the new hamiltonian with the generalized coordinates and and generalized momenta and is : + \frac{q}{1+q}f- \nonumber \\ & - & \frac{1}{1+q}\cdot \frac{1}{\overline{r}_1 } - \frac{q}{1+q } \cdot \frac{1}{\overline{r}_2}-\frac{q^2}{2(1+q)^2}\end{aligned}\ ] ] where , and and the explicit canonical equations of motion in new variables become : \nonumber\\ \frac{dq_2}{dt}&=&\frac{1}{2d } \left [ 2p_2-\frac{\partial } { \partial q_1 } ( f^2+g^2 ) \right ] \nonumber\\ \frac{dp_1}{dt}&=&- \frac{p_1}{2d } \cdot \frac{\partial } { \partial q_1\partial q_2 } ( f^2+g^2 ) + \frac{p_2}{2d } \cdot \frac{\partial } { \partial q_1 \partial q_1 } ( f^2+g^2 ) - \frac{q}{1+q } \frac{\partial f}{\partial q_1}+ \nonumber \\ & + & \frac{1}{1+q}\cdot \frac{\partial } { \partial q_1 } \left ( \frac{1}{\overline{r}_1 } \right ) + \frac{q}{1+q } \cdot \frac{\partial } { \partial q_1 } \left ( \frac{1}{\overline{r}_2 } \right ) \nonumber \\ \frac{dp_2}{dt}&=&- \frac{p_1}{2d } \cdot \frac{\partial } { \partial q_2\partial q_2 } ( f^2+g^2 ) + \frac{p_2}{2d } \cdot \frac{\partial } { \partial q_2 \partial q_1 } ( f^2+g^2 ) - \frac{q}{1+q } \frac{\partial f}{\partial q_2}+ \\ & + & \frac{1}{1+q}\cdot \frac{\partial } { \partial q_2 } \left ( \frac{1}{\overline{r}_1 } \right ) + \frac{q}{1+q } \cdot \frac{\partial } { \partial q_2 } \left ( \frac{1}{\overline{r}_2 } \right)\nonumber\end{aligned}\ ] ] using levi - civita s transformation ( see relations ( 33 ) ) , the equations ( 39 ) becomes : where , , + with the new hamiltonian to resolve the hamiltonian equations ( [ eq1 - 14 ] ) , we introduce the fictitious time , ( see ) , and making the time transformation , the new regular equations of motion are the explicit equations of motion may be written _ remark _ : it is easy to see that now , the equations of motion have no singularities . for the application of the above problem in a binary system , we can obtain the solution in the form the coordinate transformation in the coordinate system , we introduce the generating function in the plane , in the following form where and are harmonic conjugated functions .the generating equations are or explicitly where let introduce the following notation , ( , p. 
373 ) the new hamiltonian for the case 2 may be written + \frac{q'}{1+q'}f_s- \nonumber \\ & - & \frac{1}{1+q'}\cdot \frac{1}{\overline{r}_{s1 } } - \frac{q'}{1+q ' } \cdot \frac{1}{\overline{r}_{s2}}-\frac{q'^2}{2(1+q')^2}\end{aligned}\ ] ] where , and and the hamiltonian equations in new variables become \nonumber\\ \frac{dq_{s2}}{dt}&=&\frac{1}{2d_s } \left [ 2p_{s2}+\frac{\partial } { \partial q_{s1 } } ( f_s^2+g_s^2 ) \right ] \nonumber\\ \frac{dp_{s1}}{dt}&= & \frac{p_{s1}}{2d_s } \cdot \frac{\partial } { \partial q_{s1}\partial q_{s2 } } ( f_s^2+g_s^2 ) - \frac{p_{s2}}{2d_s } \cdot \frac{\partial } { \partial q_{s1 } \partial q_{s1 } } ( f_s^2+g_s^2 ) - \frac{q'}{1+q ' } \frac{\partial f_s}{\partial q_{s1}}+ \nonumber \\ & + & \frac{1}{1+q'}\cdot \frac{\partial } { \partial q_{s1 } } \left ( \frac{1}{\overline{r}_{s1 } } \right ) + \frac{q'}{1+q ' } \cdot \frac{\partial } { \partial q_{s1 } } \left ( \frac{1}{\overline{r}_{s2 } } \right ) \nonumber \\ \frac{dp_{s2}}{dt}&= & \frac{p_{s1}}{2d_s } \cdot \frac{\partial } { \partial q_{s2}\partial q_{s2 } } ( f_s^2+g_s^2 ) - \frac{p_{s2}}{2d_s } \cdot \frac{\partial } { \partial q_{s2 } \partial q_{s1 } } ( f_s^2+g_s^2 ) - \frac{q'}{1+q ' } \frac{\partial f_s}{\partial q_{s2}}+ \nonumber \\ & + & \frac{1}{1+q'}\cdot \frac{\partial } { \partial q_{s2 } } \left ( \frac{1}{\overline{r}_{s1 } } \right ) + \frac{q'}{1+q ' } \cdot \frac{\partial } { \partial q_{s2 } } \left ( \frac{1}{\overline{r}_{s2 } } \right).\end{aligned}\ ] ] because the singularity of the problem is given by the terms and , we will made a global regularization using the levi - civita s transformation the similar hamiltonian equations are given by where , , + with the new hamiltonian introducing the fictitious time and making the time transformation , the new regular equations of motion are obtained in the form the explicit equations of motion are given by _ remark _ : it is easy to see that the similar equations of motion have no singularities .for the application of the above problem in a binary system , we can obtain the solution in the form transformation of the independent variable is necessary to achieve regularization .it is a slow - down treatment of the physical problem , a new time scale in which the motion slows down .for the numerical integration ( earth - moon binary system ) , considering that the third body moves into the orbital plane ( see ) , we used the initial values : , \;\;q=0.0123 \;.\ ] ] for the numerical integration ( earth - moon binary system ) in the `` similar '' coordinate system we use the initial values ( see eqs . ( 25 ) ) : ,\;\ ; q'=81.30\;.\ ] ] for the numerical integration ( earth - moon binary system ) in the regularized coordinate system ( equations ( 52 ) ) , we use the initial values ( see also eq . ( 33 ) ) : ,\;\ ; q=0.0123 \;,\ ] ] and in the similar regularized coordinate system ( equations ( 54 ) ) : ,\;\ ; q'=81.30\;.\ ] ] in figure 1 we can compare the trajectories of the test particle in the coordinate systems with origin in ( figures _ a _ , _ c _ , _ e _ ) , and ( figures _ b _ , _ d _ , _ f _ ) .the point correspond to the initial conditions .we consider the trajectories given in in figure 6 ( in the coordinate systems and ) and we represented them in the coordinate systems and ( see figure 1 _ a _ and _ b _ ) . 
in this purposewe obtained the initial conditions as follows : and from eqs .( 16)-(17 ) : and and from eqs .( 27)-(28 ) : in order to obtain the initial conditions , when we make the coordinate transformation , we solve the systems : , + ( see eqs .( 33 ) and ( 36 ) ) for the trajectory in coordinate system ( figure 1c ) and , + ( see eqs .( 44 ) for the trajectory in coordinate system ( figure 1d ) .obviously , the initial conditions remain the same if we change the real time _t _ to the fictitious time , but the motion is slowed . in figure 1e and figure 1fwe represented the motion in real time _t _ with thin line and the slowed motion with thick line ( corresponding to the same period of time ) .let us analyze the figures 1a and 1c .for this purpose we consider a point a( ) on the graphic show in figure 1a , and b( ) its corresponding point in figure 1c .we have ( see figure 2a and 2b ) : and it results : .we used the counterclockwise directions for measuring the angles .the levi - civita geometrical transformation originate in the conformal transformation ( see , p.164 ) : where is the physical plane and is the parametric plane . from this relationwe have : , , and .it means that the geometrical transformation squares the distances from the origin and doubled the polar angles .if , having the trajectory in the physical plane , we want to draw the trajectory into the parametric plane , we have to choose a point on the trajectory in plane , measure the angle and the distance , and then draw a half - line in the plane , so as . on this half - line , we have to measure the distance , and obtain the point . than we have to repeat the procedure for .of course the computer will do this better and faster than we can do it , but the above considerations help us to understand what it happened .the vertex of the polar angles have to be centered into the more massive star , so the angles and and respectively and are similar polar angles .so , if we intend to study the regularization of the circular restricted three - body problem using similar coordinate systems , we have to add to similar parameters postulated in section 3 in ( roman , 2011 ) , _ the similar polar angles _ , measured between the abscissa and the half - line passing through the center of the most massive star and the test particle .+ in order to see what is the effect of geometrical transformation , let us analyze the graphics from figure 3 . in figure 3a thereare represented some circles ; their equations are : , where . in figure 3bthere are represented the circles having the equations , where , like geometrical transformation of levi - civita s regularization postulated .one can see that the circles in figure 3b go away from the center and draw near the circle having radius .if in the center of the circles there is a problem ( a singularity ) , it can be easily examined . in figure 3c and figure3d there are represented some half - lines , having equations : , respectively , where , and , like geometrical transformation of levi - civita s regularization postulated .one can see that the half - lines in figure 3d go away from the abscissa s axis .if there is a problem ( a singularity ) on the abscissa s axis , it can be easily examined .there are only two points invariant with respect to the geometrical transformation of levi - civita s regularization : , and , respectively in the similar coordinate systems , and .then , the geometrical transformation go away the trajectory from the points where there are singularities . 
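The geometrical content of the Levi-Civita transformation described above can be checked directly in a few lines: the parametric-to-physical map squares distances from the origin and doubles polar angles, so the inverse construction halves the angle and takes the square root of the distance. The sketch below (with arbitrary illustrative numbers) reproduces the point construction of figure 2 and the behaviour of circles and half-lines discussed around figure 3.

```python
import numpy as np

def parametric_to_physical(q1, q2):
    """Levi-Civita geometric map: distances are squared and polar angles
    doubled, i.e. z = w**2 with w = q1 + i q2 and z = x + i y."""
    z = (q1 + 1j * q2) ** 2
    return z.real, z.imag

def physical_to_parametric(x, y):
    """Inverse construction used to redraw a physical trajectory in the
    parametric plane: halve the polar angle and take the square root of
    the distance (principal branch)."""
    w = np.sqrt(x + 1j * y)
    return w.real, w.imag

# Point construction: a point A at distance r and polar angle alpha (measured
# from the more massive star) maps to B at distance sqrt(r) and angle alpha/2.
r, alpha = 0.8, np.deg2rad(140.0)            # illustrative values
q1, q2 = physical_to_parametric(r * np.cos(alpha), r * np.sin(alpha))
print("|B| =", round(float(np.hypot(q1, q2)), 4),
      " expected sqrt(r) =", round(float(np.sqrt(r)), 4))
print("angle(B) =", round(float(np.degrees(np.arctan2(q2, q1))), 2),
      " expected", round(np.degrees(alpha) / 2, 2), "deg")

# Circles r = const map to circles of radius r**2, and half-lines at angle
# theta map to half-lines at angle 2*theta, as in the discussion of Fig. 3.
theta = np.linspace(0.0, 2.0 * np.pi, 181)
for rad in (0.2, 0.5, 0.9):
    x_img, y_img = parametric_to_physical(rad * np.cos(theta), rad * np.sin(theta))
    print(f"circle of radius {rad} maps to radius {np.hypot(x_img, y_img).max():.3f}")
```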
As concerns the time transformation, as one can see in figures 1e and 1f, the role of this transformation is to slow down the motion of the test particle. The thin line represents the trajectory of the test particle when the integration time is , and the thick line represents the trajectory when the integration time is . As one can see, after we are still far away from the point where a singularity is possible if the coordinate system has its origin in , but not so far away if the origin of the coordinate system is in . This paper continues the study of the relation of similarity, postulated in , by applying it to the Levi-Civita regularization of the equations of motion of the test particle in the circular restricted three-body problem. Many papers in the last decade have studied the restricted three-body system in a phase space. During these studies, difficulties have arisen when the system approaches a close encounter. Using the regularization method in the similar coordinate system, we give explicit equations of motion for the test particle. We study numerically the regular equations of motion, written in canonical form, and find that integrators using the regularized equations of motion are more efficient. The similar Hamiltonian (see eq. (22)) gives us the similar canonical equations (27)-(32), which have some signs different from the canonical equations (16)-(21). The coordinate transformation used in the Levi-Civita regularization creates a new form of the similar Hamiltonian equations (eqs. ). Finally, the time transformation used in the Levi-Civita regularization gives us the regularized equations of motion (51) in the coordinate system with origin in , and (54) in the similar coordinate system.
Birkhoff, G.D.: Rend. Circ. Mat. Palermo, 39, 1 (1915)
Boccaletti, D., & Pucacco, G.: Theory of Orbits, Vol. 1, Springer-Verlag, Berlin Heidelberg New York (1996)
Castilho, C., & Vidal, C.: Qual. Theory Dyn. Syst., 1, 1 (1999)
Celletti, A., Stefanelli, L., Lega, E., & Froeschlé, C.: Celest. Mech. Dyn. Astron., 109, 265 (2011)
Csillik, I.: Regularization Methods in Celestial Mechanics, House of the Book of Science, Cluj (2003)
Érdi, B.: Celest. Mech. Dyn. Astron., 90, 35 (2004)
Jiménez-Pérez, H., & Lacomba, E.: J. Phys. A, 44, 265 (2011)
Kopal, Z.: Dynamics of Close Binary Systems, Reidel, Dordrecht (1978)
Levi-Civita, T.: Acta Math., 30, 305-327 (1906)
Mikkola, S., & Aarseth, S.J.: Celest. Mech. Dyn. Astron., 64, 197 (1996)
Mioc, V., & Csillik, I.: RoAJ, 12, 167 (2002)
Roman, R.: doi:10.1007/s10509-011-0747-1 (2011)
Stiefel, E.L., & Scheifele, G.: Linear and Regular Celestial Mechanics, Springer, Berlin (1971)
Szebehely, V.: Theory of Orbits, Academic Press, New York (1967)
Szebehely, V.: Regularization in Celestial Mechanics, in Lecture Notes in Mathematics, Vol. 461, pp. 257-263 (1975)
Waldvogel, J.: Celest. Mech., 6, 221 (1972)
Waldvogel, J.: Celest. Mech., 28, 69 (1982)
Waldvogel, J.: Celest. Mech. Dyn. Astron., 95, 201 (2006)
The regularization of a new problem, namely the three-body problem using a ``similar'' coordinate system, is proposed. For this purpose we use the relation of similarity, which was introduced as an equivalence relation in a previous paper (see ). First we write the Hamiltonian function and the equations of motion in canonical form; then, using a generating function, we obtain the transformed equations of motion. After the coordinate transformations, we introduce the fictitious time to regularize the equations of motion. Explicit formulas are given for the regularization in the coordinate systems centered on the more massive and on the less massive star of the binary system. The definition of the similar polar angle is introduced in order to analyze the geometrical transformation involved in the regularization. The effect of the Levi-Civita transformation is described in a geometrical manner. Using the resulting regularized equations, we analyze and compare these canonical equations numerically for the Earth-Moon binary system.
polarimetric wide - field imaging is a recent and important trend in radio astronomy .many new and planned radio telescopes such as lofar , lwa , mwa , and ska have a wide field - of - view ( fov ) as a major feature .technologically , this has been made possible by the development of phased dipole arrays and focal plane arrays , both of which have an inherently wider fov than traditional single - pixel radio telescopes .the motivation for wide fov polarimetric radio telescopes in astronomy is that they facilitate the study of polarized phenomena not restricted to narrow fields such as the large , highly - structured , polarized features discovered in recent polarimetric galactic surveys . with lofar now producing its first _ all - sky _ images , a new era of polarimetric wide - field imaging is starting in radio astronomy . despite this trend ,a complete theory for wide - field , polarimetric astronomical interferometry is lacking . ultimately , classical interferometry is based on the far - zone form of the van cittert - zernike ( vc - z ) theorem which, in its original form , is a scalar theory and only valid for narrow fields .such restrictions are obviously unacceptable in astronomy where polarization often provides crucial astronomical information , and sources are distributed on the celestial sphere , not necessarily limited to small patches . in radio astronomy , a polarimetric , or vector ( i.e. non - scalar ) , extension of the narrow - field vc - z theoremwas given by , and more recently this was given a consistent mathematical foundation by in the form of the so - called _ measurement equation _( m.e . ) of radio astronomy. on the other hand , a wide - field extension of the scalar vc - z theorem for radio astronomy was given by , and more recently , imaging techniques for scalar , wide - fields have been an active field of research .however , a generalised vc - z theorem for radio astronomy that is both polarimetric and wide - field has not been derived. this may be because it has been assumed that such a theorem would be a trivial vector- or matrix - valued analogue of the wide - field , scalar theory . as we will see , however , the final result is not that simple .the purpose of the present work is therefore to derive a vc - z type relation by generalising the standard m.e .formalism to allow for arbitrarily wide fields .this is achieved by generalising the two - component jones formalism to a three - component wolf formalism . inwhat follows , we will derive a vc - z relation that is valid for the entire celestial sphere and is fully polarimetric .we will show that the standard m.e . can be recovered through a two - dimensional projection .we will also show that a dual - polarized interferometer of short electric dipoles ( hertzian dipoles ) is inherently aberrated polarimetrically .we also extend our vc - z relation to include not only the full electric field , but also the full magnetic field , and thereby establish the complete second - order statistical description of the electromagnetic relation between source terms and interferometer response .our wide - field vc - z relations should be of relevance to the subject of _ direction - dependent effects _ in radio interferometric imaging , which has attracted recent attention .the vc - z theorem as used in astronomy is different from its original use in optical coherence . 
in astronomy one images sources on the celestial sphere based on localised measurements of their far - fieldsthis is possible because the vc - z theorem provides an explicit relationship between the visibility measured directly by an interferometer and the brightness distribution of the sources ( * ? ? ?* chap 14 ) .this makes the vc - z relationship the foundation of synthesis mapping and interferometric imaging . despite its importance, the vc - z relations in use in astronomy are all either based on the paraxial approximation ( narrow fov ) or they take the source emissions to be scalar .these simplifications are questionable when the sources cover wide fields or are highly polarized , as is often the case in radio astronomy . herewe derive the full vector electromagnetic analogue to the wide - field , scalar vc - z relation derived in ( * ? ? ? * chap 14 ) , and , as we will see , the final result is not simply a matrix- or vector - valued version of the scalar relation .we seek a relationship between the electromagnetic field coherence of a distribution of radio astronomical sources and the resulting electromagnetic field coherence at a radio astronomical interferometer . to simplify the discussion, we will use the term _interferometer _ as a shorthand for radio astronomical interferometer .the treatment is intended for earth - based interferometer observations , but it also has space - based radio astronomical interferometry in mind . consider the problem illustrated in fig .[ fig : coordsys]a ) .radio emissions from far away sources , such as the point source , are measured by an interferometer located in the domain .we want to establish a relationship between the coherence of the electric field emanating from a source distribution and the coherence of the electric field in .let us first consider a single point source located at .the source is thus at a distance in the direction given by the unit vector ( direction cosines ) with respect to the origin approximately at the centre of .the source is within the interferometers fov which is centred on point on the celestial sphere .the electric field from is measured by the interferometer at pairs of positions and in . is assumed to be bounded , so the maximum distance between any two measurement points ( maximum baseline ) is finite .although the source emission may be broadband we split it into narrow , quasi - monochromatic , spectral bands and consider a typical narrow ( bandwidth much smaller than centre frequency ) band centred on frequency .the assumption that is very far away , which quantitatively we take to mean that for all , implies that the entire interferometer is in the far - field of the source .this means that the electric field at point at time is where and is the complex electric field amplitude vector emitted by the source at in the direction of . furthermore , in the far - field , is approximately transverse so that if we further assume that the angular extent of as seen from is small , that is then the interferometer is in the fraunhofer far - field of the sources . in this casewe can use the approximations and to simplify the expression for the electric field in to where here we have dropped the first argument of since we are assuming that the angular variation of the sources emissions is small enough so that the approximation implies that . 
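As a quick numerical illustration of the Fraunhofer condition invoked above (with made-up numbers, since the text's symbols and bounds are not reproduced here): for an array of maximum extent D observing at wavelength lambda, the far-field form is safe when the source distance greatly exceeds roughly D^2/lambda, a margin that astronomical distances exceed by many orders of magnitude.

```python
# Illustrative Fraunhofer-zone check: the far-field condition requires the
# source distance r to be much larger than ~ D**2 / lam, where D is the
# maximum baseline and lam the observing wavelength.  The numbers below are
# arbitrary examples, not values from the text.
D = 100e3          # maximum baseline: 100 km
lam = 2.0          # observing wavelength: 2 m (about 150 MHz)
r_fraunhofer = D**2 / lam
parsec = 3.086e16  # metres
print(f"D^2/lambda = {r_fraunhofer:.3e} m = {r_fraunhofer / parsec:.2e} pc")
# Even the nearest stars (about 1 pc away) exceed this scale by more than
# six orders of magnitude, so the Fraunhofer far-field form is safe.
```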
a similar expression to equation ( [ eq : diselcontribvecapprox ] ) for the field at obtained by replacing with and with .the electric coherence matrix ( tensor ) can then be found by taking the outer product of the electric fields where denotes complex conjugation , denotes time averaging , and the subscripts label cartesian components , which we will define more precisely later .we have written the electric coherence matrix , equation ( [ eq : ecohmat ] ) , as a complex matrix even though , for the single point source we are considering here , its rank is two and could therefore be expressed as a complex matrix .we keep the electric coherence matrix as a complex matrix since it is valid even when there are more than one point source .we now move to the case of a finite number of point sources .the total electric field measured at points and is now the sum of the fields from sources in directions for .the electric coherence matrix is therefore for . in going from the double to the single sum we used the usual vc - z assumption that the sources are spatially incoherent , that is , sources in different directions are statistically independent . until now we have considered only discrete sources .we can make the transition to the more general continuum source distribution by introducing the three - dimensional brightness matrix as a function of direction in a continuous source distribution where is the infinitesimal area of the source distribution .the superscript is to highlight the fact that is three - dimensional as opposed the usual two - dimensional brightness matrix .the reason that the brightness matrix here is three - dimensional is simply because the full electric field amplitude is three - dimensional .however , due to equation ( [ eq : transcondapprox ] ) , not all the components of are arbitrary . in fact , we will show that it can be recast as one two - dimensional matrix . by making the replacement ( [ eq : defbrithree ] ) in equation ( [ eq : discretesrcecorr ] ), we obtain where is an infinitesimal solid angle of the source distribution in . in this expressionwe see that the dependence on the pairs of position is only relative , that is , the coherence matrix depends only on , and so we recast the expression in terms of the vector also known as the baseline vector measured in wavelengths ( ) .in practice , rather than use directly , it is convenient and conventional to put the phase reference point at the centre of the fov given by the direction .the result of the change in phase , is the 3 generalisation of the standard 2 visibility matrix as defined by , for instance , . is a general complex matrix except for where it is hermitian .it fulfill the symmetry relation where stands for hermitian transpose .if we use in equation ( [ eq : continumize ] ) we arrive at equation ( [ eq : vcz3dimpl ] ) is a matrix version of the wide - field , scalar vc - z relation and so includes the partial polarization of the ( non - scalar ) source distribution . 
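The discrete-source step above can be mimicked numerically: for spatially incoherent sources the cross terms between different directions average away, so the electric coherence matrix for a pair of measurement points reduces to a single sum of per-source outer products weighted by baseline-dependent phase factors. Everything below (directions, amplitudes, baseline, wavelength) is an arbitrary illustration; the transversality of each source's far field is imposed by projecting out the radial component.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 1.0                                     # wavelength (arbitrary units)
r1 = np.array([0.0, 0.0, 0.0])                # first measurement point
r2 = np.array([120.0, 40.0, 10.0])            # second measurement point

# Illustrative source directions (unit vectors) and random complex amplitudes;
# transversality e . s = 0 is enforced by projection onto the tangent plane.
dirs = np.array([[0.0, 0.0, 1.0],
                 [0.3, 0.1, np.sqrt(1 - 0.3**2 - 0.1**2)],
                 [-0.2, 0.4, np.sqrt(1 - 0.2**2 - 0.4**2)]])
amps = []
for s in dirs:
    a = rng.normal(size=3) + 1j * rng.normal(size=3)
    amps.append(a - (a @ s) * s)              # project out the radial part

# Spatial incoherence: different sources do not interfere on average, so the
# coherence matrix is a single sum over sources of outer products times a
# baseline-dependent geometric phase factor.
V = np.zeros((3, 3), dtype=complex)
for s, a in zip(dirs, amps):
    phase = np.exp(-2j * np.pi * ((r2 - r1) @ s) / lam)
    V += np.outer(a, np.conj(a)) * phase

print("3x3 electric coherence matrix for baseline r2 - r1:")
print(np.round(V, 3))
# With three (or more) such sources the matrix is generically of rank 3:
# unlike the per-source brightness, the summed visibility carries no
# transversality-type constraint.
print("rank:", np.linalg.matrix_rank(V))
```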
however , it is not very useful in this form since it does not automatically fulfill the constraint ( [ eq : transcondapprox ] ) .when applied to , this constraint becomes for all directions , where is understood to be a column vector .the constraint can , however , easily be removed if we express in terms of spherical base vectors and set the radial component to zero .this leads us to introduce two coordinate systems : a cartesian and a spherical .we use the angular , or tangential , basis set of a spherical polar coordinate system as the basis for the polarization of the transverse field of the source distribution on the celestial sphere , see fig .1b ) . to simplify the results , the spherical coordinate system is taken relative to the phase reference position of the interferometer assumed to be at . in other words , the zero point of the spherical system , , ( intersection of the equator and central meridian ) , is taken to coincide with is the angle from the equator ( positive in the hemisphere with the pole and negative in the other hemisphere ) , and is the position angle from the central meridian around in the anticlockwise sense looking along . in lieu of any other reference directions ,the orientation of the spherical system around is arbitrary , but for earth - based measurements could be directed towards the north pole or zenith .one should note that are consistent with ludwig s second definition as detailed in with the understanding that the antenna boresight in is here at , that ludwig s reference polarization unit vector is here , and ludwig s cross polarization unit vector is here .see for the use of ludwig s third definition in a vc - z relation .the cartesian system , with base vectors , is defined so that is in the direction of , and is in the direction of the pole . in terms of the cartesian system , we can explicitly write the components of the vectors in equation ( [ eq : vcz3dimpl ] ) as where and where the superscript stands for vector transpose .all these vectors are unit vectors with real - valued components ranging between and .these definitions of the and spaces are the same as the usual definitions for earth - based observations , see e.g. , .also the matrices in equation ( [ eq : vcz3dimpl ] ) are to be considered in what follows as being expressed in the cartesian system .the relationships between the spherical and cartesian systems base vectors are using these spherical and cartesian systems we can express the three - dimensional transverse electric field as where is the jones vector in spherical ( rather than the usual cartesian ) components and is the transformation matrix between the components given by the equations ( [ eq : phixyz ] ) and ( [ eq : thetaxyz ] ) .note that this transformation is possible for all directions on the celestial sphere except for , i.e. 
, the poles of the spherical coordinate system .if one wishes to use to a polar spherical system in which the centre of the fov is not on the equator , as it is here , but rather at some declination , then one simply replaces with defined in appendix [ sec : coordtrans ] , equation ( [ eq : sph2cart_gen ] ) .this assumes that the pole is towards the earth s north pole .it is easy to show that so the transverse electric field expressed according to equation ( [ eq : cartastrans ] ) does indeed fulfill equation ( [ eq : transcondapprox ] ) .so if we use rather than in equation ( [ eq : vcz3dimpl ] ) we would have an unconstrained vc - z equation .we can introduce this replacement by rewriting using equation ( [ eq : cartastrans ] ) , so where , suppressing the dependence on , is the 2 brightness matrix , but in spherical rather than cartesian coordinates . by this we mean that , for an arbitrary direction , is locally equivalent to the usual paraxial brightness matrix in cartesian coordinates . from its definition it easy to see that is a hermitian matrix . by using the and spaces , as spanned by the vectors , and , we can write equation ( [ eq : vcz3dimpl ] ) in a more explicit form . the exact form , though , depends on the extent of .if it is entirely in the hemisphere , then we write and }}{n } \,\mathrm{d}l\mathrm{d}m , \label{eq : vcz3d}\ ] ] where we have used , and where the matrices depend implicitly on and .if , however , part of is in the hemisphere , then we must add to equation ( [ eq : vcz3d ] ) the contribution from this hemisphere given by the integral }}{|n| } \,\mathrm{d}l\mathrm{d}m , \label{eq : vcz3dmin}\ ] ] where is the subset of in the hemisphere .the image horizon , , can also be included by reparametrising the integral in terms of rather than and using the replacement .the poles can also be imaged , but one must then stipulate the orientation of the and vectors at these singular points .now , by extending to cover the entire hemisphere and extending to cover the entire hemisphere , the entire celestial sphere can be imaged in a single telescope pointing .an assumption here is that is a proper three - dimensional volume , or in other words , the baselines should be non - coplanar . if is just a plane ( coplanar baselines )then only one hemisphere can be mapped uniquely .equation ( [ eq : vcz3d ] ) is our main result .it says that the full electric visibility matrix on the three - dimensional space is given by the brightness matrix on the two - dimensional plane .the fact that this is a relationship between two matrices with different matrix dimensions is a fundamental feature of our vc - z , and makes it clear that it is not just a matrix - valued generalisation of the wide - field , scalar vc - z .mathematically , this is ultimately due to the ranks of the fundamental matrices , of which we will speak more in section [ sec : stokesparams ] .the remaining vc - z relationships that form a complete characterisation of the electromagnetic coherence response of a radio astronomical interferometer are given in appendix [ emvcz ] .in the previous section we derived a vc - z relation , equation ( [ eq : vcz3d ] ) , in which the visibility matrix is determined from the brightness matrix .it is well known that the original vc - z theorem ( far - zone form ) for narrow - fields states that there is a two - dimensional fourier transform relationship between visibility and brightness . 
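Before continuing with the inversion, the forward relation just derived can be evaluated numerically once the 3x2 transformation matrix between the tangential (spherical) polarization basis and the Cartesian frame is in hand. The explicit component formulas are not reproduced in the extracted text, so the helper below constructs the tangential basis numerically, with the field centre taken along the z axis and the pole along the y axis as an assumed orientation; the unpolarized Gaussian brightness is likewise only a stand-in. Each integrand term satisfies the transversality constraint by construction.

```python
import numpy as np

def tangential_basis(s, pole=np.array([0.0, 1.0, 0.0])):
    """3x2 matrix whose columns are orthonormal tangential unit vectors at
    the direction s, ordered so that at the field centre s0 = (0, 0, 1) they
    coincide with the Cartesian x and y axes.  The pole choice and signs are
    illustrative stand-ins for the paper's spherical system."""
    e1 = np.cross(pole, s)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(s, e1)
    return np.column_stack([e1, e2])

def visibility(u, v, w, brightness, n_grid=64):
    """Discretized wide-field relation: sum over the visible hemisphere of
    T B2 T^T exp[-2 pi i (u l + v m + w (n - 1))] dl dm / n, with
    n = sqrt(1 - l^2 - m^2) and the field centre assumed along z."""
    ls = np.linspace(-0.95, 0.95, n_grid)
    dl = ls[1] - ls[0]
    V = np.zeros((3, 3), dtype=complex)
    for l in ls:
        for m in ls:
            n2 = 1.0 - l * l - m * m
            if n2 <= 0.0:
                continue
            n = np.sqrt(n2)
            s = np.array([l, m, n])
            T = tangential_basis(s)
            B2 = brightness(l, m)                 # 2x2 brightness matrix
            kern = np.exp(-2j * np.pi * (u * l + v * m + w * (n - 1.0)))
            V += T @ B2 @ T.T * kern * dl * dl / n
    return V

# Unpolarized, Gaussian-shaped brightness as a simple stand-in distribution.
bright = lambda l, m: 0.5 * np.exp(-(l**2 + m**2) / 0.1) * np.eye(2)
print("V(10, 5, 2) =\n", np.round(visibility(10.0, 5.0, 2.0, bright), 4))

# Transversality check for a single direction: s^T T = 0 exactly.
s = np.array([0.3, 0.2, np.sqrt(1 - 0.13)])
print("s^T T =", np.round(s @ tangential_basis(s), 12))
```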
in astronomical interferometry the transform aspect of this relationship is exploited to produce brightness images from measured visibility .the wide - field vc - z , equation ( [ eq : vcz3d ] ) , is not a two - dimensional fourier transform , but , as we will now show , it is still possible invert it and thereby establish a sort of generalised transform .first we should state that there are several ways of expressing in terms of , even though the wide - field vc - z relation , equation ( [ eq : vcz3d ] ) , is a one - to - one relationship in general .this is because has redundancies , as one can expect considering the asymmetry in the respective matrix dimensions of and .so although the rank of is three in general , it is overdetermined if is known .we will first derive a solution that is valid for the entire celestial sphere based on the full .the case when only a projection of is available will be discussed in section [ sec : dual - pol - int ] .consider that we are given the full and that we would like to solve equation ( [ eq : vcz3d ] ) for . by extending the solution of the scalar problem described in to the three - dimensional matrix relationship in equation ( [ eq : vcz3d ] ) we find that one approximate solution is } \ , \mathrm{d}u\mathrm{d}v\mathrm{d}w \mathrm{d}n .\label{eq : vcz3dinv}\end{aligned}\ ] ] where is the space spanned by for and . from , we can find a solution for the two - dimensional brightness matrix , an analogous expression applies for but with the integration over running from to rather than to . for ,an expression can be obtained by using the parametrised vc - z mentioned in the previous section .we can now state the generalised transform as where reads `` is wide - field , polarimetric vc - z related to '' .the relation from brightness matrix to visibility matrix is given ( for ) by equation ( [ eq : vcz3d ] ) , and the relation from visibility matrix to brightness matrix is given by equations ( [ eq : bri2frombri3 ] ) and ( [ eq : vcz3dinv ] ) .note that the relation is not simply a two - dimensional fourier transform as in the narrow - field case .furthermore , the difference in the dimensionality of and make it clear that equation ( [ eq : vcz3drel ] ) can not simply be a matrix generalisation of the scalar vc - z relation , as is sometimes assumed .in practice it is common to use stokes parameters to characterise the brightness and visibility matrices over narrow fields . let us see how stokes parameters can be applied to the wide - field vc - z relation , equation ( [ eq : vcz3d ] ) .let us first consider brightnesses .the difference between the two - dimensional brightness matrix in equation ( [ eq : vcz3d ] ) and the usual two - dimensional brightness matrix in the paraxial approximation is that the former is defined on a spherical domain while the latter is defined on a cartesian plane .locally , for a source at some , the two brightness matrices can be made equal .thus we can define the stokes parameters in terms of the components of with respect to a spherical basis in analogy with cartesian basis as when the spherical system is aligned as it is in section [ sec : wolfvcz ] such that the intersection of its central meridian and its equator is located at the centre of the fov and the pole is towards the earth s north pole , then in a sufficiently narrow field around the centre of the fov the stokes brightnesses are approximately equal to the stokes parameters of the iau . 
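As a concrete aside, the correspondence between the four Stokes brightnesses and the 2x2 brightness matrix can be written out. The sketch below uses the common parameterization B = (1/2)[[I+Q, U+iV], [U-iV, I-Q]]; since the paper modifies the overall sign of some of the associated Pauli-type matrices and that detail is not recoverable from the extracted text, the signs attached to U and V here should be read as an assumption. The round trip confirms that, per direction, the four Stokes parameters carry exactly the four degrees of freedom of the Hermitian brightness matrix.

```python
import numpy as np

def brightness_from_stokes(I, Q, U, V):
    """Standard 2x2 coherency/brightness matrix from Stokes parameters.
    NOTE: the paper flips the sign convention of some of the associated
    Pauli-type matrices; the signs of U and V here are an assumption."""
    return 0.5 * np.array([[I + Q, U + 1j * V],
                           [U - 1j * V, I - Q]])

def stokes_from_brightness(B):
    """Inverse map: the four real Stokes parameters of a Hermitian 2x2
    brightness matrix."""
    I = np.real(B[0, 0] + B[1, 1])
    Q = np.real(B[0, 0] - B[1, 1])
    U = np.real(B[0, 1] + B[1, 0])
    V = np.imag(B[0, 1] - B[1, 0])
    return I, Q, U, V

# Round trip: four Stokes parameters <-> one Hermitian brightness matrix.
stokes_in = (1.0, 0.3, -0.2, 0.1)
B2 = brightness_from_stokes(*stokes_in)
print("Hermitian:", np.allclose(B2, B2.conj().T))
print("recovered Stokes:", np.round(stokes_from_brightness(B2), 6))
```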
if one wishes to conform with iau stokes parameters over the entire celestial sphere , then one can use equation ( [ eq : stokesbright ] ) with replaced by , equation ( [ eq : sph2cart_gen ] ) , where is the declination of the centre of the fov .the correspondence is then that is the iau s , and is the iau s .note however that the iau basis set for polarimetry is not the same as the cartesian basis set widely used in radio interferometry for defining source directions and baselines , which in this paper is denoted . in adopting both these systems , therefore, a sacrifice must be made , and we have chosen to slightly modify the usual pauli matrices that are used to relate the brightness matrix to the stokes parameters . explicitly , the brightness matrix in terms of the stokes parameters in equation ( [ eq : stokesbright ] ) is in this paper as can be seen , the expansion of this expression into unitary matrices , as in ( * ? ? ?6 ) , leads to matrices equivalent to pauli matrices but with a change of overall sign for the matrices associated with and . in terms of equation ( [ eq : stokesbright ] ) , we can write the three - dimensional brightness matrix in equation ( [ eq : vcz3d ] ) as where it is easy to see that the for depend only on , and that they become the three - dimensional analogs of the equivalent pauli matrices in equation ( [ eq : britwoinstokes ] ) . equation ( [ eq : b3instokes ] ) shows that for every look - direction , the three - dimensional brightness matrix has four degrees of freedom , here expressed as the four stokes parameters . in other words ,the four stokes parameters completely characterise the partially polarized brightness also for wide fields , under the assumptions made in the derivation of equation ( [ eq : vcz3d ] ) ) . now let consider if stokes visibilitiescan be extended to wide fields .a consequence of the three - dimensionality of in equation ( [ eq : vcz3d ] ) is , however , that the stokes parameters can not in general fully characterise the electric visibility .this is because , in contrast to , which has a rank of at most two due to the transversality condition ) implies that has rank two comes from the rank - nullity theorem of linear algebra , since equation ( [ eq : b3constr ] ) implies that dimension of the null space is one and the matrix dimension of is three , so is the rank of . ] , equation ( [ eq : b3constr ] ) , has no similar constraint .indeed one can convince oneself of the full rank of by considering two distinct point sources : the weighted sum of their matrices at some point according to equation ( [ eq : vcz3d ] ) , will in general be rank three .as there are only four stokes parameters , albeit complex - valued in the case of visibilities , they can not fully parametrise a rank three matrix .alternatively , rather than use the standard four stokes parameters , one could also use complexified versions of the nine , real , generalised stokes parameters .these parameters are analogous to the standard stokes parameters but can completely describe the coherence of the full three - dimensional electric field . in light of the discussion above , these generalised stokes parameters are particularly suitable for parametrising , but we will not discuss them here any further .the vc - z theorem is a basic , fundamental physical relationship is independent of technology .the measurement equation ( m.e . 
) of radio astronomy , on the other hand , includes practical aspects of telescope measurements , in particular the instrumental response of the telescope .usually it is a relationship between a 2 brightness matrix and a 2 cross - correlation matrix of the output - voltage of a dual - polarized interferometer . since in the past such two - dimensional m.e .have been tacitly based on the paraxial approximation valid only for narrow fields , it is important to verify that it can be recovered from the wide - field , polarimetric vc - z , equation ( [ eq : vcz3d ] ) , for which the paraxial approximation is not used . although it is possible to do this in a simple , straightforward way , we choose to do it in a more detailed way , introducing a formalism that extends the usual two - dimensional , electric field based model of radio astronomical antenna response . to obtain a 2-d m.e .we must first introduce a formalism for converting the full electric field to a voltage in the interferometer antenna .an electric field at an antenna excites an open circuit voltage .assuming linearity , these two quantities are related as where is the antenna effective length vector . in generalit is a function of incidence direction , i.e. , but here we will only use ideal , hertzian dipole antennas ( short electric dipoles ) .these have the important property that their effective length does not depend on incidence direction , is just a constant unit vector , and so it directly samples the component of the electric field along its length .if we have co - located antennas that have no mutual coupling , their output voltages can be written in a matrix form where the -th row in the matrix contains the components of antenna effective length vector .the matrix of antenna effective lengths , denoted , has physical dimension length and is a -dimensional extension of the matrix in ( * ? ? ? * eq .( 2 ) ) , for which and thus models dual - polarized antennas .when and the antennas are linearly independent , then they sample the full three - dimensional electric field .such an antenna system is called _ tri - polarized _ antenna in general , and tripole antenna if the three dipoles are approximately mutually orthogonal .one can also include antennas that sample the magnetic field , and an arrangement of electric and magnetic antennas can be constructed so as to sample the full electromagnetic field at a point .such antennas are called electromagnetic vector - sensors . in what follows, we will only be interested in dual- or tri - polarized antenna systems . in particular ,let us consider a dual - polarized antenna that consists of two co - located , non - mutually coupled dipole antennas , one aligned along and the other along .the response of such a dual - polarized antenna is where is the antenna effective length .we have changed the subscripts on the voltages to reflect the right - hand side of the equation , in other words , for this dual - polarized antenna , each voltage component is directly proportional to a unique cartesian component of the electric field regardless of the radiations incidence angle .because of this property , it can be regarded as an ideal dual - polarized polarimeter element . 
associated with each dual - polarized antenna elementis a two - dimensional plane in which the polarization is defined and measured .if this plane is the same for all of the elements in a dual - polarized interferometer or if all the planes are mutually parallel ( common design goal for polarimetric interferometers ) we will say that such an interferometer is _ plane - polarized _ , in analogy with plane - polarized waves .note that a plane - polarized interferometer is not necessarily a co - planar interferometer , and that a non - plane - polarized interferometer may be co - planar .if the plane of a plane - polarized interferometer is to be specified explicitly , we will say , e.g. in the case of equation ( [ eq : ideal2dpolmeter ] ) , that it is -polarized .although most existing polarimetric interferometers in radio astronomy are intended to be plane - polarized , it is possible to have more general antenna elements such as electromagnetic vector - sensor arrays or electric tripole arrays .an real - life example of the latter is the lois test station , see .the prime motivation for such a tri - polarized system is that it samples the full electric field in a single telescope pointing rather than just a projection .now that we have introduced a model formalism for antenna response we can derive the 2-d m.e .for the special but important case of the -polarized ( hertzian dipole ) interferometer , that is , the polarization plane is normal to the centre of the fov .the output voltages from the -polarized elements located at points and are cross - correlated and the result can be expressed as the correlation matrix for , where the arguments 1 and 2 refer to baseline points and . is matrix multiplied from the left and its transpose from the right with , but since it does not depend on in this case , it can be pulled out of the integral in ( [ eq : vcz3d ] ) , and so the correlator output can be written where is the -polarized antenna elements , effective length matrix .as one can see , the effect of is equivalent to projecting the field vectors into the -plane and multiplying by . in terms of the brightness matrixthe correlator output is }}{\sqrt{1-l^{2}-m^{2 } } } \,\mathrm{d}l\mathrm{d}m \label{eq : meuv}\end{aligned}\ ] ] where we have introduced the -projected transformation matrix to simplify the final result , which is now clearly two - dimensional .equation ( [ eq : meuv ] ) is a wide - field m.e .but with the novel jones matrix that physically represents a projection of the three - dimensional electric field vector onto the -plane . in the narrow fov limit ,so we can approximate in equation ( [ eq : meuv ] ) up to first order in and by the two - dimensional unity matrix , and so this is the basic m.e . of astronomical interferometry ,see ( * ? ? ?* eq . ( 14.7 ) ) .thus we have shown that the wide - field , polarimetric vc - z , equation ( [ eq : vcz3d ] ) indeed reduces to the usual two - dimensional , jones vector based m.e . in the paraxial limit .the result , equation ( [ eq : meparaxial ] ) , depended on the particular the spherical coordinate system used as default in this paper .only this particular choice reduces directly to the standard 2-d m.e . in the paraxial limit .to see what the dual - polarized m.e .equation ( [ eq : meparaxial ] ) misses by not including the third dimension along , let us go back to equation ( [ eq : vcz3d ] ) and let assume the paraxial approximation , . in this case where we have kept only terms of first order in and . 
in equation ( [ eq : vcz3d ] ) is identical to in equation ( [ eq : meparaxial ] ) for , but for the components , and so the component of the three - dimensional visibility matrix does not provide anything , but the ( and ) and the ( and ) components do .thus even for a narrow fov , a dual - polarized interferometer does not measure the full set of generally non - zero electric visibilities .note that if the antenna array is not exactly plane - polarized , these additional visibility components will contribute to the output - voltage of such an array .this leads to the important question of how serious the loss of visibility information is in a dual - polarized interferometer .more specifically , we ask whether a plane - polarized interferometer , given by some matrix , can in general perform full polarimetry of an arbitrary source distribution .the answer is that the source brightness matrix in some direction can be determined fully only if since in this case is invertible .for the special but important case this condition is equivalent to thus , an -polarized interferometer can recover the full polarimetry except on the great circle orthogonal to , which we may call the imaging horizon .so , in wide - field imaging with -polarized interferometers the image horizon can not be measured with a single telescope pointing .under noise - free conditions , this would pose little problem , but if we add in the effects of noise then the great circle of directions for which full polarimetry is not feasible broadens as a function of the signal - to - noise ratio .although the discussion above was mainly focused on plane - polarized interferometers , the three - dimensional formalism developed here can also be applied to more general dual - polarized interferometers .in particular it can model the situation when the polarization planes of dual - polarized antennas in an array are not all parallel .since several large arrays of dual - polarized antenna based interferometers are currently being planned for , this would address the important question of whether such arrays should strive to be plane - polarized or whether they should purposely not be plane - polarized to minimize the inversion problems mentioned above .we now show that the m.e . for the -polarized , hertzian dipole interferometer , equation ( [ eq : meuv ] ) ,exhibits distortions that depend on the look - direction , that is , the images based on these brightnesses contain polarization aberrations .this agrees with the general understanding in observational radio astronomy that the polarimetry of a telescope is worse off - axis than on - axis .say we have measured the correlation matrix .we can not use the formal solution ( [ eq : vcz3dinv ] ) directly since it is for the 3 visibility , and it is not clear how to obtain the remaining , unmeasured components from the and components of . 
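The projection behaviour and the invertibility condition discussed above can be checked numerically. The sketch below is an illustration only: the tangent basis is one convenient choice that coincides with x and y at the field centre and is not necessarily the exact spherical convention adopted in the text, but for any orthonormal tangent basis the magnitude of the determinant of the 2x2 projection equals n = sqrt(1 - l^2 - m^2), so the projection tends to the identity in the paraxial limit and becomes singular on the imaging horizon.

```python
import numpy as np

# Sketch (not the text's exact convention): project the transverse-field basis
# onto the antenna (x, y) plane. |det| = n, so inversion degrades as n -> 0.

def xy_projection(l, m):
    n = np.sqrt(1.0 - l**2 - m**2)
    s = np.array([l, m, n])                      # unit vector towards the source
    e1 = np.array([1.0, 0.0, 0.0]) - l * s       # tangent basis that matches x, y
    e1 /= np.linalg.norm(e1)                     #   at the field centre
    e2 = np.cross(s, e1)
    return np.array([[e1[0], e2[0]],
                     [e1[1], e2[1]]])

print(xy_projection(0.0, 0.0))                   # identity at the field centre
print(xy_projection(0.05, 0.02))                 # paraxial: identity + O(l^2, m^2)
for l in (0.5, 0.9, 0.999):                      # approaching the horizon n -> 0
    P = xy_projection(l, 0.0)
    n = np.sqrt(1.0 - l**2)
    print(f"n = {n:.3f}  |det P| = {abs(np.linalg.det(P)):.3f}  "
          f"cond = {np.linalg.cond(P):.1f}")
```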
on the other hand , for the scalar case , the formal solution for producing a synthesized image is the following scalar , wide - field imaging equation : } \mathrm{d}u\mathrm{d}v\mathrm{d}w\mathrm{d}n , \label{eq : sclwfimage}\ ] ] where is the scalar brightness and is the scalar visibility , see equation ( 13 ) in who call it the 3d method of synthesis imaging .let us apply ( [ eq : sclwfimage ] ) to each component of as if it were a scalar , thus creating a matrix analogue of the scalar , wide - field imaging equation .the resulting brightness matrix ( polarized image ) is } \mathrm{d}u\mathrm{d}v\mathrm{d}w\mathrm{d}n .\label{eq : image2d}\ ] ] however , this is not the true brightness matrix because we see from equation ( [ eq : meuv ] ) that it follows that is actually the projection of the three - dimensional brightness matrix into the -plane .the stokes -projected brightnesses are based directly on in analogy with equation ( [ eq : stokesbright ] ) , that is , the relationship between these stokes brightnesses and the true stokes brightnesses , which are based on can found by recasting equation ( [ eq : briprojrel ] ) as where is a mueller matrix that quantifies the distortion of true stokes brightnesses based on the imaging equation ( [ eq : image2d ] ) . for a very narrow fov , and is approximately unity . in general , however , is not the unit matrix with the effect that the perceived stokes vector is a distortion of the true stokes vector .thus , without further processing a plane - polarized interferometer of short dipoles will exhibit polarization aberrations over wide fields .by contrast , a tripole array interferometer is , at least in theory , polarimetrically aberration - free over a wide field , since the scalar wide - field imaging equation ( [ eq : sclwfimage ] ) applied to the components of its visibility matrix as if they were scalars gives , which can be interpreted as the exact in cartesian ( rather than spherical ) components . examples of these wide - field distortions are displayed in fig .[ fig : poldist4by4 ] .it shows aberration effects for source distributions that are constant over the entire hemisphere , by which we mean that , so the brightnesses do not explicitly vary with direction . ] .fortunately , these distortions can be compensated for in the image plane since is invertible and well - conditioned as long as is not close to one .distortion of various polarized source distributions across the hemisphere for an -polarized interferometer .all distributions are such that the stokes brightnesses are constant , that is , they do not vary explicitly with direction although they may vary implicitly due to variation of the reference system for the stokes brightnesses , and , with direction .the plots are of the polarization ellipses corresponding to the normalized stokes parameters , where are the -projected stokes brightnesses as a function of the direction .the plots clearly show polarization aberration , that is , a direction dependent distortion in the observed polarization .the source distributions state of polarization ( shown in red ) can be seen at the centre of the fov , , where there is no distortion .the upper - left panel shows , i.e. completely unpolarized radiation , the upper - right panel shows circularly polarized radiation , the lower - left shows radiation linearly polarized along , , and the lower - right panel is for radiation linearly polarized along , . 
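A minimal numerical sketch of this polarization aberration follows. The Stokes sign conventions and the tangent basis used here are common choices assumed for illustration, not necessarily those of the text: a purely circularly polarized source is undistorted at the field centre but acquires spurious linear polarization and a reduced circular fraction off-axis, in line with the figure just described.

```python
import numpy as np

# Sketch: "perceived" Stokes parameters after projecting the true brightness
# onto the xy-plane (illustrative conventions only).

def xy_projection(l, m):
    n = np.sqrt(1.0 - l**2 - m**2)
    s = np.array([l, m, n])
    e1 = np.array([1.0, 0.0, 0.0]) - l * s
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(s, e1)
    return np.array([[e1[0], e2[0]], [e1[1], e2[1]]])

def stokes_to_coh(I, Q, U, V):
    return 0.5 * np.array([[I + Q, U + 1j * V], [U - 1j * V, I - Q]])

def coh_to_stokes(B):
    return np.real([B[0, 0] + B[1, 1], B[0, 0] - B[1, 1],
                    B[0, 1] + B[1, 0], (B[0, 1] - B[1, 0]) / 1j])

B_true = stokes_to_coh(1.0, 0.0, 0.0, 1.0)        # purely circular polarization
for l in (0.0, 0.3, 0.7):
    P = xy_projection(l, 0.0)
    print(l, np.round(coh_to_stokes(P @ B_true @ P.T), 3))
# at l = 0 the Stokes vector is unchanged; off-axis the intensity drops,
# spurious Q appears and the apparent circular fraction V/I falls: this is
# the direction-dependent distortion shown in the figure.
```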
] another aspect of equation ( [ eq : stokespdist ] ) that is important , is that it shows , not surprisingly , that partially polarized , wide fields can not be treated as scalar , wide fields .in fact , even for the supposedly ` scalar ' case , that is , when we only consider the scalar visibility of an unpolarized source distribution , so , equations ( [ eq : stokespdist ] ) and ( [ eq : image2d ] ) imply that where stands for two - dimensional fourier transform .this should be compared with for the isotropic , scalar antenna case ( see , e.g. , equation ( 1 ) in ) .this expression differs even for narrow fields since , to lowest , non - vanishing order in and , equation ( [ eq : myscalarvcz ] ) becomes approximately , \ ] ] while for the isotropic , scalar antenna case .thus short dipoles are less aberrated than isotropic , scalar antennas .the conclusion here , that scalar theory is not sufficient for the description of the general vc - z relations , agrees with those detailed in the field of optical coherence , see e.g. .also , equation ( [ eq : myscalarvcz ] ) corresponds to an analogous equation in .consider an unpolarized point source at , so , and express as ( where is the angle between the point source position and , and is equivalent to in ) , then the intensity of the unpolarized point source , as measured by single - pixel telescope , is .this corresponds to equation ( 47 ) in .although the results will be different for other types of antennas , the hertzian dipole is an important special case as it is the simplest polarimetric antenna and they are directly proportional to the cartesian coordinates of the electric visibility matrix .the ultimate reason for the aberration is of course that the incident , transverse field is being projected onto the polarization plane of the hertzian dipoles thereby distorting the field . indeedthis projection is identical to the orthographic projection of a hemisphere in cartography . in comparison to other effects encountered in instrumental calibration , such as beam - shape, these effects may seem small .they are , however , important in that they determine the ultimate limits of polarimetry since these effects are of a intrinsic , geometric nature .inspection of reveals that around the aberrations are . 
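To make the scalar-limit statement concrete: for an unpolarized source, projecting the transverse field onto the xy-plane reproduces the familiar (1 + cos^2 theta)/2 reception pattern of a crossed pair of short dipoles, whereas an isotropic scalar antenna weights all directions equally. The sketch below is an independent check under assumed basis conventions, not a transcription of the text's equations.

```python
import numpy as np

# Sketch: power measured from a unit-intensity unpolarized source at polar
# angle theta by an ideal x-y crossed-dipole pair, via the xy-projection of
# the transverse coherence matrix.

def projected_power(theta, phi=0.3):
    s = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    e1 = np.array([np.cos(theta) * np.cos(phi),     # any orthonormal basis of the
                   np.cos(theta) * np.sin(phi),     #   plane transverse to s
                   -np.sin(theta)])
    e2 = np.cross(s, e1)
    T2 = np.array([[e1[0], e2[0]], [e1[1], e2[1]]])  # tangent -> xy projection
    B = 0.5 * np.eye(2)                              # unpolarized source, I = 1
    return np.trace(T2 @ B @ T2.T)

for theta_deg in (0, 10, 30, 60, 89):
    th = np.radians(theta_deg)
    print(theta_deg, projected_power(th), 0.5 * (1.0 + np.cos(th) ** 2))
# the two columns agree: (1 + cos^2 theta)/2, not the flat response of an
# isotropic scalar antenna
```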
in the case of the aperture array of the ska, the preliminary specification has a mean fov of 250 square degrees , so the aberration error is in the order of ~2% , or -16 db , at the edge of the fov .this should be compared with the requirements from the key scientific projects that in some cases call for -30 db in polarization purity over wide fields .thus , these polarization aberrations will need to be considered in wide - field imaging with ska .it is not difficult to show that these polarization aberrations occurring at the edges of wide - field images also occur for off - axis imaging with fixed mount telescopes .this occurs , for instance , in phased arrays of crossed dipoles such as lofar and the low frequency part of ska , where imaging at scan angles away from boresight ( zenith ) will be achieved by electronically steering the beam - form .since the crossed dipoles are fixed to the ground and are not on a mechanically rotating mount , a situation geometrically analogous to the wide - field imaging consider above occurs .this leads to similar aberrations in polarization for short electric dipole arrays .scan angles of 45 have been proposed for which the aberrations will of the order of -3 db .we have derived the full set of electromagnetic vc - z relations , which are the basis of radio astronomical interferometry , without invoking the paraxial approximation .these relations allow all - sky imaging in a single telescope pointing .we have achieved these relations by generalising the usual 2-d cartesian jones vector based m.e . to a 3-d wolf coherence matrix formulation on the ( celestial ) sphere .the derived wide - field vc - z relations are not simply trivial matrix ( or vector ) analogues of the wide - field , scalar vc - z , they exhibit direction dependent polarimetric effects . indeed even in the scalar limit ( that is , unpolarized radiation ) , our m.e . is not the same as the m.e . derived from scalar theory , in the case of hertzian dipoles .furthermore , we found that , for an arbitrary wide - field of sources , the electric visibilities generally have nine complex components for an arbitrary baseline .this implies that the standard stokes ( cartesian ) visibilities do not provide a full description of electric coherence for wide fields .we have also shown that our vc - z relation ( for the electric field ) reduces to the standard 2-d m.e .after a 2-d projection .we showed that a consequence of this projection is that plane - polarized , hertzian dipole interferometers are aberrated polarimetrically .fortunately , these aberrations can be corrected for in the image plane for sources sufficiently far away from the plane of the dual - polarized antenna elements . besides its use in the derivation of the wide - field vc - z relation, we believe that the 3-d wolf formalism can be useful in constructing more general 3-d m.e . in cases requiring the full set of electric components . as examples ,we mention the modelling of tri - polarized element arrays , the modelling of dual - polarized element arrays that are not strictly plane - polarized ( due to manufacturing errors or the earth s curvature ) , and the modelling of propagation that is not along the line - of - sight ( due to refraction or diffraction in the ionosphere , e.g. ) .we thank the referee , j. p. 
hamaker , for valuable and constructive comments .this work is supported by the european community framework programme 6 , square kilometre array design studies ( skads ) , contract no 011938 , and the science and technology facilities council ( stfc ) , uk .bergman j. , carozzi t. d. , karlsson r. , 2003 , international patent publication , wo03/007422 bergman j. e. s. , hln l. , stl o. , thid b. , ananthakrishnan s. , wahlund j .- e ., karlsson r. l. , puccio w. , carozzi t. d. , kale p. , 2005, in dglr int .symposium `` to moon and beyond '' , bremen , germany elvis - electromagnetic vector information sensor j. e. s. , carozzi t. d. , 2008 , preprint ( arxiv:0804.2092 ) bhatnagar s. , cornwell t. j. , golap k. , uson j. m. , 2008 , a&a , 487 , 419 brouw w. n. , 1971 , ph.d .thesis , sterrenwacht leiden carozzi t. , karlsson r. , bergman j. , 2000 , phys .e , 61 , 2024 carter w. h. , 1980 , j. opt ., 70 , 1067 cornwell t. , golap k. , bhatnagar s. , 2005 , in ieee international conference on acoustics , speech , and signal processing , 5 , pp 861864 cornwell t. j. , perley r. a. , 1992 , a&a , 261 , 353 guthmann a. w. , thid b. , 2005 , aip conference proceedings , 745 , pp 770773 hamaker j. p. , bregman j. d. , 1996 , a&as , 117 , 161 hamaker j. p. , bregman j. d. , sault r. j. , 1996 , a&as , 117 , 137 hamaker j. p. , 2000 , a&as , 143 , 515 jouttenus t. , setl t. , kaivola m. , friberg a. t. , 2005 , phys .e , 72 , 046611 the lofar team , july 2007 , astronnews , 12 ludwig a. c. , 1973 , ieee trans .ant . & prop ., 21 , 116 mcconnell d. , carretti e. , subrahmanyan r. , 2006 , apj , 131 , 648 mcguire j. p. j. , chipman r. a. , 1990 , j. opt .soc . am . a , 7 , 1614 mandel l. , wolf e. , 1995 , optical coherence and quantum optics . cambridge university press morris d. , radhakrishnan v. , seielstad g. a. , 1964 , ap .j. , 139 , 551 nehorai a. , paldi e. , 1991 , proc .25th asilomar conf . on signals ,syst . and comput . , pacific grove , ca , 566 piepmeier j. r. , simon h. k. , 2004 , ieee geoscience and remote sensing letters , 1 , 300 schilizzi r. t. , alexander p. , cordes j. m. , dewdney p. e. , ekers r. d. , faulkner a. j. , gaensler b. m. , hall p. j. , jonas j. l. , kellermann k. i. , 2007 , technical report , preliminary specifications for the square kilometre array .ska program development office saastamoinen t. , tervo j. , turunen j. , 2003 , opt .commun . , 221 , 257 sault r. j. , bock d. c .- j . , duncan a. r. , 1999 , a&as , 139 , 387 taylor r. , bredeson c. , dever j. , guram s. , deshpande a. , ghosh t. , momjian e. , salter c. , gibson s. , 2006 , national astronomy and ionosphere center newsletter , 39 , pp 13 thid b. , 2004 , in nilsson b. , fisherman l. , eds . , mathematical modelling of wave phenomena . vxj university press , 315 thompson a. r. , moran m. m. , swenson g. w. j. , 2001 , interferometry and synthesis in radio astronomy. john wiley & sons , inc .wolf e. , 1954 , nuovo cimento , 12 , 884in the previous sections we considered only the electric field and its auto - correlation , but now we look at the full electromagnetic field . one motivation to do this is that , especially for low radio frequencies , it is possible to sample both the electric and the magnetic fields and thereby measure electromagnetic coherence fully , see , or .such sensors have been deployed in some radio interferometers and will possibly provide unique and novel astronomical measurements . 
for completenessthen , we present the rest of the electromagnetic correlations analogous to the electric vc - z relation in equation ( [ eq : vcz3drel ] ) .we found previously that the electric visibility matrix was related to the electric brightness matrix as where denotes the vc - z relationship given by equation ( [ eq : vcz3d ] ) .note that we have now changed the notation of to and to to indicate that these quantities represent auto - correlations of visibility electric field and the brightness jones vectors , respectively .a full electromagnetic vc - z relationship between brightnesses and visibilities requires also the auto - correlations of the magnetic field and the cross - correlation between the electric and magnetic fields , see .we define the electromagnetic visibilities for baseline with respect to the phase reference direction as for , where is the magnetic field at .the electromagnetic brightnesses in direction are defined as for , where is the magnetic field from the source distribution in direction .the magnetic field associated with the visibilities can be derived by applying faraday s law , where is the impedance of free space , to the electric field used in derivation in section [ sec : wolfvcz ] .the result is that the magnetic analogues of the electric vc - z expressions involve a magnetic jones vector that is related to the electric jones vector through where or in other words this says that the magnetic jones vector is directly determined from the electric jones vector , and so no other independent electromagnetic source coherence statistics exist ( in the far - field zone ) other than the electric brightness matrix . repeating the derivation in section [ sec : wolfvcz ] but for the magnetic field we findwe can write the rest of the vc - z relations as thus one can see as a jones - like matrix that switches between electric and magnetic coherencies .these relations provide a complete description of the second - order coherence of the electromagnetic radiation .so , for instance , one can easily compute the poynting visibility vector using the above relations , \right\rangle = \nonumber \\ & \quad -\frac{1}{z_{0}}\iint i\mathbf{s } \mathrm{e}^{-\mathrm{i}2\pi\left[ul+vm+w\left(n -1\right ) \right ] } \ , \mathrm{d}\omega\end{aligned}\ ] ] where .this says that the power flux visibility vector is vc - z related to the stokes brightness propagating from sources .in this paper we have used the standard definitions of the - and -spaces , see , and we have used a spherical basis .both these coordinate systems are relative to the centre of the fov .often one wants to transform to some other system , and some details can be found in the standard texts .however , what is not usually done explicitly is the transformation of the non - scalar brightnesses and visibilities .we will now show how to rotate the matrix brightnesses and visibilities found in the polarimetric , wide - field vc - z , equation ( [ eq : vcz3d ] ) . equation ( [ eq : vcz3d ] ) uses the cartesian system as a basis for , , and the rows of .it uses the spherical base vectors as a basis for and the columns of . under a rotation given by the 3 orthogonal matrix ,the vectors and matrices in the cartesian system transform in the usual manner : note that a transformation analogous to equation ( [ eq : vis3trans ] ) for the matrix , discussed in section [ sec : recover2dme ] , does not exist since lacks one of the dimensions necessary for a general coordinate transformation . 
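The statement above can be verified directly: the 3x3 coherence matrices transform as V -> R V R^T under a frame rotation, whereas the 2x2 output of a dual-polarized pair cannot be transformed on its own once the rotation mixes in the missing z-components. The rotation angle and the random field realisation below are arbitrary choices made only for illustration.

```python
import numpy as np

# Sketch: full 3-D coherence matrices rotate as rank-2 tensors; a 2x2
# dual-polarized correlation does not, unless the rotation leaves z alone.

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

rng = np.random.default_rng(1)
E = rng.normal(size=3) + 1j * rng.normal(size=3)
V = np.outer(E, E.conj())                       # 3x3 electric coherence matrix
L2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # x-y dual-polarized sampling

for R in (rot_z(0.7), rot_x(0.7)):
    V_rot = R @ V @ R.T                         # full 3-D transformation
    lhs = L2 @ V_rot @ L2.T                     # what the dual-pol correlator sees
    rhs = R[:2, :2] @ (L2 @ V @ L2.T) @ R[:2, :2].T   # attempt using 2x2 data only
    print(np.allclose(lhs, rhs))                # True for the z-rotation, False for x
```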
only the correlator output from a tri - polarized antenna arraycan be fully transformed in general .actually , need not be the same in equations ( [ eq : vis3trans ] ) and ( [ eq : bri2trans ] ) , since the spherical and cartesian systems can be rotated separately. this can be used to change the relative alignment ( rotation ) of the spherical system relative the cartesian system .the net result is a change in matrix that relates the spherical components to the cartesian components .consider the special case when the spherical system is rotated in the positive sense around the -axis through angle relative the cartesian system .the effect on the vc - z relations is that is replaced by , where obviously , for we obtain , which is the matrix used in most of this paper . at the field centre , , so for all .however , the and components of its first derivatives are zero only for , that is only for .thus , the special case we have used in this paper , , can be said to possess a projection ( ) that is locally flat at the field centre .this is a reason for choosing the spherical system with as a default , since all other cases would lead to a 2-d m.e .( [ eq : meparaxial ] ) with additional first - order terms that account for the coordinate system curvature within the fov .the matrix can be used to adapt the vc - z relations given in this paper , such as equation ( [ eq : vcz3d ] ) , to standard celestial coordinate systems such as the equatorial system or the azimuth - elevation system .in essence , assuming that the pole is towards the earth s north pole ( in the case of equatorial coordinates ) or zenith ( in the case of az - el coordinates ) , one simply performs and then interprets as either the declination ( equatorial case ) or the elevation ( az - el case ) of the centre of the fov .note however that the cartesian coordinates , in which the and spaces are expressed , would still need to transformed for complete agreement with these standard celestial systems , but for this common task we refer to standard texts such as .
We derive a generalised van Cittert-Zernike (VC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field-of-view (FoV). The classical VC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalised VC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional electric field (Jones vector) formalism of the standard "measurement equation" of radio astronomical interferometry to the full three-dimensional formalism developed in optical coherence theory. The resulting VC-Z theorem enables all-sky imaging in a single telescope pointing, and imaging using not only standard dual-polarized interferometers (that measure 2-D electric fields), but also electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2-D measurement equation is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We find, however, that such dual-polarized interferometers can have polarimetric aberrations at the edges of the FoV that are often correctable. Our theorem is particularly relevant to proposed and recently developed wide-FoV interferometers such as LOFAR and SKA, for which direction-dependent effects will be important. Keywords: telescopes; techniques: interferometric; techniques: polarimetric; instrumentation: interferometers; instrumentation: polarimeters.
trees are ubiqutous structures which appear naturally in a large number of physical , chemical , biological and social phenomena , such as river networks , diffusion limited aggregation , pulmonary arteries , blood vessels and tree species , social organizations , decision structures , etc. they also play an important role in computer science ( use of registers and computer languages ) , in graph theory , and in various methods of statistical physics such as cluster expansions and renormalization group . in spite of their apparent structural simplicity , and the large body of scientific work on trees( a sample of which is found in - , ,- and references therein ) , they still offer challenges even related to the quantitative description of their topological structure . at the dawn of the science of complex networks , it is therefore rather important to have a complete understanding of all the tree structures and their properties .a tree is defined as a set of points ( vertices , nodes ) connected with line segments ( branches , or edges ) such that there are no cycles or loops ( a connected graph without cycles ) . for the simplest ( unlabeled ) rooted plane binary tree ,each vertex has exactly three connecting branches , except for one vertex which is distinguished from all the others by having only two connecting branches coined as the root ( r ) of the tree , and a certain number of vertices with a single connecting branch called the ` leaves ' .the height of the tree is defined by the maximum number of levels starting from the root ( which has height 0 ) , and it can be calculated as the maximum number of branches one has to pass to reach the root from its vertices ( since the leaves have only one branch , it means that this longest excursion must start from one of the leaves ) .the paths from the leaves to the root define a natural direction on the tree ( similarly to the river flow ) which is always towards the levels of lower height . a tree of height we call _ complete _ , if it has leaves each being a distance from the root .let us now mention three applications of the mathematics of trees which are directly connected to the so - called horton - strahler index of the tree , which is the subject of interest of the present paper .originally , the horton - strahler index of a binary tree was introduced in the studies of natural river networks by horton and later refined by strahler , as a way of indexing real river topologies , since river networks are topologically similar to binary trees . by definition ,a leaf has a rank of 0 ( some authors associate the value of 1 ) , and a vertex has a rank of where is the index function with and being the ranks of the two connecting vertices from the level above . when the index is called the horton - strahler index ( hs ) .the quantity of particular interest is the hs index of the root which thus categorizes the topological complexity of the whole tree .several other quantities can be introduced in relation to the hs index .segment _ of order , or a _ stream _ of order is a _maximal _ path of branches connecting vertices of hs index , ending in a vertex with index .let denote the number of segments of order of a tree with leaves , and is the average physical length of a segment of order ( the average is taken on the tree ) .the _ bifurcation ratios _ are defined as , and the length ratios via . 
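The indexing rule just stated can be written down directly. The following sketch is an illustration of my own (trees are encoded as nested pairs, with None standing for a leaf): leaves get rank 0, an internal node gets max(i, j) when its children's ranks differ, and i + 1 when they coincide.

```python
# Sketch: Horton-Strahler index of a rooted binary tree.

def hs_index(tree):
    """tree is either a leaf (None) or a pair (left_subtree, right_subtree)."""
    if tree is None:
        return 0
    i = hs_index(tree[0])
    j = hs_index(tree[1])
    return i + 1 if i == j else max(i, j)

leaf = None
t_balanced = ((leaf, leaf), (leaf, leaf))      # complete tree with 4 leaves
t_caterpillar = (((leaf, leaf), leaf), leaf)   # maximally unbalanced, 4 leaves

print(hs_index(t_balanced))     # 2: both subtrees of the root have index 1
print(hs_index(t_caterpillar))  # 1: one child always dominates
```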
horton and strahler have empirically observed that for river networks both the and tend to approximate a geometric series , with and with .such networks are called _ topologically self - similar _ .the notion of hs index is further refined by introducing the _ biorder _ of a vertex , representing the hs indices of its two children , , , and then studying the ramification matrix , with elements related to the number of vertices with a given biorder .another interesting application of the mathematics of binary trees and the hs index , is in the description of the branched structure of diffusion - limited aggregates see ref . and references therein . in this casethe structures are grown on a substrate ( which can be a point or a plane ) by letting small particles diffuse towards the aggregate where they stick indefinitely at their point of first contact with the cluster .this creates complex and involved branched structures , whose topological complexity still remains a challenging problem to describe .finally , the last application we would like to mention is known as the _ word bracketing problem _ which has obvious implications in computer science .let us consider an alphabet of letters , and a word , .a 2-bracketing of the word is a partition of its letters ( by keeping their order ) in groups of two units enclosed in brackets , where a unit can be a letter or a subpartition enclosed in brackets , such as , or , etc .the bracket between two units may be associated with a multiplicative composition law .for example let the alphabet be all the positive integers , and the composition law be the regular multiplication of numbers .then a bracketing of the multiple product corresponds to one particular way of calculating .a one - to - one correspondence can be made immediately to trees : let the letters of the word be associated with the leaves of a binary tree . to a particular bracketing of it corresponds a particular tree constructed by associating a lower level vertex to a bracketing ( one may think of the brackets as representing the branches of the tree ) . the main question is how many ways are there to calculate such a product . if one assumes that the multiplication law is neither associative nor commutative , then the problem is refered to as the catalan problem , see ref . for a number of solutions .the number of such bracketings is given by the catalan numbers , .the corresponding set of trees ( see fig.1 for ) is in fact the set of rooted , unlabeled binary plane trees according to this bijection .= 5.0 in for later reference , we mention that the generating function of the catalan numbers obey the equation , with , so the power series converges within a disk of radius .the problem of enumerating trees becomes more difficult if the composition law is commutative , which was first studied by wedderburn and etherington ( we ) , , . 
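A short sketch (mine, not the text's) makes the bracketing-tree correspondence concrete: the convolution recurrence for the number of 2-bracketings of an n-letter word reproduces the closed-form Catalan numbers.

```python
from math import comb

# Sketch: bracketings of an n-letter word, equivalently rooted unlabeled plane
# binary trees with n leaves, counted by splitting the word into two halves.

N = 12
c = [0] * (N + 1)
c[1] = 1                                               # a single letter needs no bracket
for n in range(2, N + 1):
    c[n] = sum(c[k] * c[n - k] for k in range(1, n))   # left half has k letters

closed_form = [comb(2 * (n - 1), n - 1) // n for n in range(1, N + 1)]
print(c[1:])          # 1, 1, 2, 5, 14, 42, 132, ...
print(closed_form)    # identical: the Catalan numbers
```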
in the tree language , this means that two trees are considered identical if after a number of successive reflections with respect to the vertical axes passing through the vertices they can be transformed into each other and in this case they are said to be homeomorphic .for the example shown in fig .1 , there are only two such trees , since trees 1 ) , 2 ) , 4 ) and 5 ) can be transformed into each other .the trees that can not be transformed into each other are called non - homeomorphic .the set of non - homeomorphic trees is called the set of _ ambilateral _ trees , , .let the number of such trees with leaves be denoted by .the generating function ( gf ) defined as obeys the nonlinear functional equation : which has extensively been studied by wedderburn .otter studying a more general counting problem where the vertices can have at most branches comes to the conclusion that for the ambilateral trees , if is large we have : where .the method developed by otter gives an iterative approach to and .for example is where , so that for one already obtains an extremely close value of .later , bender developed a more general approach deriving the same results .the coefficient in ( [ wen ] ) can also be computed : .the more practical application of the bracketing problem within computer science is the computation of arithmetic expressions by a computer .a general arithmetic expression involving only binary operators can simply be mapped onto a binary tree , called the syntax tree , which has as leaves the operands and the inner vertices the operators .a computer traverses this tree from the leaves towards the root and it uses registers to store the intermediate results .in general there are many ways of traversing such a tree , and the program that uses the _minimal _ number of registers is the most efficient , or optimal one .ershov has shown that the optimal code will use exactly as many registers to store the intermediate results as the hs index of the associated syntax tree . 
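The ambilateral counts themselves are easy to generate numerically. The sketch below (not part of the original text) evaluates the standard Wedderburn-Etherington recurrence, which pairs two unordered subtrees and therefore treats the equal-size case separately; the resulting sequence 1, 1, 1, 2, 3, 6, 11, 23, ... is the one whose exponential growth is discussed above, and the printed ratio approaches that growth constant only slowly at these sizes.

```python
# Sketch: v[n] = number of ambilateral (reflection-inequivalent) binary trees
# with n leaves, i.e. the Wedderburn-Etherington numbers.

N = 14
v = [0] * (N + 1)
v[1] = 1
for n in range(2, N + 1):
    m = n // 2
    if n % 2:        # odd: the two subtrees always differ in size
        v[n] = sum(v[i] * v[n - i] for i in range(1, m + 1))
    else:            # even: count the equal-size pair as an unordered pair
        v[n] = sum(v[i] * v[n - i] for i in range(1, m)) + v[m] * (v[m] + 1) // 2

print(v[1:])          # 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179
print(v[N] / v[N - 1])  # successive ratio, slowly approaching the growth constant
```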
in the present paperwe investigate how the hs index is distributed on both the rooted , unlabeled , plane binary set of trees , and on the ambilateral set of binary trees .we first answer this question on the rooted , unlabeled , plane binary set , since it is simpler , but it will also provide us with a technique that can be extended to tackle the problem for the ambilateral set .for this set , the question was first answered by flajolet , raoult and vuillemin , with a method somewhat similar to the one presented here .the enumeration problem of the hs index on the ambilateral set is , however , inherently more difficult since it involves functional equations with nonlinear dependence in the argument similar to eq .( [ nlf ] ) , and therefore an explicit solution in a closed form becomes impossible to attain .the derivation of an approximant formula for the number of ambilateral trees sharing the same hs index at the root is the main result of this paper .the paper is organized as follows : first we present our derivation of the enumeration problem for the hs index on the unlabeled set in section ii , and then use this method of derivation from this case to develop a technique that can be used to attack the enumeration problem on the ambilateral set in the asymptotic limit , presented in section iii .section iv is devoted to conclusions and outlook .let us observe that the root of the tree has always two subtrees attached to it via the two branches , with and leaves , respectively , .let denote the number of unlabeled trees with leaves that share the same hs index at the root .a recursion is found for this number in the light of the observation above : with the conventions , , .if the generating function for the variable is defined as , then it obeys : next we give an exact solution to ( [ recc ] ) .let us introduce the sum then , and after rearranging the terms , eq .( [ recc ] ) becomes , where .this means that , i.e. , : note that the left hand side of ( [ b ] ) remains invariant to which is another solution of ( [ b ] ) .however , since in case of the hs index , this latter solution has to be dropped .if we make , ( [ b ] ) simplifies to . which , after dividing on both sides by , and introducing , becomes : let us now write , such that . then ( [ z ] ) becomes which leads to , , and which in turn is solved easily .thus , , so one finally obtains : eq .( [ ux ] ) is the exact solution to ( [ recc ] ) in the complex plane . on the real axis , within the radius of convergence above expression takes the form : ] , where is the finite difference operator . for a different method ,see ._ scaling limits ._ next we briefly present the results of an asymptotic analysis on the numbers .since is an enumeration result , it typically contains several scaling limits . in physical processes , during the growth of branched structures , usually only one of these limits is selected , and in frequent cases this limit has self similar properties ( such as for dla , or for random generation of binary trees , ) . by definition , the family of trees that obey } ) / r = const .\equiv \ln{{\cal b}} ] .the rate of the exponential growth is a number between and .2 ) , , . herethe first term in ( [ final1 ] ) is still dominant ( the rest being exponentially small corrections ) and yields : .if diverges with slower than exponential , we have topological self similarity with .+ 3 ) , , , with some . 
in this casethe rest of the terms in ( [ final1 ] ) ( after the first has been factored out ) are of the type and the final expression is : .the topological self similarity is obvious with .the factor is given by .+ 4) , , , and . in this casethe analysis is performed easier from the combinatorial expression of and yields : .let us now analyze the same question on the set of ambilateral trees , and denote the number of ambilateral trees with leaves and hs index by .we certainly must have the relation = 3.0 in the table in fig .[ tbl1 ] gives the distribution of the hs index for up to 32 and .we can check easily that , and , so for simplicity these are not represented in the table . the numbers obey slightly more complicated recurrence relations since now the counting has to be done on a more restricted set .we must distinguish between odd and even values .however , the two cases can be combined into one , if the convention for non - integer is adopted .the corresponding recurrence relation becomes : + m^{(r)}_{n/2 } \sum_{s=0}^{r-1 } m^{(s)}_{n/2 } + \frac{1}{2 } m^{(r-1)}_{n/2}\big ( 1+m^{(r-1)}_{n/2}\big)\end{aligned}\ ] ] the generating function will thus obey : ^ 2 + v_{r-1}(\xi^2 ) } { 1-\sum\limits_{s=0}^{r-1 } v_s(\xi ) } , \;\;r\geq 1 , \label{rec2}\ ] ] and . as a check for the correctness of ( [ rec2 ] ) ,let us see if we recover the identity ( which follows from ( [ closure ] ) ) .( [ rec2 ] ) is equivalent to ^ 2 + v_{r-1}(\xi^2 ) ] . using the identity , one finds ^ 2 + \frac{1}{2 } g(\xi^2) ] , , which can be checked to hold , see the table in fig .the result from the inversion of is already so complicated that it is not worth presenting .as the index increases , the polynomial expressions become more and more involved .figure [ fig4 ] shows the function in the interval ], from the assumption it would follow that the equation _ can not _ have any solutions ( is analytic within the circle of convergence ) in the interval .( note that in the interval , the numerator can not be zero , since the power series has only positive coefficients ) .the equation is equivalent to .however , from ( [ rec2 ] ) /v_r(x)we have shown previously , that ( it is the limit of the monotonically decreasing series ) , therefore we have : since , and where , , just as in the introduction . the convergence is double - exponential , very fast .as in section ii , the asymptotic behavior of the numbers for relatively large and is governed by the innermost singularity of on the real axis .the graph of shown in figure [ fig4 ] suggests , that the generating function is in fact well behaved in a certain interval to the right of the radius of convergence , , see also figure [ fig5 ] .the existence of this interval comes from the fact that the singularities of the term with nonlinear argument in the numerator of ( [ rec2 ] ) kick in only beyond the circle of convergence of , which is .thus , in the interval the term with the nonlinear argument is analytic , which ultimately is responsible for this nice behaviour . because , , for convenience we shall define the interval of this nice behaviour to be .in order to exploit this observation , we shall first rewrite the recurrence relation ( [ rec2 ] ) .let us denote . 
with this notation , ( [ rec2 ] )takes the form , , where .this leads to the new recurrence : .this would be exactly solvable if it were not for the dependence on the nonlinear argument .note the resemblance to ( [ b ] ) .let , which is an analytic function in .we also have , the latter equality being shown previously .this shows , that in the interval , the -dependence _ weakens extremely fast _ , double - exponentially with increasing . as a matter of fact, an upper estimate is in particular , , , , , , etc . therefore , from the point of view of the asymptotic behavior ,the functions can be replaced by their asymptotic expression ( as ) : figure [ fig6 ] shows the functions on the interval for .= 4.0 in thus , instead of eq .( [ rec3 ] ) we will consider : the recurrence ( [ app ] ) in turn is easily solved in the way shown in section i. the result is : where for the moment is an arbitrary ( positive integer ) index .recurrence ( [ app ] ) will become a good approximation to the recurrence ( [ rec3 ] ) from an index on .the larger is the more accurate the approximation .recurrence ( [ app ] ) is applied then with initial condition , which for modest values can be obtained by iterating ( [ rec3 ] ) times . what is the error we make when one replaces with on ? summing the differences ( [ diffu ] ) from to infinity , one obtains the estimate : .thus , for example , is smaller than , is smaller than , etc .therefore , we can finally write on : in fig .( [ figxy ] ) we plot the rhs of ( [ apo ] ) and the function from iterating ( [ rec2 ] ) .note that the approximation is very good , and it becomes virtually indistinguishable from the true function the closer is to .larger values will also give better approximations , since the approximation is only applied from the index on .however , can not be taken too high for approximation purposes , since it assumes that the exact expression of ( or ) is known .this makes only the modest values ( less than 5 ) useful .on the other hand , expression ( [ apo ] ) is very practical in analysing the singularities of and give rather close approximant expressions to these singularities .in particular , we see that within the interval , ( [ apo ] ) preserves the property that if is a singularity of ( or a zero of ) then it is a singularity of ( or a zero of ) , whenever .if one is interested in the asymptotic behavior , then a more tractable expression can be derived for the rhs of ( [ apo ] ) : the function is analytic on the interval , and since already for modest values , the innermost singularity of ( denoted ) is extremely close to , one can safely replace in this neighborhood by : .= 4.0 in this leads to the approximant : for sufficiently large ( here `` large '' means ) where next , we compute . one can use a very similar method to the one employed to obtain ( [ afa ] ) , to give : so , . if one computes for , we have , and thus .if we were to use , then one would obtain , so and slightly improve the approximation on .no significant improvement will be obtained with larger values .figure [ fig9 ] shows the agreement of the form given in ( [ apo1 ] ) . for clarity , we defined the function given by : here we use the true function using numerical iteration of ( [ rec2 ] ) , and evaluate it in the points .if the approximation ( [ apo1 ] ) is good , then one should have . 
as seen from fig .[ fig9 ] the approximation is already excellent for close to ( which corresponds to the point in this plot ) .the interval in these transformed corrdinates corresponds to .there are no fitting parameters , we used for and the values derived above . in order to obtain the approximation to the _ number _ of ambilateral trees with the same hs index at the root , we will have to invert ( [ apo1 ] ) .the singularities of the rhs of ( [ apo1 ] ) are given by : ( at the moment we do not care whether some of the singularities will fall outside the interval , we just simply want to invert ( [ apo1 ] ) , and then at the end keep only those terms from the final expression that were generated by the singularities within ) .= 4.0 in in a similar manner to the previous section , we first bring to an inverted polynomial form : ^{2^r } } { 2^{r+1 } \theta^{2^{r+1}-1 } q_r(\xi)}\ ] ] where is the polynomial : .the case from the previous section ii corresponds to and .thus , if we denote by the numbers coming from the inversion of , then : ^{2^r } } { 2^{r+1 } q_r(\xi)}\ ] ] we have : after performing the integrals , one obtains : ^{-n-1 } \sum_{m=0}^{\min\{n,2^r\ } } { 2^r \choose m } ( 1-\alpha \theta^2)^{2^r - m } \left[\theta^2 \xi_j^{(r ) } \right]^{m } \label{ems}\ ] ] this expression shows that the may only _ approximate _ the numbers in a certain limit .this is seen from the fact that while one must have for , and , this is not respected by ( [ ems ] ) ( it would only be respected if , however , this is not the case , and the reason behind this discrepancy is the neglected nonlinearity from the calculations ) .the limit , in which the approximation becomes good is for large ( it means ) and . in this casethe sum over can be performed , and one obtains : ^{-n-1 } \left [ 1+\theta^2 ( \xi_j^{(r)}-\alpha ) \right]^{2^r } \label{ems1}\ ] ] the numbers can be calculated in exactly the same way we did in the previous section .this leads to : ^{2^r-1}}\;.\ ] ] inserting it into ( [ ems1 ] ) it yields : \left(\xi_j^{(r ) } -\alpha \right ) } { [ \xi_j^{(r)}]^{n+1 } } \label{mbar}\ ] ] as a check to the correctness of ( [ mbar ] ) we can take and from the unlabeled case , to obtain ( [ explicit ] ) . equation ( [ mbar ] ) explicitely shows the contribution of each singularity .however , if we want to approximate the numbers , we should also account for the condition . using the expression ( [ sing2 ] ) , this leads to , where : thus , using again ( [ sing2 ] ) : } ( -1)^{j+1 } \frac{\tan^2\!{\left(\frac{j\pi}{2^{r+1}}\right ) } \left [ 1 + \tan^2\!{\left(\frac{j\pi}{2^{r+1}}\right)}\right ] } { \left [ \alpha + \theta^{-2 } \tan^2\!{\left(\frac{j\pi}{2^{r+1}}\right)}\right]^{n+1 } } \label{mbart}\ ] ] when the asymptotic limit is generated by the innermost root , i.e. , by the first term in ( [ mbart ] ) , one obtains for the topologically self similar ambilateral trees , the scaling behaviour : and therefore .let us now see how well formula ( [ mbart ] ) approximates the numbers . to do this , we shall define the error \cdot 100\% ] , and thus .further error values : , , .combinatorial enumeration of trees is typically difficult to solve when the set under enumeration obeys symmetry - exclusion principles , such as for the ambilateral case treated here .these symmetry - based constraints may arrise in realistic situations and thus forces us to enumerate _classes _ of subsets of trees . 
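Both enumerations can also be cross-checked by direct dynamic programming. The sketch below is my own rendering of the two recursions as stated, not code from the text: T[n][r] counts plane trees and M[n][r] counts ambilateral trees with n leaves and root index r; the row sums reproduce the Catalan and Wedderburn-Etherington numbers respectively, and the r = 1 column of the ambilateral table is identically 1, since up to reflections there is a single caterpillar tree of each size.

```python
from math import comb

# Dynamic-programming cross-check (illustrative). A root has index r when both
# subtrees have index r-1, or exactly one subtree has index r and the other a
# smaller index.

N, RMAX = 16, 5
T = [[0] * (RMAX + 1) for _ in range(N + 1)]   # plane (ordered) trees
M = [[0] * (RMAX + 1) for _ in range(N + 1)]   # ambilateral trees
T[1][0] = M[1][0] = 1

def below(table, n, r):            # trees with n leaves and index strictly below r
    return sum(table[n][s] for s in range(r))

for n in range(2, N + 1):
    for r in range(1, RMAX + 1):
        # plane trees: sum over all ordered splits (k, n-k)
        T[n][r] = sum(T[k][r - 1] * T[n - k][r - 1]
                      + T[k][r] * below(T, n - k, r)
                      + below(T, k, r) * T[n - k][r] for k in range(1, n))
        # ambilateral trees: unordered pairs of subtrees
        tot = sum(M[k][r - 1] * M[n - k][r - 1]
                  + M[k][r] * below(M, n - k, r)
                  + below(M, k, r) * M[n - k][r] for k in range(1, (n + 1) // 2))
        if n % 2 == 0:
            h = n // 2
            tot += M[h][r] * below(M, h, r) + M[h][r - 1] * (M[h][r - 1] + 1) // 2
        M[n][r] = tot

for n in (4, 8, 16):
    catalan = comb(2 * (n - 1), n - 1) // n
    print(n, sum(T[n]), catalan, sum(M[n]), M[n][1])
# row sums of T give the Catalan numbers (5, 429, 9694845); row sums of M give
# the Wedderburn-Etherington numbers (2, 23, 10905); M[n][1] is always 1.
```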
in the ambilateral case a class is defined as being formed by those binary trees that have the same number of leaves and hs index at the root and can be obtained one from another via successive reflections with respect to the nodes of the tree .certainly , the symmetry operation defining the class must be an invariant transformation of the topological index ( hs in our case ) .an other example of such symmetry - operation - generated class - enumeration is the case of the `` leftist trees '' playing an important role in the representation of _ priority queues _ , first shown by crane , followed by knuth , who gives their explicit definition .an elegant enumeration for the leftist trees , using generating function formalism was only given very recently by nogueira .the existing solutions to such class - enumerations on trees ( such as ours and that of flajolet et .al . and of nogueira ) are obtained via methods taylored for the particularities of the set and symmetry operation in question .it is desirable to have , however , at least on a formal level , a general encompassing theory of class - enumerations of topological indices . in this direction ,powerful methods such as that of the antilexicographic order method developed by erds and szkely , or the method of bijection to schrder trees developed by chen may turn to be effective after a suitable extension to include topological indices such as the horthon - strahler index .this , however , stands as an open problem .i am especially thankful to eli ben - naim for introducing this problem to me , and for the many constructive suggestions while i was working on it .useful discussions and comments from i. benczik , t. brown , w. y. c. chen , p. l. erds , m. hastings , g. istrate and r. mainieri are also gratefully acknowledged .this work was supported by the department of energy under contract w-7405-eng-36 .
The Horton-Strahler (HS) index has been shown to be relevant to a number of physical (such as diffusion limited aggregation), geological (river networks), biological (pulmonary arteries, blood vessels, various species of trees) and computational (use of registers) applications. Here we revisit the enumeration problem of the HS index on the set of rooted, unlabeled, plane binary trees, and enumerate the same index on the ambilateral set of rooted, plane binary trees with a given number of leaves. The ambilateral set is a set of trees whose elements cannot be obtained from each other via an arbitrary number of reflections with respect to vertical axes passing through any of the nodes of the tree. For the unlabeled set we give an alternate derivation to the existing exact solution. Extending this technique to the ambilateral set, which is described by an infinite series of non-linear functional equations, we are able to give a double-exponentially converging approximant to the generating functions in a neighborhood of their convergence circle, and derive an explicit asymptotic form for the number of such trees.
mass transfer ; diffusion ; dissolution ; precipitation ; particle ; mathematical analysismany technological applications involve mass transfer with respect to particles . for fundamental understanding in mathematical terms ,the problem of mass transfer to and from a particle is typically treated as an isolated sphere with time - dependent radius in a continuous medium of infinite extent as a consequence of heat - mass transfer ( cf . * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?when the attention is focused on the dissolution ( or growth by precipitation ) of solid particles in liquids , as especially important in pharmaceutical dosage form development , the mass transfer problem may often be simplified by ignoring the effects of convection and phase - change heating such that the governing equations become linear with the `` quasi - stationary '' treatment . thus , the mathematical problem is tractable for deriving analytical solutions as usually desired for engineering evaluations .moreover , when the mass transfer is mainly limited by the diffusion process rather than the rate of phase change , as often to be the case in several realistic applications , the solute concentration at the particle - medium interface can be assumed to take a constant value of the so - called solubility .then the mathematical problem physically describes a diffusion - controlled mass transfer process , with all the boundary conditions given in the form of dirichlet type . despite the efforts of many authors over years, the mathematical analyses of this relatively simplified diffusion - controlled quasi - stationary mass transfer problem have not been thoroughly satisfactory in terms of completeness and clarity .basic understanding of the accuracy and validity of some approximation formulas seems to be lacking in the literature .the purpose here is to first present a consolidated mathematical formulation of the spherically symmetric mass - transfer problem , then to derive the quasi - stationary approximating equations mainly based on a perturbation procedure for the leading - order effect , and to provide a complete set of exact analytical solutions for the entire parameter range . because the exact solutions can only be written in implicit forms , effort in semi - empirical construction of explicit formulas for the particle radius as a function of time is also made for convenience in engineering practice .the diffusion - controlled mass transfer to and from a spherical solid particle of ( a time dependent ) radius in an incompressible continuous fluid medium with a constant density and a constant diffusion coefficient is governed by where denotes the mass concentration of the solute ( namely the dissolved solid from the particle ) , the time , and the radial distance from the center of the sphere . in an incompressible fluid with a spherically symmetric flow , the radial velocity is simply where denotes the ( constant ) solid particle density , to satisfy the equation of continuity and to account for the effect of volume change during the solute phase change ( e.g. , * ? ? ?* ) . 
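For orientation, representative magnitudes can be attached to the quantities just introduced. The values below are assumed, order-of-magnitude numbers for a small drug particle dissolving in water (none of them come from the text), using the natural diffusion scaling of length by the initial radius and time by R0^2/D; the solubility-to-density ratio that emerges is small for typical pharmaceutical systems, which is the kind of small parameter exploited by the perturbation treatment that follows.

```python
# Rough magnitude check under assumed (hypothetical) property values.

R0 = 10e-6       # initial particle radius [m]
D = 5e-10        # solute diffusivity in water [m^2/s]
c_s = 10.0       # solubility [kg/m^3]
c_inf = 0.0      # bulk concentration far from the particle [kg/m^3]
rho_s = 1200.0   # solid particle density [kg/m^3]

ratio = (c_s - c_inf) / rho_s   # solubility driving force over solid density
t_diff = R0**2 / D              # characteristic diffusion time

print(ratio)                    # ~0.008: a small dimensionless parameter
print(t_diff)                   # ~0.2 s
print(t_diff / (2 * ratio))     # ~12 s: quasi-steady-state dissolution time scale
```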
at the particle surface ,the mass balance based on fick s first law of binary diffusion accounting for the bulk flow effect with the solvent flux being ignored leads to {\breve{r}=\breve{r } } \ , .\ ] ] in a diffusion - controlled process , the typical boundary conditions for are and initial conditions are where denotes the solubility ( or ` saturated mass concentration ' ) of the solute in the fluid medium is treated as a constant , implying that the particle size effect on solubility as may be observed for submicron particles ( often due to significant surface energy influence ) , is ignored for theoretical simplicity ] and the initial uniform solute concentration . if we consider as a dimensionless variable , measure length in units of and time in units of ,the governing equations ( [ diffusion_eq])-([ic_eq ] ) can be written in a nondimensionalized form where , , , and ] .thus , we see that ( [ new_solution_r ] ) can not be the same as ( [ hode_solution ] ) at least for . a careful examination of the derivation of ( 22 ) , however , reveals an error ( which was somehow not corrected by those authors even in several follow - up publications , e.g. , * ? ? ?* ; * ? ? ?the correct expression of ( 22 ) should be therefore , we should have which leads to the same equation as ( [ first - order - r_eq ] ) and solution as ( [ hode_solution ] ) rather than ( [ new_solution_r ] ) .thus , the same leading - order result can be obtained via seemingly different treatments . for describing the quasi - stationary dissolution process , ( [ hode_solution ] ) should be taken as the correct leading - order solution ( for ) .it might be noted that so far consideration is only given to the case of , which describes the quasi - stationary dissolution process of a spherical particle .mathematically , solution also exists for the case of as well as in ( [ first - order - r_eq ] ) . from a physical point of view, the case of in ( [ first - order - r_eq ] ) describes the inverse process of precipitation growth of a spherical particle , i.e. , when corresponding to the situation of particle growth in a supersaturated solution .somehow , the exact analytical solution to ( [ first - order - r_eq ] ) for does not seem to have been presented in published literature , unlike the case of .here it is derived to complete the mathematical solution for ( [ first - order - r_eq ] ) .with , ( [ hode ] ) must be replaced by -(u+\epsilon)^2}{u } \mbox { or } \tau \frac{d\tilde{u}}{d\tau } = \frac{1-\tilde{u}^2}{\tilde{u}+\tilde{\epsilon } } \ , , \ ] ] where , and .the solution to ( [ hode2 ] ) is then ( \tilde{u}^2 - 1)}\left(\frac{\tilde{u}+1}{\tilde{u}-1}\right)^{\tilde{\epsilon } } \quad ( \tilde{u } > 1 , \epsilon < 0 ) \ , , \ ] ] which appears to be quite different from ( [ hode_solution ] ) = 2 \mbox { coth}^{-1 } \tilde{u} ] based on ( [ new_solution_r ] ) would be , , , and respectively for , , , and .moreover , ( [ new_solution_r ] ) tends to predict faster dissolution whereas ( [ approx_solution ] ) slower .this is because ( [ new_solution_r ] ) over estimated the flux term associated with in ( 26 ) by mistakenly replacing ( ) with .however , the effect of usually only dominates for a short time when is small and is not too far from unity especially when . shown in fig . 
1 is a comparison among the exact quasi - stationary solution ( 18 ) , the quasi - steady - state solution ( 35 ) , and the approximate formulas ( 27 ) of and ( 37 ) .even at , the deviation of quasi - steady - state solution ( 35 ) from the exact solution is still quite significant due to the unaccounted initial effect from the flux term for small .the overall improvement of ( 37 ) from the quasi - steady - state solution ( 35 ) is clear , especially for small ( or in general ) where the curve of ( 37 ) consistenly remains close to that of the exact quasi - stationary solution . .] in view of fig .1 , an explicit approximation formula may be constructed semi - empirically by combining ( [ new_solution_r ] ) and ( [ approx_solution ] ) as + [ 1-\alpha(\epsilon ) ] \left(\sqrt{1 - 2 \epsilon \ , t } - 2 \epsilon \sqrt{t}\right)^2 } \ , , \ ] ] which with \quad ( 0 < \epsilon< 0.1 ) \ , \\ 0.0193 ( \log_{10}\epsilon)^2 - 0.2703 \log_{10}\epsilon + 0.095 \quad ( 0.1 \le \epsilon \le 0.5 ) \ , \end{array } \right . \ , \nonumber\end{aligned}\ ] ] can consistently produce the value of very close to that of the exact solution ( [ hode_solution ] ) for any in the entire range of . by the similar token , accurate formulas of for also be constructed but is not attempted here because most practical situations , e.g. , in pharmaceutical dosage form development , typically concern with . for particle growth in a supersaturated environment , i.e. , as in the case of phase separation in a drying coating , the exact quasi - stationary solution is given by ( [ hode2_solution ] ) with .2 for particle growth shows that comparing to ( [ hode2_solution ] ) , ( [ approx_solution ] ) seems to overestimate the particle growth whereas ( [ new_solution_r ] ) of and the quasi - steady - state solution ( [ qss_ode_solution ] ) both underestimate it . .] based on this observation , a fairly accurate explicit approximate formula to the exact quasi - stationary solution ( [ hode2_solution ] ) may be semi - empirically constructed with the same form as ( [ new_approx_solution ] ) but having \quad ( -0.5 \le \epsilon < 0 ) \ , \ , , \ ] ] for evaluating the situation of particle growth .starting from a consolidated mathematical formulation of the spherically symmetric mass - transfer problem , the quasi - stationary approximating equations can be derived based on a perturbation procedure for the leading - order effect . 
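The qualitative behaviour of this comparison can be reproduced with a few lines of numerical integration. The sketch below is illustrative only: it assumes the leading-order quasi-stationary equation takes the classical Epstein-Plesset-type form dR/dt = -eps (1/R + 1/sqrt(pi t)) with R(0) = 1 (the text's own equation is written in different notation and is not reproduced here), and compares the resulting dissolution time with the quasi-steady-state estimate 1/(2 eps) obtained from R = sqrt(1 - 2 eps t); the transient flux term always shortens the dissolution time, and the relative gap widens as eps grows.

```python
import numpy as np

# Sketch: dissolution time under an assumed quasi-stationary ODE versus the
# quasi-steady-state prediction 1/(2*eps).

def dissolution_time(eps, ns=400000, s_max=10.0):
    # integrate in s = sqrt(t) so the 1/sqrt(t) term is regular at the start:
    # dR/ds = -2*eps*(s/R + 1/sqrt(pi))
    ds = s_max / ns
    R, s = 1.0, 0.0
    for _ in range(ns):
        R += ds * (-2.0 * eps) * (s / R + 1.0 / np.sqrt(np.pi))
        s += ds
        if R <= 0.0:
            return s * s          # dimensionless time at which R reaches zero
    return float("nan")

for eps in (0.01, 0.1, 0.5):
    print(eps, dissolution_time(eps), 1.0 / (2.0 * eps))
# the quasi-stationary model always dissolves sooner than the quasi-steady-state
# estimate, and the discrepancy grows with eps
```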
for the diffusion - controlled quasi - stationary process , a mathematically complete set of the exact analytical solutionsis obtained in implicit forms for consideration of both dissolution of a particle in a solvent and growth of it by precipitation in a supersaturated environment .understanding the dissolution behavior of solid particles in liquid plays an important role in pharmaceutical dosage form development , and particle growth by precipitation in a supersaturated environment is relevant to the drug - polymer microsphere formation process as well as the observed phase separation process during solvent removal in drying of a coating with the drug - polymer mixture .the commonly used explicit formula based on the solution with quasi - steady - state approximation is shown to provide unsatisfactory accuracy unless the solubility is restricted to very small values ( corresponding to ) .therefore , accurate explicit formulas for the particle radius as a function of time are also constructed semi - empirically to extend the applicable range at least to for practical convenience .the author is indebted to professor richard laugesen of the university of illinois for his helpful discussions and skillful illustration of mathematical manipulations .the author also wants to thank yen - lane chen , scott fisher , ismail guler , cory hitzman , steve kangas , travis schauer , and maggie zeng of bsc for their consistent support .barocas , v. , drasler ii , w. , girton , t. , guler , i. , knapp , d. , moeller , j. , parsonage , e. , 2009 . a dissolution - diffusion model for the taxus drug - eluting stent with surface burst estimated from continuum percolation ._ j. biomed .b : appl . biomater . _ 90(1 ) , 267 - 274 .richard , r. , schwarz , m. , chan , k. , teigen , n. , boden , m. , 2009 .controlled delivery of paclitaxel from stent coatings using novel styrene maleic anhydride copolymer formulations ._ j. biomed .res . a _ 90(2 ) , 522 - 532 .wu , x .- s .preparation , characterization , and drug delivery applications of microspheres based on biodegradable lactic / glycolic acid polymers .in wise , d. l. , trantolo , d. j. , altobelli , d. e. , yaszemski , m. j. , gresser , j. d. , schwartz , e. r. ( eds ) , _ encyclopedic handbook of biomaterials and bioengineering part a : materials . _ marcel dekker , new york , vol .1151 - 1200 .
a consolidated mathematical formulation of the spherically symmetric mass - transfer problem is presented , with the quasi - stationary approximating equations derived from a perturbation point of view for the leading - order effect . for the diffusion - controlled quasi - stationary process , a mathematically complete set of the exact analytical solutions is obtained in implicit forms to cover the entire parameter range . furthermore , accurate explicit formulas for the particle radius as a function of time are also constructed semi - empirically for convenience in engineering practice . both dissolution of a particle in a solvent and its growth by precipitation in a supersaturated environment are considered in the present work .
since the advent of radar systems , much effort has been devoted to increasing radar range resolution . the relationship between range resolution and signal bandwidth is given by where denotes range resolution , is the speed of light and is the bandwidth of the signal being used . hence , wideband radar systems can achieve higher resolution than their narrow - band counterparts . however , wideband signals correspond to short pulses that experience low signal - to - noise ratio ( snr ) at the receiver . further , they require high - speed a / ds and fast processors . step - frequency radar ( sfr ) achieves high range resolution without sharing the disadvantages of wideband systems . sfr transmits several narrowband pulses at different frequencies . the frequency remains constant during each pulse but increases in steps of between consecutive pulses . thus , while its instantaneous bandwidth is narrow , the sfr system has a large effective bandwidth . conventional step - frequency radars obtain one sample from each received pulse and then apply an inverse discrete fourier transform ( idft ) on the phase detector output sequence for detection . since the idft resolution increases with the number of transmitted pulses , sfr requires a large number of transmit pulses , or equivalently , a large effective bandwidth . in this paper , we propose a step - frequency radar with compressive sampling that , assuming the existence of a small number of targets , exploits the sparseness of targets in the range - speed space . the application of compressive sampling to narrow - band radar systems was recently investigated in - . a cs - based data acquisition and imaging method was proposed in for stepped - frequency continuous - wave ground penetrating radars ( sfcw - gprs ) . in , cs - based step frequency was applied to through - the - wall radar imaging ( twri ) . in , the authors assumed stationary targets and showed that the cs approach can provide a high - quality radar image using much fewer data samples than conventional methods . unlike , , the work in this paper explores joint target range and speed estimation based on compressive sampling , and proposes a radar system with reduced effective bandwidth as compared to traditional sfr systems . compressive sampling ( cs ) - has received considerable attention recently , and has been applied successfully in diverse fields , e.g.
, image processing and wireless communications .the theory of cs states that a -sparse signal of length can be recovered exactly with high probability from measurements via -optimization .let denote the basis matrix that spans this sparse space , and let denote a measurement matrix .the convex optimization problem arising in cs is formulated as follows : where is a sparse vector with principal elements and the remaining elements can be ignored ; is an matrix with , that is incoherent with .it has been shown that two properties govern the design of a stable measurement matrix : _ restricted isometry property _ and _ incoherence property _ .a -sparse signal of length- can be recovered from samples provided and satisfies where is an arbitrary -sparse signal and .this property is referred to as _ restricted isometry property _ ( rip ) .the _ incoherence _ property suggests that the rows of should be incoherent with the columns of .let us consider an sfr system that transmits pulses and waits for echoes of all pulses to return before it starts any processing .the frequency of pulse equals where is the starting frequency and .the transmit pulse is of the form .the signal reflected by a target at distance moving with speed is where is the speed of light , is the pulse repetition interval ( pri ) .here we assume that is small enough to be considered constant within the pulse interval . as we assume each pulse to be narrowband, we can ignore the delay of in the signal envelope , and just consider the phase shift for extracting the information on targets .therefore we have and the output of phase detector is of the form the exponent of ( [ phase ] ) can be written as the first term in ( [ phase_decomp ] ) represents a constant phase shift due to the starting frequency , while the second term represents the phase shift due to frequency offset of the pulse .the maximum unambiguous range and range resolution for step - frequency radar are given by and respectively .here is the total effective bandwidth of the signal over pulses .targets which are at distance will be seen by the system to be at distance . the third term of ( [ phase_decomp ] )gives the doppler frequency shift experienced by the signal due to the target speed .the fourth term of ( [ phase_decomp ] ) represents the frequency spread due to target speed .this has the effect of spreading the energy of the main lobe at the target position .let us take the transmitter and receiver to be co - located and employ pulses for estimating the range and speed of targets .let us discretize the range space as ] .the whole target scene can be described using grid points in the range - speed plane .the range and speed spaces discretization steps are and , respectively .we assume that the targets can be present only on the grid points . by representing the target scene as a matrix of size , equation ( [ phase ] ) becomes where and represents zero - mean white noise . putting the outputs of phase of the phase detector , i.e. , in vector , we get where ^t$ ] , represents white zero - mean measurement noise , and the elements of matrix equal for , .we can think of the basis matrix as being a stack of column vectors , i.e. 
, where each is of size containing the phase detector outputs for all pulses corresponding to the phase shift due to a target located at the grid point .thus , accounts for the phase shift of all possible combinations of range and speed .taking the measurement matrix to be an identity matrix yields .based on ( [ y ] ) we can recover by applying the dantzig selector to the convex problem ( ) according to , the sparse vector can be recovered with very high probability if , where is a positive scalar , is the maximum norm of columns in the sensing matrix and is the variance of the noise in ( [ matrixeqn ] ) . a lower bound is readily available , i.e. , .also , should not be too large because in that case the trivial solution is obtained .thus , we may set .in conventional sfr systems , the idft algorithm requires the columns of the transform matrix to be orthogonal .the range resolution in space depends on the frequency resolution in the fourier domain .therefore , these systems require pulses in order to have a range resolution of . for the proposed approach we can use pulses and still achieve a range resolution of . for moving targets , the conventional idft method for estimating range and speed observes a shift in the target positions ( due to speed ) and a spreading effect around the shifted position ( due to the fourth term in equation ( [ phase_decomp ] ) ) .these effects degrade the receiver performance causing erroneous range estimation and sometimes missing the target completely . in the proposed approach , since has columns corresponding to all the possible range - speed combinations , the estimated results are comparatively more accurate .* stationary targets - * our simulations use the following parameter values : , , number of grid points .these values of give , and .we assume that the stationary point targets are present on the grid points . iterations of sparse target vectors are generated and the _ estimation accuracy _ is computed as the ratio of number of iterations for which the target ranges are correctly estimated to the total number of iterations .the basis matrix of size is generated according to equation ( [ matrix ] ) .the measurement matrix is an identity matrix of size .the optimization algorithm used to solve equation ( [ dantzig ] ) was obtained from .the number of pulses , , controls the column correlation of the measurement matrix for a given value of ( number of grid points ) . lowering the correlation between adjacent columns of increases the isolation among the columns , which results in better range estimation . for the measurement matrix generated by using equation ( [ matrix ] ) ,the adjacent column cross - correlation equals figure [ 100_5 ] shows the effect of changing the column correlation on the estimation accuracy of the cs sensing matrix when the unambiguous range is divided into grid points .figure [ acc_snr ] shows the accuracy of the cs detector in the presence of noise at different snr values for the case in which only targets are within the detectable range .the noise signal added at the received signal was gaussian zero - mean with variance , where the variance changes with snr .figure [ acc_comp ] compares the performance of the cs detector with the conventional idft detector for for a target scene containing stationary targets . 
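to make the construction above concrete , the following sketch builds a small range - speed basis matrix and recovers a sparse target vector . each column contains , for pulse index n , the phase detector response of a hypothetical target at one grid point , assuming the standard sfr phase model exp ( -j 4\pi ( f_0 + n \delta f ) ( d + v n t ) / c ) , which is consistent with the four - term phase decomposition discussed earlier . all parameter values ( carrier , step frequency , pri , grid sizes , target positions ) are illustrative and are not the values used in the simulations of this paper , and orthogonal matching pursuit is used in place of the dantzig selector purely for brevity .

```python
import numpy as np

# illustrative parameters (not the values used in the paper's simulations)
c = 3e8
f0, df, T = 10e9, 1e6, 1e-4                    # carrier, frequency step, pulse repetition interval
N, P, Q = 30, 20, 5                            # number of pulses, range bins, speed bins
ranges = np.arange(P) * c / (2 * P * df)       # grid over the unambiguous range c/(2*df)
speeds = np.linspace(-15.0, 15.0, Q)

# basis matrix: one column per (range, speed) grid point, rows indexed by pulse n
n = np.arange(N)[:, None]
Psi = np.hstack([np.exp(-1j * 4 * np.pi * (f0 + n * df) * (d + v * n * T) / c)
                 for d in ranges for v in speeds])

# adjacent-column cross-correlation, which governs how well neighboring grid
# points can be separated (cf. the discussion of column correlation above)
coherence = np.abs(np.vdot(Psi[:, 0], Psi[:, 1])) / N

def omp(y, A, k):
    """plain orthogonal matching pursuit returning a k-sparse coefficient vector."""
    residual, support = y.astype(complex), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# two hypothetical on-grid targets observed in noise, then recovered
x_true = np.zeros(P * Q, dtype=complex)
x_true[[7 * Q + 2, 13 * Q + 4]] = 1.0
rng = np.random.default_rng(0)
y = Psi @ x_true + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
x_hat = omp(y, Psi, k=2)
```

for well - separated grid points and moderate noise , the two largest entries of x_hat typically coincide with the planted targets , illustrating the joint range - speed recovery described above .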
as can be seen , the cs detector performs better than the idft detector for all snrs . figures [ acc_snr ] and [ acc_comp ] show that we can use pulses to obtain , provided that . this demonstrates that we can use lower bandwidths in cs compared to conventional idft techniques and still accurately estimate the target parameters , when the targets are sparsely present in the range - speed space . * moving targets - * the number of grid points in the range domain is taken to be . grid points were used for discretizing the speed axis . the carrier frequency is and the step frequency . the ranges and speeds of targets are generated randomly in each of monte carlo runs . out of targets , two targets are placed on adjacent grid points of the range - speed space in each run . the accuracy of the detector is computed as the ratio of the number of runs for which all target ranges and speeds have been estimated accurately to the total number of runs . in fig . [ n_100_different_snr ] , we show the detection accuracy of the cs and idft methods for different values of snr . for moving targets , the idft method requires speed compensation before performing the idft . since the target speed is unknown , we compensate the received signal for all possible speeds and choose the one with the highest and sharpest idft output . as can be seen , the proposed cs approach significantly improves the detection accuracy as compared to the idft method . the advantage of the cs approach is more obvious at low snr . figure [ snr_15_different_pulses ] compares the detection accuracy of the cs and the idft methods for different numbers of pulses for . we can easily see that the proposed method requires far fewer pulses than the idft method to achieve the same accuracy level . for example , the cs approach requires pulses to achieve a detection accuracy of , while the idft method needs about pulses in this particular case considered in our simulations . one trade - off that is not apparent from the simulations is computation time . convex optimization techniques have a much higher computation cost compared to the idft . the basis pursuit ( bp ) algorithm used in our simulations has a computation complexity of . thus the processing speed of the receiver system may put a limit on the number of grid points that can be used and hence on the range resolution . however , there are other algorithms , such as orthogonal matching pursuit ( omp ) , which have computation complexities of . a decoupled range - speed estimation approach along the lines of can also be employed here to reduce complexity . we have proposed a cs - based sfr system for joint range - speed estimation . our simulation results have shown that the proposed cs approach can achieve high resolution while employing lower effective bandwidth than traditional sfr systems . unlike the idft method , the proposed approach does not suffer from range shift and range spreading around the shifted positions caused by the movement of targets .
this paper proposes a novel radar system , namely step - frequency with compressive sampling ( sfr - cs ) , that achieves high target range and speed resolution using significantly smaller bandwidth than traditional step - frequency radar . this bandwidth reduction is accomplished by employing compressive sampling ideas and exploiting the sparseness of targets in the range - speed space .
( fiwi ) access networks , also referred to as wireless - optical broadband access networks ( wobans ) , combine the reliability , robustness , and high capacity of optical fiber networks and the flexibility , ubiquity , and cost savings of wireless networks . to deliver peak datarates up to 200 mb / s per user and realize the vision of complete fixed - mobile convergence , it is crucial to replace today s legacy wireline and microwave backhaul technologies with integrated fiwi broadband access networks .significant progress has been made on the design of advanced fiwi network architectures as well as access techniques and routing protocols / algorithms over the last few years . among others ,the beneficial impact of advanced hierarchical frame aggregation techniques on the end - to - end throughput - delay performance of an integrated ethernet passive optical network ( epon)/wireless mesh network ( wmn)-based fiwi network was demonstrated by means of simulation and experiment for voice , video , and data traffic .a linear programming based routing algorithm was proposed in with the objective of maximizing the throughput of a fiwi network based on a cascaded epon and single - radio single - channel wmn .extensive simulations were conducted to study the throughput gain in fiwi networks under peer - to - peer traffic among wireless mesh clients and compare the achievable throughput gain with conventional wmns without any optical backhaul .the presented simulation results show that fiwi and conventional wmn networks achieve the same throughput when all traffic is destined to the internet , i.e. , no peer - to - peer traffic , since the interference in the wireless front - end is the major bandwidth bottleneck .however , with increasing peer - to - peer traffic the interferences in the wireless mesh front - end increase and the throughput of wmns decreases significantly , as opposed to their fiwi counterpart whose network throughput decreases to a much lesser extent for increasing peer - to - peer traffic . the design of routing algorithms for the wireless front - end only or for both the wireless and optical domains of fiwi access networks has received a great deal of attention , resulting in a large number of wireless , integrated optical - wireless , multipath , and energy - aware routing algorithms .important examples of wireless routing algorithms for fiwi access networks are the so - called delay - aware routing algorithm ( dara ) , delay - differentiated routing algorithm ( ddra ) , capacity and delay aware routing ( cadar ) , and risk - and - delay aware routing ( radar ) algorithm .recently proposed integrated routing algorithms for path computation across the optical - wireless interface include the so - called availability - aware routing , multipath routing , and energy - aware routing algorithms .most of these previous studies formulated routing in fiwi access networks as an optimization problem and obtained results mainly by means of simulation . in this paper , we present to the best of our knowledge the first analytical framework that allows to evaluate the capacity and delay performance of a wide range of fiwi network routing algorithms and provides important design guidelines for novel fiwi network routing algorithms that leverage the different unique characteristics of disparate optical fiber and wireless technologies .although a few fiwi architectural studies exist on the integration of epon with long - term evolution ( lte ) ( e.g. 
, ) or worldwide interoperability for microwave access ( wimax ) wireless front - end networks ( e.g. , ) , the vast majority of studies , including but not limited to those mentioned in the above paragraph , considered fiwi access networks consisting of a conventional ieee 802.3ah epon fiber backhaul network and an ieee 802.11b / g wireless local area network ( wlan)-based wireless mesh front - end network .our framework encompasses not only legacy epon and wlan networks , but also emerging next - generation optical and wireless technologies , such as long - reach and multi - stage 10 + gb / s time and/or wavelength division multiplexing ( tdm / wdm ) pons as well as gigabit - class very high throughput ( vht ) wlan .our contributions are threefold .first , we develop a unified analytical framework that comprehensively accounts for both optical and wireless broadband access networking technologies .we note that recent studies focused either on tdm / wdm pons only , e.g. , , or on wlans only , e.g. , .however , there is a need for a comprehensive analytical framework that gives insights into the performance of bimodal fiwi access networks built from disparate yet complementary optical and wireless technologies . toward this end ,our framework is flexibly designed such that it not only takes the capacity mismatch and bit error rate differences between optical and wireless networks into account , but also includes possible fiber cuts of optical ( wired ) infrastructures .second , our analysis emphasizes future and emerging next - generation pon and wlan technologies , as opposed to many previous studies that assumed state - of - the - art solutions , e.g. , conventional ieee 802.11a wlan without frame aggregation .our analytical approach in part builds on previous studies and includes significant original analysis components to achieve accurate throughput - delay modeling and cover the scope of fiwi networks .specifically , we build on analytical models of the distributed coordination function in wlans , e.g. , , and wlan frame aggregation , e.g. , .we develop an accurate delay model for multihop wireless front - ends under nonsaturated and stable conditions for traffic loads from both optical and wireless network nodes , as detailed in section [ sec : wireless ] .third , we verify our analysis by means of simulations and present extensive numerical results to shed some light on the interplay between different next - generation optical and wireless access networking technologies and configurations for a variety of traffic scenarios .we propose an _ optimized fiwi routing algorithm ( ofra ) _ based on our developed analytical framework .the obtained results show that ofra outperforms previously proposed routing algorithms , such as dara , cadar , and radar .they also illustrate that it is key to carefully select appropriate paths across the fiber backhaul in order to minimize link traffic intensities and thus help stabilize the entire fiwi access network . to our best knowledge, the presented unified analytical framework is the first to allow capacity and delay evaluations of a wide range of fiwi network routing algorithms , both previously proposed and new ones .our analytical framework covers not only legacy epon and wlan , but also next - generation high - speed long - reach wdm pon and emerging gigabit - class vht wlan technologies .the remainder of the paper is structured as follows . in section[ sec : related ] , we discuss related work and recent progress on fiwi access networks . 
section [ sec : fiwi ] describes fiwi access networks based on next - generation pon and gigabit - class wlan technologies in greater detail .section [ sec : model ] outlines our network model as well as traffic and routing assumptions .the capacity and delay of the constituent fiber backhaul and wireless front - end networks are analyzed in sections [ sec : fiber ] and [ sec : wireless ] , respectively , while the stability and end - to - end delay of the entire fiwi access network are evaluated in section [ sec : end - to - end ] .section [ sec : results ] presents numerical and verifying simulation results .section [ sec : conclusions ] concludes the paper .the recent survey of hybrid optical - wireless access networks explains the key underlying photonic and wireless access technologies and describes important fiwi access network architectures .energy - efficient fiwi network architectures as well as energy - efficient medium access control ( mac ) and routing protocols were reviewed in .recent efforts on energy - efficient routing in fiwi access networks focused on routing algorithms for cloud - integrated fiwi networks that offload the wireless mesh front - end and the optical - wireless gateways by placing cloud components , such as storage and servers , closer to mobile end - users , while at the same time maintaining low average packet delays .a delay - based admission control scheme for providing guaranteed quality - of - service ( qos ) in fiwi networks that deploy epon as backhaul for connecting multiple wimax base stations was studied in .a promising approach to increase throughput , decrease delay , and achieve better load balancing and resilience is the use of multipath routing schemes in the wireless mesh front - end of fiwi networks .however , due to different delays along multiple paths , packets may arrive at the destination out of order , which deteriorates the performance of the transmission control protocol ( tcp ) .a centralized scheduling algorithm at the optical line terminal ( olt ) of an epon that resequences the in - transit packets of each flow to ensure in - order packet arrivals at the corresponding destination was examined in .in addition , studied a dynamic bandwidth allocation ( dba ) algorithm that prioritizes flows that may trigger tcp s fast retransmit and fast recovery , thereby further improving tcp performance .given the increasing traffic amounts on fiwi networks , their survivability has become increasingly important .cost - effective protection schemes against link and node failures in the optical part of fiwi networks have been proposed and optimized in .the survivability of fiwi networks based on multi - stage pons , taking not only partial optical protection but also protection through a wireless mesh network into account , was probabilistically analyzed in .deployment of both back - up fibers and radios was examined in .recent research efforts have focused on the integration of performance - enhancing network coding techniques to increase the throughput and decrease the delay of fiwi access networks for unicast and multicast traffic .most previous fiwi access network studies considered a cascaded architecture consisting of a single - stage pon and a multihop wmn , as shown in fig .[ fig : fig1 ] . 
typically , the pon is a conventional ieee 802.3ah compliant wavelength - broadcasting tdm epon based on a wavelength splitter / combiner at the remote node ( rn ) , using one time - shared wavelength channel for upstream ( onus to olt ) transmissions and another time - shared wavelength channel for downstream ( olt to onus ) transmissions , both operating at a data rate of 1 gb / s .a subset of onus may be located at the premises of residential or business subscribers , whereby each onu provides fiber - to - the - home / business ( ftth / b ) services to a single or multiple attached wired subscribers .some onus have a mesh portal point ( mpp ) to interface with the wmn .the wmn consists of mesh access points ( maps ) that provide wireless fiwi network access to stations ( stas ) .mesh points ( mps ) relay the traffic between mpps and maps through wireless transmissions .most previous fiwi studies assumed a wmn based on ieee 802.11a / b / g wlan technologies , offering a maximum raw data rate of 54 mb / s at the physical layer .future fiwi access networks will leverage next - generation pon and wlan technologies to meet the ever increasing bandwidth requirements . a variety of next - generation pon technologiesare currently investigated to enable short - term evolutionary and long - term revolutionary upgrades of coexistent gigabit - class tdm pons .promising solutions for pon evolution toward higher bandwidth per user are ( ) data rate upgrades to 10 gb / s and higher , and ( ) multi - wavelength channel migration toward wavelength - routing or wavelength - broadcasting wdm pons with or without cascaded tdm pons .similarly , to alleviate the bandwidth bottleneck of the wireless mesh front - end , future fiwi networks are expected to be based on next - generation ieee 802.11n wlans , which offer data rates of 100 mb / s or higher at the mac service access point , as well as emerging ieee 802.11ac vht wlan technologies that achieve raw data rates up to 6900 mb / s . as shown in fig .[ fig : fig2 ] , current tdm pons may evolve into next - generation single- or multi - stage pons of extended reach by exploiting high - speed tdm and/or multichannel wdm technologies and replacing the splitter / combiner at the rn with a wavelength multiplexer / demultiplexer , giving rise to the following three types of next - generation pons : fig . [fig : fig2](a ) depicts a high - speed tdm pon , which maintains the network architecture of conventional tdm pons except that both the time - shared upstream wavelength channel and downstream wavelength channel and attached olt and tdm onus operate at data rates of 10 gb / s or higher .a wavelength - broadcasting wdm pon has a splitter / combiner at the rn and deploys multiple wavelength channels , as shown in fig .[ fig : fig2](b ) .each of these wavelength channels is broadcast to all connected wdm onus and is used for bidirectional transmission .each wdm onu selects a wavelength with a tunable bandpass filter ( e.g. , fiber bragg grating ) and reuses the downstream modulated signal coming from the olt for upstream data transmission by means of remodulation techniques , e.g. , fsk for downstream and ook for upstream .[ fig : fig2](c ) shows a wavelength - routing wdm pon , where the conventional splitter / combiner at the rn is replaced with a wavelength multiplexer / demultiplexer , e.g. , arrayed - waveguide grating ( awg ) , such that each of the wavelength channels on the common feeder fiber is routed to a different distribution fiber . 
a given wavelength channel may be dedicated to a single onu ( e.g. , business subscriber ) or be time shared by multiple onus ( e.g. , residential subscribers ) . in the latter case , the distribution fibers contain one or more additional stages , whereby each stage consists of a wavelength - broadcasting splitter / combiner and each wavelength channel serves a different sector , see fig .[ fig : fig2](c ) . note that due to the wavelength - routing characteristics of the wavelength multiplexer / demultiplexer , onus can be made colorless ( i.e. , wavelength - independent ) by using , for example , low - cost reflective semiconductor optical amplifiers ( rsoas ) that are suitable for bidirectional transmission via remodulation .wavelength - routing multi - stage wdm pons enable next - generation pons with an extended optical range of up to 100 km , thus giving rise to _ long - reach wdm pons _ at the expense of additional in - line optical amplifiers .long - reach wdm pons promise major cost savings by consolidating optical access and metropolitan area networks .ieee 802.11n specifies a number of phy and mac enhancements for next - generation wlans .applying orthogonal frequency division multiplexing ( ofdm ) and multiple - input multiple - output ( mimo ) antennas in the phy layer of ieee 802.11n provides various capabilities , such as antenna diversity ( selection ) and spatial multiplexing .using multiple antennas also provides multipath capability and increases both throughput and transmission range .the enhanced phy layer applies two adaptive coding schemes : space time block coding ( stbc ) and low density parity check coding ( ldpc ) .ieee 802.11n wlans are able to co - exist with ieee 802.11 legacy wlans , though in greenfield deployments it is possible to increase the channel bandwidth from 20 mhz to 40 mhz via channel bonding , resulting in significantly increased raw data rates of up to 600 mb / s at the phy layer .a main mac enhancement of 802.11n is frame aggregation , which comes in two flavors , as shown in fig .[ fig : fig3 ] .* aggregate mac service data unit ( a - msdu ) : * multiple msdus , each up to 2304 octets long , are joined and encapsulated into a separate subframe , see fig .[ fig:3a ] .specifically , multiple msdus are packed into an a - msdu , which is encapsulated into a phy service data unit ( psdu ) .all constituent msdus must have the same traffic identifier ( tid ) value ( i.e. , same qos level ) and the resultant a - msdu must not exceed the maximum size of 7935 octets . each psdu is prepended with a phy preamble and phy header .although the fragmentation of msdus with the same destination address is allowed , a - msdus must not be fragmented .* aggregate mac protocol data unit ( a - mpdu ) : * multiple mpdus , each up to 4095 octets long , are joined and inserted in a separate subframe , see fig .[ fig:3b ] .specifically , multiple mpdus are aggregated into one psdu of a maximum size 65535 octets .aggregation of multiple mpdus with different tid values into one psdu is allowed by using multi - tid block acknowledgment ( mtba ) .both a - msdu and a - mpdu require only a single phy preamble and phy header . 
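as a quick illustration of the a - msdu size constraints just described , the following sketch estimates how many fixed - size msdus fit into a single a - msdu . the 14 - octet subframe header and 4 - octet alignment padding are the usual 802.11n subframe overheads and are stated here as assumptions rather than taken from the text .

```python
def msdus_per_amsdu(msdu_octets, amsdu_max=7935, subheader=14, align=4):
    """count fixed-size msdus that fit into one a-msdu.

    amsdu_max: 7935 octets for 802.11n (larger for 802.11ac);
    subheader/align: assumed per-subframe header and padding overhead,
    with no padding after the last subframe.
    """
    per_subframe = subheader + msdu_octets
    padded = -(-per_subframe // align) * align   # round up to the alignment boundary
    n = 0
    while n * padded + per_subframe <= amsdu_max:
        n += 1
    return n

# example: 1500-octet frames
print(msdus_per_amsdu(1500))
```

for 1500 - octet frames this gives five msdus per aggregate , which is consistent with the maximum permissible payload of 5 frames per a - msdu assumed in the numerical examples later in the paper .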
in a - msdu , the psdu includes a single mac header and frame check sequence ( fcs ) , as opposed to a - mpdu where each mpdu contains its own mac header and fcs .a - mpdu and a - msdu can be used separately or jointly .future gigabit - class wmns may be upgraded with emerging ieee 802.11ac vht wlan technologies that exploit further phy enhancements to achieve raw data rates up to 6900 mb / s and provide an increased maximum a - msdu / a - mpdu size of 11406/1048575 octets .we consider a pon consisting of one olt and attached onus .the tdm pon carries one upstream wavelength channel and a separate downstream wavelength channel .we suppose that both the wavelength - broadcasting and the wavelength - routing multi - stage wdm pons carry bidirectional wavelength channels . in the wavelength - routing multi - stage wdm pon , the onusare divided into sectors .we use to index the wavelength channel as well as the corresponding sector . in our model ,sector , accommodates onus .specifically , onus with indices between and belong to sector , i.e. , form the set of nodes thus , sector comprises onus , sector comprises onus , and so on , while we assign the index to the olt .the one - way propagation delay between olt and onus of sector is ( in seconds ) and the data rate of the associated wavelength channel is denoted by ( in bit / s ) .hence , each sector of the wavelength - routing multi - stage wdm pon is allowed to operate at a different data rate serving a subset of onus located at a different distance from the olt ( e.g. , business vs. residential service areas ) . for ease of exposition, we assume that in the wavelength - broadcasting tdm and wdm pons all wavelength channels operate at the same data rate ( in bit / s ) and that all onus have the one - way propagation delay ( in seconds ) from the olt .all or a subset of the onus are equipped with an mpp to interface with the wmn .the wmn is composed of different zones , whereby each zone operates on a distinct frequency such that the frequencies of neighboring zones do not overlap .frequencies may be spatially reused in nonadjacent zones .a subset of mps are assumed to be equipped with multiple radios to enable them to send and receive data in more than one zone and thereby serve as relay nodes between adjacent zones .we denote each radio operating in a given relay mp in a given zone by a unique .the remaining mps as well as all mpps , maps , and stas are assumed to have only a single radio operating on the frequency of their corresponding zone .all wireless nodes are assumed to be stationary ; incorporating mobility is left for future research . adopting the notation proposed in ,we let denote the set of multi - radio relay mps and denote the set of single - radio mps , mpps , maps , and stas in zone .note that set is empty if there are only single - radio mps in zone .note that due to this set definition each multi - radio mp is designated by multiple ; one and corresponding set for each zone in which it can send and receive .the wmn operates at a data rate ( in bit / s ) . in the wmn, we assume that the bit error rate ( ber ) of the wireless channel is . on the contrary, the ber of the pon is assumed to be negligible and is therefore set to zero .however , individual fiber links may fail due to fiber cuts and become unavailable for routing traffic across the pon , as described next in more detail . throughout, we neglect nodal processing delays . we denote for the set of fiwi network nodes that act as traffic sources and destinations . 
specifically , we consider to contain the olt , the onus ( whereby a given onu models the set of end users with wired access to the onu ) , and a given number of stas . in our model ,mpps , mps , and maps forward in - transit traffic , without generating their own traffic .hence , the number of traffic sources / destinations is given by .furthermore , we define the traffic matrix , where represents the number of frames per second that are generated at fiwi network node and destined to fiwi network node ( note that for ) . we allow for any arbitrary distribution of the frame length ( in bit ) and denote and for the mean and variance of the length of a frame , respectively .the traffic generation is assumed to be ergodic and stationary . our capacity and delay analysis flexibly accommodates any routing algorithm .for each pair of fiwi network source node and destination node , a particular considered routing algorithm results in a specific traffic rate ( in frames / s ) sent in the fiber domain and traffic rate sent in the wireless domain .a conventional onu without an additional mpp can not send in the wireless domain , i.e. , , and sends its entire generated traffic to the olt , i.e. , . on the other hand, an onu equipped with an mpp can send in the wireless domain , i.e. , .note that we allow for multipath routing in both the fiber and wireless domains , whereby traffic coming from or going to the olt may be sent across a single or multiple onus and their collocated mpps .we consider throughout first - come - first - served service in each network node .for the wavelength - routing multi - stage wdm pon , we define the normalized downstream traffic rate ( intensity ) in sector , as where the first term represents the traffic generated by the olt for sector and the second term accounts for the traffic from all onus sent to sector via the olt .we define the upstream traffic rate ( in frames / s ) of onu as where the first term denotes traffic destined to the olt and the second term represents the traffic sent to other onus via the olt .the normalized upstream traffic rate ( intensity ) of sector is for stability , the normalized downstream and upstream traffic rates have to satisfy in each sector , of the wavelength - routing multi - stage wdm pon . in the wavelength - broadcasting tdm pon ( ) and wdm pon ( ) ,we define the upstream traffic intensity and downstream traffic intensity as : the tdm and wdm pons are stable if and .the delay analysis of section [ delay_ana : sec ] applies only for a stable network , which can be ensured through admission control techniques . in the wavelength - routing multi - stage wdm pon, the olt sends a downstream frame to an onu in sector by transmitting the frame on wavelength , which is received by all onus in the sector .we model all downstream transmissions in sector to emanate from a single queue . 
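before turning to the queueing model , the per - sector traffic intensities and the associated stability check can be summarized in a short sketch . the normalization by the mean frame length and the channel rate , as well as the example values , are illustrative assumptions ; only the fiber - routed portion of the traffic matrix enters the computation , in line with the definitions above .

```python
import numpy as np

def sector_intensities(lam_fiber, sectors, mean_frame_bits, channel_rate_bps):
    """illustrative per-sector up/downstream traffic intensities.

    lam_fiber: (S+1)x(S+1) matrix of fiber-routed frame rates between optical
               nodes, index 0 = olt, indices 1..S = onus;
    sectors:   list of sets of onu indices, one set per wavelength channel.
    """
    num_nodes = lam_fiber.shape[0]
    rho_d, rho_u = [], []
    for members in sectors:
        down = sum(lam_fiber[0, j] for j in members)                 # olt -> sector
        down += sum(lam_fiber[i, j] for i in range(1, num_nodes)
                    for j in members if i != j)                      # onu -> sector via olt
        up = sum(lam_fiber[i, 0] for i in members)                   # sector -> olt
        up += sum(lam_fiber[i, j] for i in members
                  for j in range(1, num_nodes) if j != i)            # sector -> other onus via olt
        rho_d.append(down * mean_frame_bits / channel_rate_bps)
        rho_u.append(up * mean_frame_bits / channel_rate_bps)
    return np.array(rho_d), np.array(rho_u)

# hypothetical example: 4 onus in 2 sectors, 1000 frames/s between every node pair
lam = np.full((5, 5), 1000.0)
np.fill_diagonal(lam, 0.0)
rho_d, rho_u = sector_intensities(lam, [{1, 2}, {3, 4}], 1500 * 8, 1e9)
stable = bool(np.all(rho_d < 1.0) and np.all(rho_u < 1.0))
```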
for poisson frame traffic ,the downstream queueing delay is thus modeled by an m / g/1 queue characterized by the pollaczek - khintchine formula giving the total downstream frame delay weighing the downstream delays in the sectors by the relative downstream traffic intensities in the sectors , gives the average downstream delay of the wavelength - routing multi - stage wdm pon for the upstream delay , we model each wavelength channel , as a single upstream wavelength channel of a conventional epon .accordingly , from eq .( 39 ) in , we obtain for the mean upstream delay of sector and the average upstream delay of the wavelength - routing multi - stage wdm pon equals to improve the accuracy of our delay analysis , we take into account that traffic coming from an onu in sector and destined to onu in sector is queued at the intermediate olt before being sent downstream to onu , i.e. , the olt acts like an insertion buffer between onus and .consequently , to compensate for the queueing delay at the olt we apply the method proposed in by subtracting the correction term whereby for the setting that for all channels denotes the rate of upstream traffic in sector destined for sector , from the above calculated mean downstream delay .thus , for sector , the corrected mean downstream delay is given by by replacing with in eq .( [ eq : wdm_pon_dsdelay ] ) we obtain a more accurate calculation of the average downstream delay for the wavelength - routing multi - stage wdm pon , as examined in section [ sec : results ] .next , we evaluate the average downstream and upstream delays for the wavelength - broadcasting tdm pon ( ) and wdm pon ( ) . with the aforementioned correction termthe average downstream and upstream delays are given by and respectively , whereby far , we have analyzed only the optical fiber backhaul of the fiwi network .next , we focus on the wireless front - end . in the following ,we derive multiple relations between known parameter values and unknown variables .afterwards , we outline how to obtain the unknowns numerically .more specifically , in sections [ ftm : sec][sec : durframetra ] we build on and adapt existing models of distributed coordination and frame aggregation in wlans to formulate the basic frame aggregate transmission and collision probabilities as well as time slot duration in the distributed access system .we note that these existing models have primarily focused on accurately representing the collision probabilities and system throughput ; we found that directly adapting these existing models gives delay characterizations that are reasonably accurate only for specific scenarios , such as single - hop networking , but are very coarse for multi - hop networking . in sections [ sertime_fa :sec][dwmnp : sec ] we develop a general multihop delay model that is simple , yet accurate by considering the complete service time of a frame aggregate in the wireless front - end network carrying traffic streams from and to both wireless and optical network nodes . as defined in section [ sec : netarch ] , we denote the radio operating in a given sta or onu equipped with an mpp by a unique .moreover , we denote each radio operating in a given relay mp in a unique zone by a unique . for ease of exposition, we refer to `` radio '' henceforth as `` node . 
'' similar to , we model time as being slotted and denote for the mean duration of a time slot at node . the mean time slot duration corresponds to the average time period required for a successful frame transmission , a collided frame transmission , or an idle waiting slot at node and is evaluated in section [ sec : durframetra ] . we let denote the probability that there is a frame waiting for transmission at node in a time slot . for an sta or onu with collocated mpp , we denote for the traffic load that emanates from node , i.e. , for a relay mp we obtain for a given wireless mesh routing algorithm the frame arrival rate for each of the radios of the mp associated with a different zone : whereby and denote any pair of sta or onu with collocated mpp that send traffic on a path via relay mp , as computed by the given routing algorithm for the wireless mesh front - end of the fiwi network . for exponentially distributed inter - frame arrival times with mean ( which occur for a poisson process with rate ) , is related to the offered frame load at node during mean time slot duration via . in this section , we first characterize the sizes of the frame aggregates and then the frame aggregate error probability . for a prescribed distribution of the size ( in bit ) of a single frame , e.g. , the typical trimodal ip packet size distribution , the distribution of the size ( in bit ) of a transmitted a - msdu or a - mpdu can be obtained as the convolution of with itself , i.e. , the number of required convolutions equals the number of frames carried in the aggregate , which in turn depends on the minimum frame size , including the mac - layer overhead of the corresponding frame aggregation scheme , and the maximum size of an a - msdu / a - mpdu ( see fig . [ fig : fig3 ] ) . from the distribution we obtain the average frame aggregate sizes . correspondingly , we divide the traffic rate ( in frames / s ) by the average number of frames in an aggregate to obtain the traffic rate in frame aggregates per second . moreover , as ground work for section [ sec : durframetra ] , we obtain the average size of the longest a - msdu / a - mpdu involved in a collision , with the simplifying assumption of neglecting the collision probability of more than two packets , as e[\text{a - msdu / a - mpdu}^ * ] = \int_0^{a_{\max}^{\text{a - msdu / a - mpdu}}}\left(1-a(x)^2\right ) \ , dx , where a(x ) denotes the cumulative distribution of the aggregate size . the probability of an erroneously transmitted frame aggregate , referred to henceforth as `` transmission error '' , can be evaluated in terms of the bit error probability and the size of a transmitted a - msdu ( with distribution ) via eq . ( 16 ) of ; for a - mpdu , it can be evaluated in terms of and the sizes of the aggregated frames via eq . ( 18 ) of . in particular , where is the size of a transmitted a - msdu ( with distribution ) , index runs from one to the total number of aggregated frames , and is the size of the frame in a transmitted a - mpdu . following , we note that the transmission of any transmitting node in zone cannot collide if none of the other nodes transmits , i.e.
, we obtain the collision probability as where denotes the transmission probability of wmn node .note that if the considered node is a relay mp , eq .( [ eq : p_c ] ) holds for each associated zone ( and corresponding radio ) .we define the probability of either a collision or transmission error , in brief collision / transmission error probability , as the transmission probability for any node can be evaluated as a function of the frame waiting probability , the frame collision / transmission error probability , the minimum contention window , and the maximum backoff stage by ( * ? ? ?( 1 ) ) , as explained in . in particular , }-\frac{q_\omega^2(1-p_\omega ) } { 1-q_\omega}\right),\ ] ] with }\nonumber \\ & & \ \ \ + ( 1-q_\omega ) \nonumber \\ & & \ \ \+ \frac{q_\omega(w_0 + 1)[p_\omega(1-q_\omega)-q_\omega(1-p_\omega)^2 ] } { 2(1-q_\omega)}\nonumber \\ & & \ \ \ + \frac{p_\omega q_\omega^2}{2(1-q_\omega)(1-p_\omega ) } \left(\frac{w_0}{1-(1-q_\omega)^{w_0}}-(1-p_\omega)^2\right ) \nonumber \\ & & \ \ \ \ \ \ \ \ \ \ \ \\cdot \left ( \frac{2w_0 [ 1-p_\omega - p_\omega(2p_\omega)^{h-1 } ] } { 1 - 2p_\omega } + 1 \right),\end{aligned}\ ] ] where is node s minimum contention window , is the node s maximum window size , and is the maximum backoff stage .the probability that there is at least one transmission taking place in zone in a given time slot is given by a successful frame aggregate transmission occurs if exactly one node transmits ( and all other nodes are silent ) , given that there is a transmission , i.e. , we denote for the duration of an empty time slot without any data transmission on the wireless channel in zone , which occurs with probability . with probability there is a transmission in a given time slot in zone , which is successful with probability and unsuccessful ( resulting in a collision ) with the complementary probability .we denote for the mean duration of a successful frame aggregate transmission and is the mean duration of a frame aggregate transmission with collision in zone .note that and depend on the frame aggregation technique ( a - msdu or a - mpdu ) and on the access mechanism ( basic access denoted by or rts / cts denoted by ) .for the basic access mechanism , we define , where denotes the propagation delay and the wmn data rate . for the rts / cts access mechanism , we define .( note that in ieee 802.11n the parameters ack , rts , and cts as well as the phy / mac header and fcs below are given in bits , while the other parameters are given in seconds . ) then , for a successful frame aggregate transmission we have : + \text{fcs})/r & \text{for a - msdu}\\ \\ \theta_s^{\alpha } + e[\text{a - mpdu}]/r\ & \text{for a - mpdu}. \end{array}\right.\ ] ] moreover , with , for a collided frame aggregate transmission we have : + \text{fcs})/r & \text{for a - msdu},\\ \\ \theta_c^{\rm basic } + e[\text{a - mpdu}^*]/r\ & \text{for a - mpdu } \end{array}\right.\ ] ] as well as for both a - msdu and a - mpdu , thus , we obtain the expected time slot duration at node in zone of our network model ( corresponding to ( * ? ? ?( 13 ) ) ) as .\end{aligned}\ ] ] equations ( [ eq : q_omega ] ) , ( [ eq : p_omega ] ) , ( * ? ? 
?( 1 ) ) , and ( [ eq : e_omega ] ) can be solved numerically for the unknown variables , , , and for each given set of values for the known network model parameters .we use the obtained numerical solutions to evaluate the mean delay at node as analyzed in the following sections [ sertime_fa : sec ] and [ wmnnodedel : sec ] .we proceed to evaluate the expected service ( transmission ) time for a frame aggregate , which may require several transmission attempts , at a given node . with the basic access mechanism ,the transmission of the frame aggregate occurs without a collision ( ) or transmission error with probability ( [ eq : p_omega ] ) , requiring one . with probability ,the frame aggregate suffers , collisions or transmission errors , requiring backoff procedures and re - transmissions .thus , the expected service time for basic access is for the rts / cts access mechanism , collisions can occur only for the rts or cts frames ( which are short and have negligible probability of transmission errors ) , whereas transmission errors may occur for the frame aggregates .collisions require only retransmissions of the rts frame , whereas transmission errors require retransmissions of the entire frame aggregate .more specifically , only one frame transmission ( ) is required if no transmission error occurs ; this event has probability .this transmission without transmission error may involve , collisions of the rts / cts frames . on the other hand , two frame transmissions ( )are required if there is once a transmission error ; this event has probability .this scenario requires twice an rts / cts reservation , which each time may experience collisions , as well as two full frame transmission delays .generally , , frame transmissions are required if times there is a frame transmission error .each of the frame transmission attempts requires an rts / cts reservation and a full frame transmission delay . in summary, we evaluate the mean service delay for a frame aggregate with rts / cts access as .\end{aligned}\ ] ] we first evaluate the overall service time from the time instant when a frame aggregate arrives at the head of the queue at node to the completion of its successful transmission .subsequently , with characterizing the overall service time at node , we evaluate the queueing delay .the overall service time is given by the service time required for transmitting a frame aggregate and the sensing delay required for the reception of frame aggregates by node from other nodes , i.e. 
, as a first step towards modeling the sensing delay at a node , we consider the service times at nodes and scale these service times linearly with the corresponding traffic intensities to obtain the sensing delay component . as a second modeling step , we consider the service times plus sensing delay components scaled by the respective traffic intensities to obtain the sensing delay employed in the evaluation of the overall service delay ( [ delom_ovall : eqn ] ) . we approximate the queue at node by an m / m/1 queue with mean arrival rate and mean service time . this queue is stable if . the total delay ( for queueing plus service ) at node is then given by . if node is an onu with a collocated mpp , the accuracy of the queueing delay calculation is improved by subtracting a correction term : for the wavelength - broadcasting tdm pon and wdm pon , or for the wavelength - routing multi - stage wdm pon , whose sector accommodates the onu with collocated mpp . note that or accounts for the traffic of all pairs of source node and destination node traversing onu from the fiber backhaul towards the wireless front - end network . in order to obtain the delay in the wireless front - end of our fiwi network , we have to average the sums of the nodal delays of all possible paths for all pairs of source node and destination node : with the queueing delay correction terms , whereby is the traffic intensity at node due to traffic flowing from source node to destination node . the entire fiwi access network is stable if and only if all of its optical and wireless subnetworks are stable . if the optical backhaul consists of a wavelength - routing multi - stage wdm pon , the stability conditions in eq . ( [ eq : wr_1 ] ) must be satisfied . in the case of the wavelength - broadcasting tdm and wdm pon , the optical backhaul is stable if both and , defined in eqs . ( [ eq : rho_u ] ) and ( [ eq : rho_d ] ) , respectively , are smaller than one . the wireless mesh front - end is stable if the stability condition in eq . ( [ eq : rho_omega ] ) is satisfied for each wmn node . we obtain the mean end - to - end delay of the entire bimodal fiwi access network as . we set the parameters of the fiwi mesh front - end to the default values for next - generation wlans , see table [ tab:1 ] . we consider a distance of 1 km between any pair of adjacent wmn nodes ( which is well within the maximum distance of presently available outdoor wireless access points ) , translating into a propagation delay of s . table [ default ] summarizes the fiwi network parameters . we first verify the accuracy of our probabilistic analysis by means of simulations . in our initial verifying simulations , we consider the fiwi network configuration of fig . [ fig : fig4 ] . the fiber backhaul is a tdm pon , or a wavelength - broadcasting / routing wdm pon with bidirectional wavelength channels ( , ) , each operating at gb / s ( compliant with ieee 802.3ah ) . in the case of the wavelength - routing ( wr ) wdm pon , the two sectors are defined as and . all four onus are located 20 km from the olt ( translating into a one - way propagation delay of ms ) and are equipped with an mpp . the wmn is composed of the aforementioned 4 mpps plus 16 stas and 4 mps , which are distributed over 11 wireless zones , as shown in fig . [ fig : fig4 ] .
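before turning to the numerical results , the way the fiber and wireless delay components above combine into an end - to - end figure can be illustrated with a small sketch . the pollaczek - khintchine and m / m/1 expressions are the standard ones referred to in the analysis ; the service times , arrival rates , and the two - hop wireless path are hypothetical stand - ins for the full expressions derived above ( in particular , the wireless service time here abstracts away the sensing - delay and backoff terms ) .

```python
def mg1_delay(lam, es, es2, prop=0.0):
    """pollaczek-khintchine mean delay (waiting + service + one-way propagation)
    for a single downstream wavelength queue; es and es2 are the first two
    moments of the frame transmission time."""
    rho = lam * es
    assert rho < 1.0, "optical channel overloaded"
    return lam * es2 / (2.0 * (1.0 - rho)) + es + prop

def mm1_delay(lam, es):
    """m/m/1 approximation of the per-node delay (queueing plus service)
    in the wireless front-end."""
    rho = lam * es
    assert rho < 1.0, "wireless node overloaded"
    return es / (1.0 - rho)

# hypothetical downstream path: one fiber hop followed by two wireless hops
frame_bits = 1500 * 8
es_fiber = frame_bits / 1e9                      # deterministic service time at 1 gb/s
d_fiber = mg1_delay(lam=40e3, es=es_fiber, es2=es_fiber ** 2, prop=0.1e-3)
d_wireless = sum(mm1_delay(lam=2e3, es=0.1e-3) for _ in range(2))
d_end_to_end = d_fiber + d_wireless              # mean delay along this source-destination path
```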
for instance ,the wmn zone containing comprises 1 mpp , 2 stas , and 1 mp .mpps and stas use a single radio , whereas mps use 3 , 4 , 4 , 3 radios from left to right in fig .[ fig : fig4 ] .all wmn nodes apply the rts / cts access mechanism .the wmn operates at mb / s ( compliant with ieee 802.11n ) with a bit error rate of .we consider poisson traffic with fixed - size frames of 1500 bytes ( octets ) .we use a - msdu for frame aggregation , whereby each a - msdu carries the maximum permissible payload of 5 frames , see fig .[ fig : fig3](a ) . similar to , we consider two operation modes : ( ) _ wmn - only mode _ which has no fiber backhaul in place ; and wmn nodes apply minimum wireless hop routing ( ) _ wireless - optical - wireless mode _ which deploys the fiwi network configuration of fig .[ fig : fig4 ] . for both modes ,we consider the minimum interference routing algorithm , which selects the path with the minimum number of wireless hops .we compare different routing algorithms in section [ fiwiroutalg : sec ] .the simulation results presented in indicate that the throughput performance of wmns deteriorates much faster for increasing peer - to - peer traffic among stas than that of fiwi networks , while wmn and fiwi networks achieve the same throughput when all traffic is destined to the internet . for comparison with , we consider _ peer - to - peer ( p2p ) traffic _ , where each frame generated by a given sta is destined to any other of the remaining 15 stas with equal probability 1/15 , and _ upstream traffic _ , where all frames generated by the stas are destined to the olt .[ fig : result1 ] depicts the results of our probabilistic analysis for the mean delay as a function of the mean aggregate throughput of a stand - alone wmn network and a tdm pon based fiwi network for p2p and upstream traffic .the figure also shows verifying simulation results and their 95% confidence intervals , whereby simulations were run 100 times for each considered traffic load .we observe from fig .[ fig : result1 ] that the mean delay of the wmn increases sharply as the mean aggregate throughput asymptotically approaches its maximum data rate of 300 mb / s .we also confirm the findings of that under p2p traffic the mean aggregate throughput can be increased by using a tdm pon as fiber backhaul to offload the wireless mesh front - end at the expense of a slightly increased mean delay due to the introduced upstream and downstream pon delay to and from the olt . as opposed to ,however , fig .[ fig : result1 ] shows that the throughput - delay performance of the considered fiwi network is further improved significantly under upstream traffic .these different observations are due to the fact that in the single - radio single - channel wmn based on legacy ieee 802.11a wlan with a limited data rate of 54 mb / s suffered from severe channel congestion close to the mpps , which is alleviated in the multi - radio multi - channel wmn based on next - generation high - throughput wlan technologies . 
next, we verify different fiwi network architectures and their constituent subnetworks for _ uniform _ and _ nonuniform traffic _ for minimum ( wireless or optical ) hop routing .[ fig : result2 ] depicts the throughput - delay performance of a stand - alone wmn front - end , stand - alone tdm pon , and a variety of integrated fiwi network architectures using different fiber backhaul solutions , including conventional tdm pon , wavelength - broadcasting wdm pon ( wdm pon ) , and wavelength - routing wdm pon ( wr pon ) . in the tdm pononly ( wmn only ) scenario under uniform traffic , each onu ( sta ) generates the same amount of traffic and each generated frame is destined to any of the remaining onus ( stas ) with equal probability . as expected , the wmn and tdm pon saturate at roughly 300 mb / s and 1 gb / s , respectively , and the tdm pon is able to support much higher data rates per source node ( onu ) at lower delays than the wmn .furthermore , we observe from fig . [fig : result2 ] that under uniform traffic conditions , where stas and onus send unicast traffic randomly uniformly distributed among themselves , fiwi networks based on a wavelength - broadcasting wdm pon or a wr pon give the same throughput - delay performance , clearly outperforming their single - channel tdm pon based counterpart .however , there is a clear difference between wdm pon and wr pon fiber backhauls when traffic becomes unbalanced . to see this ,let us consider a nonuniform traffic scenario , where and and their 4 associated stas ( see fig . [fig : fig4 ] ) generate 30% more traffic than the remaining onus and stas .under such a nonuniform traffic scenario , a fiwi network based on a wavelength - broadcasting wdm pon performs better , as shown in fig .[ fig : result2 ] .this is due to the fact that the wdm pon provides the two heavily loaded and with access to both wavelength channels , as opposed to the wr pon , thus resulting in an improved throughput - delay performance .overall , we note that the analysis and verifying simulation results presented in figs .[ fig : result1 ] and [ fig : result2 ] match very well for a wide range of fiwi network architectures and traffic scenarios .recall from section [ sec : traffic_model ] that our capacity and delay analysis flexibly accommodates any routing algorithm and allows for multipath routing in both the fiber and wireless domains . in this section ,we study the impact of different routing algorithms on the throughput - delay performance of next - generation fiwi access networks in greater detail , including their sensitivity to key network parameters .specifically , we examine the following single - path routing algorithms : * minimum hop routing : * conventional shortest path routing selects for each source - destination node pair the path minimizing the required number of wireless and/or optical hops .* minimum interference routing * : the path with the minimum wireless hop count is selected .the rationale behind this algorithm is that the maximum throughput of wireless networks is typically much lower compared to the throughput in optical networks .thus , minimizing the wireless hop count tends to increase the maximum fiwi network throughput . 
*minimum delay routing : * similar to the previously proposed wmn routing algorithms dara , cadar , and radar , we apply a slightly extended minimum delay routing algorithm , which aims at selecting the path that minimizes the end - to - end delay of eq .( [ eq : delay ] ) across the entire bimodal fiwi access network .the applied minimum delay routing algorithm is a greedy algorithm and proceeds in two steps . in the initialization step , paths are set to the minimum hop routes .the second step computes for each source - destination node pair the path with the minimum end - to - end delay under given traffic demands .* optimized fiwi routing algorithm ( ofra ) : * we propose the optimized fiwi routing algorithm ( ofra ) , which proceeds in two steps similar to minimum delay routing .after the initialization step to minimum hop routes , the second step of ofra computes for each source - destination node pair the path with the minimization objective where represents the long - run traffic intensity at a generic fiwi network node , which may be either an optical node belonging to the fiber backhaul or a wireless node belonging to the wireless mesh front - end . based on a combination of historic traffic patterns as well as traffic measurements and estimations similar to ,the traffic intensities used in ofra can be periodically updated with strategies similar to .these long - run traffic intensities vary typically slowly , e.g. , with a diurnal pattern , allowing essentially offline computation of the ofra paths .more precisely , for the wr pon we have ( see eq .( [ eq : wr_olt ] ) ) if node is the olt and ( see eq .( [ eq : wr_onu ] ) ) if node is an onu . for the wavelength - broadcasting tdm and wdm pon, we have ( eq .( [ eq : rho_d ] ) ) and ( eq . ( [ eq : rho_u ] ) ) if node is the olt or an onu , respectively . for a wireless node , is given by the left - hand side of ( [ eq : rho_omega ] ) .ofra s path length measure includes the maximum traffic intensity along a path in order to penalize paths with a high traffic intensity at one or more fiwi network nodes . for a given set of traffic flows, ofra minimizes the traffic intensities , particularly the high ones , at the fiwi network nodes . decreasing the traffic intensities tends to allow for a higher number of supported traffic flows and thus higher throughput . to allow for a larger number of possible paths for the following numerical investigations of the different considered routing algorithms , we double the fiwi network configuration of fig .[ fig : fig4 ] .we consider a wavelength - routing ( wr ) wdm pon with a total of 8 onu / mpps , 8 mps , and 32 stas in 22 wireless zones , whereby onu / mpps 1 - 4 and onu / mpps 5 - 8 are served on wavelength channel and , respectively .furthermore , to evaluate different traffic loads in the optical and wireless domains , we consider the following traffic matrix for the olt , onus , and stas : + + , + + + where denotes the mean traffic rate ( in frames / second ) . the parameter can be used to test different traffic intensities in the pon , since the onus could be underutilized compared to the wmn in the considered topology .recall from fig .[ fig : fig1 ] that onus may serve multiple subscribers with wired onu access , whose aggregate traffic leads to an increased load at onus . . ] for a conventional wr wdm pon with a typical optical fiber range of 20 km , fig .[ fig : result3 ] illustrates that ofra yields the best throughput - delay performance for , i.e. 
, every optical and wireless fiwi node generates the same amount of traffic .minimum interference routing tends to overload the wireless mpp interfaces as it does not count the fiber backhaul as a hop , resulting in high delays .the throughput - delay performance of the four considered fiwi routing algorithms largely depends on the given traffic loads and length of the fiber backhaul .[ fig : result4 ] depicts their throughput - delay performance for ( ) a conventional 20 km range and ( ) a 100 km long - reach wr wdm pon , whereby in both configurations we set , i.e. , the amount of generated traffic among optical nodes ( olt and onus ) is 100 times higher than that between node pairs involving at least one wireless node ( sta ) .more precisely , all the ( increased ) inter - onu / olt traffic is sent across the wdm pon only , thus creating a higher loaded fiber backhaul .we observe from fig .[ fig : result4 ] that in general all four routing algorithms achieve a higher maximum aggregate throughput due to the increased traffic load carried on the fiber backhaul .we observe that for a conventional 20 km range wr wdm pon with small to medium traffic loads , ofra gives slightly higher delays than the other considered routing algorithms .this observation is in contrast to fig .[ fig : result3 ] , though in both figures ofra yields the highest maximum aggregate throughput .we have measured the traffic on the optical and wireless network interfaces of each onu / mpp .our measurements show that at low to medium traffic loads with , ofra routes significantly less traffic across the wdm pon than the other routing algorithms , but instead uses the less loaded wireless mesh front - end .this is due to the objective of ofra to give preference to links with lower traffic intensities . as a consequence , for , ofra routes relatively more traffic over lightly loaded wireless links , even though this implies more wireless hops , resulting in a slightly increased mean delay compared to the other routing algorithms at low to medium loads . )a conventional 20 km range and ( ) a 100 km long - reach wavelength - routing ( wr ) wdm pon and . ] fig .[ fig : result4 ] also shows the impact of the increased propagation delay in a long - reach wdm pon with a fiber range of 100 km between olt and onus . aside from agenerally increased mean delay , we observe that minimum hop and minimum interference routing as well as ofra provide comparable delays at low to medium traffic loads , while the maximum achievable throughput differences at high traffic loads are more pronounced than for the 20 km range .the favorable performance of ofra at high traffic loads is potentially of high practical relevance since access networks are the bottlenecks in many networking scenarios and thus experience relatively high loads while core networks operate at low to medium loads .[ fig : result4 ] illustrates that minimum delay routing performs poorly in long - reach wdm pon based fiwi access networks .our measurements indicate that minimum delay routing utilizes the huge bandwidth of the long - reach wdm pon much less than the other routing algorithms in order to avoid the increased propagation delay . as a consequence , with minimum delay routing most trafficis sent across the wmn , which offers significantly lower data rates than the fiber backhaul , resulting in a congested wireless front - end and thereby an inferior throughput - delay performance . 
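The OFRA objective described earlier can be sketched in a few lines. Since the displayed minimization formula is not reproduced above, the code below uses a plausible stand-in that captures the stated intent, namely penalizing the maximum long-run traffic intensity encountered along a candidate path; the weighting `alpha` and the function names are our assumptions and should not be read as the authors' exact formula.

```python
def path_traffic_intensities(path, rho):
    """rho: node -> long-run traffic intensity (optical or wireless node)."""
    return [rho[v] for v in path]

def ofra_score(path, rho, alpha=1.0):
    """Hypothetical OFRA-style path measure: hop count plus a penalty on the
    maximum traffic intensity along the path, so that paths crossing heavily
    loaded nodes are avoided.  `alpha` is an illustrative weighting."""
    return len(path) - 1 + alpha * max(path_traffic_intensities(path, rho))

def ofra_select(candidate_paths, rho):
    """Second OFRA step: among the candidate paths (initialized, e.g., with
    the minimum hop routes), pick the one minimizing the objective."""
    return min(candidate_paths, key=lambda p: ofra_score(p, rho))
```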
to highlight the flexibility of our analysis, we note that it accommodates any type and number of fiber failures .fiber failures represent one of the major differences between optical ( wired ) fiber and wireless networks and affect the availability of bimodal fiwi networks . in the event of one or more distribution fiber cuts , the corresponding disconnected onu / mpp(s )turn(s ) into a conventional wireless mp without offering gateway functions to the fiber backhaul any longer .fiwi access network survivability for arbitrary fiber failure probabilities has been analyzed in .[ fig : result5 ] illustrates the detrimental impact of distribution fiber failures on the throughput - delay performance of a 20 km range wavelength - routing wdm pon , which is typically left unprotected due to the small number of cost - sharing subscribers and cost - sensitivity of access networks . .] we also note that the analytical framework is able to account for other types of network failure , e.g. , onu / mpp failures . in this case , malfunctioning onu / mpps become unavailable for both optical and wireless routing . in principle ,fiwi access networks can be made more robust against fiber failures through various optical redundancy strategies , such as onu dual homing , point - to - point interconnection fibers between pairs of onus , fiber protection rings to interconnect a group of closely located onus by a short - distance fiber ring , or meshed pon topologies .these redundancy strategies in general imply major architectural and onu modifications of the fiwi access network of fig .[ fig : fig1 ] under consideration . to incorporate such topological pon modifications ,the fiber part of the capacity and delay analysis would need to be modified accordingly .our analysis is also applicable to the emerging ieee standard 802.11ac for future vht wlans with raw data rates up to 6900 mb / s .in addition to a number of phy layer enhancements , ieee 802.11ac will increase the maximum a - msdu size from 7935 to 11406 octets and the maximum a - mpdu size from 65535 octets to 1048575 octets .both enhancements can be readily accommodated in our analysis by setting the parameters and accordingly .( optical and wireless data rates , and , are given in gb / s and mb / s , respectively ) . ][ fig : result6 ] illustrates the fiwi network performance gain achieved with a wireless front - end based on vht wlan instead of ieee 802.11n wlan with maximum data rate of 600 mb / s , for minimum hop routing , an optical range of 20 km , and .for a wavelength - routing wdm pon operating at a wavelength channel data rate of 1 gb / s , we observe from fig .[ fig : result6 ] that vht wlan roughly triples the maximum mean aggregate throughput and clearly outperforms 600 mb / s 802.11n wlan in terms of both throughput and delay .furthermore , the figure shows that replacing the 1 gb / s wavelength - routing wdm pon with its high - speed 10 gb / s counterpart ( compliant with the ieee 802.3av 10g - epon standard ) does not yield a higher maximum aggregate throughput , but it does lower the mean delay especially at medium traffic loads before wireless links at the optical - wireless interfaces get increasingly congested at higher traffic loads .a variety of routing algorithms have recently been proposed for integrated fiwi access networks based on complementary epon and wlan - mesh networks . 
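The larger IEEE 802.11ac aggregation limits mentioned above translate directly into the parameters of the analysis; a minimal back-of-the-envelope calculation, ignoring MAC/PHY header and delimiter overheads, is shown below.

```python
FRAME_SIZE = 1500            # fixed frame (MSDU) size in octets, as assumed above

AGG_LIMITS = {
    # (max A-MSDU size, max A-MPDU size) in octets
    "802.11n":  (7935, 65535),
    "802.11ac": (11406, 1048575),
}

for std, (amsdu_max, ampdu_max) in AGG_LIMITS.items():
    frames_per_amsdu = amsdu_max // FRAME_SIZE   # 5 for 802.11n, 7 for 802.11ac
    amsdus_per_ampdu = ampdu_max // amsdu_max    # ignoring delimiters and padding
    print(std, frames_per_amsdu, amsdus_per_ampdu)
```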
in this article, we presented the first analytical framework to quantify the performance of fiwi network routing algorithms, validate previous simulation studies, and provide insightful guidelines for the design of novel integrated optical-wireless routing algorithms for future fiwi access networks leveraging next-generation pons, notably long-reach 10+ gb/s tdm/wdm pons, and emerging gigabit-class wlan technologies. our analytical framework is very flexible and can be applied to any existing or new optical-wireless routing algorithm. furthermore, it takes the different characteristics of disparate optical and wireless networking technologies into account. besides their capacity mismatch and bit error rate differences, the framework also incorporates arbitrary frame size distributions, traffic matrices, optical/wireless propagation delays, data rates, and fiber cuts. we investigated the performance of minimum hop, minimum interference (wireless hop), minimum delay, and our proposed ofra routing algorithms. the obtained results showed that ofra yields the highest maximum aggregate throughput for both conventional and long-reach wavelength-routing wdm pons under balanced and unbalanced traffic loads. for a more heavily loaded fiber backhaul, however, ofra gives priority to lightly loaded wireless links, leading to an increased mean delay at small to medium wireless traffic loads. we also observed that using vht wlan helps increase the maximum mean aggregate throughput significantly, while high-speed 10 gb/s wdm pon helps lower the mean delay, especially at medium traffic loads. there are several important directions for future research. one direction is to examine mechanisms for providing quality of service or supporting specific traffic types, see e.g., . further detailed study of the impact of different dynamic bandwidth allocation approaches for long-reach pons, e.g., , and their effectiveness in integrated fiwi networks is of interest. yet another direction is to examine the internetworking of fiwi networks with metropolitan area networks. s. sarkar, h.-h. yen, s. dixit, and b. mukherjee, ``hybrid wireless-optical broadband access network (woban): network planning using lagrangean relaxation,'' _ieee/acm transactions on networking_, vol. 17, no. 4, pp. 1094-1105, aug. m. a. ali, g. ellinas, h. erkan, a. hadjiantonis, and r. dorsinville, ``on the vision of complete fixed-mobile convergence,'' _ieee/osa j. lightwave technol._, vol. 28, no. 16, pp. 2343-2357, aug. 2010. a. reaz, v. ramamurthi, s. sarkar, d. ghosal, s. dixit, and b. mukherjee, ``cadar: an efficient routing algorithm for a wireless-optical broadband access network (woban),'' _ieee/osa journal of optical communications and networking_, vol. 1, no. 5, pp. 392-403, oct. 2009. g. shen, r. s. tucker, and c.-j. chae, ``fixed mobile convergence architectures for broadband access: integration of epon and wimax,'' _ieee communications magazine_, vol. 45, no. 8, pp. 44-50, aug. 2007. f. aurzada, m. scheutzow, m. reisslein, and m. maier, ``towards a fundamental understanding of the stability and delay of offline wdm epons,'' _ieee/osa j. optical communications and networking_, vol. 2, no. 1, pp. 51-66, jan. f. aurzada, m. scheutzow, m. reisslein, n. ghazisaidi, and m.
maier , `` capacity and delay analysis of next - generation passive optical networks ( ng - pons ) , '' _ ieee transactions on communications _ , vol .59 , no . 5 , pp .13781388 , may 2011 .s. bharati and p. saengudomlert , `` analysis of mean packet delay for dynamic bandwidth allocation algorithms in epons , '' _ ieee / osa journal of lightwave technology _ , vol . 28 , no . 23 , pp . 34543462 , 2010 .b. lannoo , l. verslegers , d. colle , m. pickavet , m. gagnaire , and p. demeester , `` analytical model for the ipact dynamic bandwidth allocation algorithm in epons , '' _ osa journal of optical networking _ , vol . 6 , no . 6 , pp . 677688 , jun .m. mcgarry , m. reisslein , f. aurzada , and m. scheutzow , `` shortest propagation delay ( spd ) first scheduling for epons with heterogeneous propagation delays , '' _ ieee journal on selected areas in communications _28 , no . 6 , pp . 849862 , aug . 2010 .m. t. ngo , a. gravey , and d. bhadauria , `` a mean value analysis approach for evaluating the performance of epon with gated ipact , '' in _ proc . of int .conference on optical network design and modeling ( ondm ) _ , mar .2008 , pp . 16 .a. reaz , v. ramamurthi , and m. tornatore , `` cloud - over - woban ( cow ) : an offloading - enabled access network design , '' in _ proc . , ieee international conference on communications ( icc ) _ , jun .2011 , pp .a. r. dhaini , p .- h . ho , and x. jiang , `` qos control for guaranteed service bundles over fiber - wireless ( fiwi ) broadband access networks , '' _ieee j. lightw ._ , vol . 29 , no . 10 , pp . 15001513 , may 2011 .y. liu , l. guo , b. gong , r. ma , x. gong , l. zhang , and j. yang , `` green survivability in fiber - wireless ( fiwi ) broadband access network , '' _ optical fiber technology _ , vol .18 , no . 2 ,6880 , mar .2012 .y. liu , l. guo , r. ma , and w. hou , `` auxiliary graph based protection for survivable fiber - wireless ( fiwi ) access network considering different levels of failures , '' _ opt . fiber techn ._ , vol . 18 , no . 6 , pp .430439 , 2012 .z. yubin , h. li , x. ruitao , q. yaojun , and j. yuefeng , `` wireless protection switching for video service in wireless - optical broadband access network , '' in _ proc .conf . on broadband network multimedia technology ( ic - bnmt ) _, 2009 , pp . 760764 .n. ghazisaidi , m. scheutzow , and m. maier , `` survivability analysis of next - generation passive optical networks and fiber - wireless access networks , '' _ ieee trans ._ , vol . 60 , no . 2 , pp. 479492 , june 2011 .y. liu , q. song , r. ma , b. li , and b. gong , `` protection based on backup radios and backup fibers for survivable fiber - wireless ( fiwi ) access network , '' _ j. network and computer appl ., in print _ , 2013 .j. zhang , w. xu , and x. wang , `` distributed online optimization of wireless optical networks with network coding , '' _ ieee / osa journal of lightwave technology _ , vol .30 , no .14 , pp . 22462255 , jul . 2012 .m. d. andrade , g. kramer , l. wosinska , j. chen , s. sallent , and b. mukherjee , `` evaluating strategies for evolution of passive optical networks , '' _ ieee comm ._ , vol .49 , no .7 , pp . 176184 , jul . 2011 .f. aurzada , m. scheutzow , m. herzog , m. maier , and m. reisslein , `` delay analysis of ethernet passive optical networks with gated service , '' _ osa journal of optical networking _ , vol . 7 , no . 1 ,2541 , jan . 2008 .a. bianco , j. finochietto , g. giarratana , f. neri , and c. 
piglione , `` measurement - based reconfiguration in optical ring metro networks , '' _ ieee / osa j. lightwave tech ._ , vol . 23 , no . 10 , pp . 31563166 , 2005 .a. elwalid , d. mitra , i. saniee , and i. widjaja , `` routing and protection in gmpls networks : from shortest paths to optimized designs , '' _ ieee / osa j. lightwave tech ._ , vol .21 , no . 11 , pp . 28282838 , 2003 .a. dixit , b. lannoo , g. das , d. colle , m. pickavet , and p. demeester , `` dynamic bandwidth allocation with sla awareness for qos in ethernet passive optical networks , '' _ ieee / osa journal of optical communications and networking _ , vol . 5 , no . 3 , pp .240253 , 2013 .j. ahmed , j. chen , l. wosinska , b. chen , and b. mukherjee , `` efficient inter - thread scheduling scheme for long - reach passive optical networks , '' _ ieee communications mag ._ , vol .51 , no . 2 , pp .s35s43 , feb .a. buttaboni , m. de andrade , and m. tornatore , `` a multi - threaded dynamic bandwidth and wavelength allocation scheme with void filling for long reach wdm / tdm pons , '' _ ieee / osa journal of lightwave technology _ , vol .31 , no . 8 , pp . 11491157 , apr .2013 .t. jimenez , n. merayo , p. fernandez , r. duran , i. de miguel , r. lorenzo , and e. abril , `` implementation of a pid controller for the bandwidth assignment in long - reach pons , '' _ ieee / osa j. optical commun . and netw ._ , vol . 4 , no . 5 , pp . 392401 , may 2012 .a. mercian , m. mcgarry , and m. reisslein , `` offline and online multi - thread polling in long - reach pons : a critical evaluation , '' _ ieee / osa journal of lightwave technology _ , vol .31 , no . 12 , pp .20182228 , jun . 2013 .n. merayo , t. jimenez , r. duran , p. fernandez , i. miguel , r. lorenzo , and e. abril , `` adaptive polling algorithm to provide subscriber and service differentiation in a long - reach epon , '' _ photonic network communications _257264 , 2010 .m. maier , m. reisslein , and a. wolisz , `` a hybrid mac protocol for a metro wdm network using multiple free spectral ranges of an arrayed - waveguide grating , '' _ computer networks _41 , no . 4 , pp . 407433 , mar .m. scheutzow , m. maier , m. reisslein , and a. wolisz , `` wavelength reuse for efficient packet - switched transport in an awg - based metro wdm network , '' _ ieee / osa journal of lightwave technology _ , vol .21 , no . 6 , pp . 14351455 , jun .yang , m. maier , m. reisslein , and w. m. carlyle , `` a genetic algorithm - based methodology for optimizing multiservice convergence in a metro wdm network , '' _ ieee / osa journal of lightwave technology _ , vol .21 , no . 5 , pp .11141133 , may 2003 .m. yuang , i .-f . chao , and b. lo , `` hopsman : an experimental optical packet - switched metro wdm ring network with high - performance medium access control , '' _ ieee / osa journal of optical communications and networking _ , vol . 2 , no . 2 , pp .91101 , feb .i. m. white , m. s. rogge , k. shrikhande , and l. g. kazovsky , `` a summary of the hornet project : a next - generation metropolitan area network , '' _ ieee journal on selected areas in communications _ , vol .21 , no . 9 ,14781494 , nov .
current gigabit - class passive optical networks ( pons ) evolve into next - generation pons , whereby high - speed 10 + gb / s time division multiplexing ( tdm ) and long - reach wavelength - broadcasting / routing wavelength division multiplexing ( wdm ) pons are promising near - term candidates . on the other hand , next - generation wireless local area networks ( wlans ) based on frame aggregation techniques will leverage physical layer enhancements , giving rise to gigabit - class very high throughput ( vht ) wlans . in this paper , we develop an analytical framework for evaluating the capacity and delay performance of a wide range of routing algorithms in converged fiber - wireless ( fiwi ) broadband access networks based on different next - generation pons and a gigabit - class multi - radio multi - channel wlan - mesh front - end . our framework is very flexible and incorporates arbitrary frame size distributions , traffic matrices , optical / wireless propagation delays , data rates , and fiber faults . we verify the accuracy of our probabilistic analysis by means of simulation for the wireless and wireless - optical - wireless operation modes of various fiwi network architectures under peer - to - peer , upstream , uniform , and nonuniform traffic scenarios . the results indicate that our proposed optimized fiwi routing algorithm ( ofra ) outperforms minimum ( wireless ) hop and delay routing in terms of throughput for balanced and unbalanced traffic loads , at the expense of a slightly increased mean delay at small to medium traffic loads . availability , fiber - wireless ( fiwi ) access networks , frame aggregation , integrated routing algorithms , next - generation pons , vht wlan .
adaptive synchronization of networked dynamical systems has attracted a growing interest during recent years .it is motivated by a broad area of potential applications : formation control , cooperative control , control of power networks , communication networks , production networks , etc .existing works and others are dealing with full state feedback and linear interconnections .the solutions are based on lyapunov functions formed as sum of lyapunov functions for local subsystems . as for adaptive control algorithmsthey are based on either local ( decentralized ) or nearest neighbor ( described by an information graph ) strategies . despite a great interest in control of network ,only a restricted class of them is currently solved .e.g. in existing papers mainly linear models of subsystems are considered . in nonlinear caseonly passive or passifiable systems are studied and control is organized according to information graph , i.e. not completely decentralized .availability of the whole state vector for measurement as well as appearance of control in all equations for all nodes is assumed in decentralized stability and synchronization problems .powerful passivity based approaches are not developed for adaptive synchronization problems . in this paperwe consider the problem of master - slave ( leader - follower ) synchronization in a network of nonidentical systems in lurie form where system models can be split into linear and nonlinear parts .case of identical nodes is studied in .linearity of interconnections is not assumed ; links between subsystems can also be nonlinear . in the contrary to known works on adaptive synchronization of networks ,see , only some output function is available and control appears only in a part of the system equations .it is also assumed that some plant parameters are unknown.the leader subsystem is assumed to be isolated and the control objective is to approach the trajectory of the leader subsystem by all other ones under conditions of uncertainty .interconnection functions are assumed to be lipschitz continuous .the results of are employed to solve the posed problem .adaptation algorithm is designed by the speed - gradient method .it is shown that the control goal is achieved under leader passivity condition , if the interconnection strengths satisfy some inequalities .the results are illustrated by example of synchronization in network of nonidentical chua curcuits .we need yakubovich - kalman lemma in following form , see .[ yakub_lemma ] let be real matrices and + then the following statements are equivalent : \1 ) there exists matrix such that \2 ) polinomial is hurwitz and following frequency domain conditions hold for all in order to present the main syncronization result of this paper we need to formulate problem statement of decentralized control and theorem from which can also be derived from theorem from . consider stands for column vector with components consisting of components of a system consisting of interconnected subsystems dynamics of each being described by the following equation : where state vector , - vector of inputs ( tunable parameters ) of subsystem , - aggregate state and input vectors of system s , vector - function describes local dynamics of subsystem and vectors describe interconnection between subsystems .let be local goal functions and let the control goal be : for all we assume existence of smooth vector functions such that i.e. decentralized speed - gradient algorithm is introduced as follows : where - matrix . 
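Since the displayed formula for the decentralized speed-gradient algorithm is not reproduced above, the following sketch shows one standard differential form of the method for a scalar tunable gain per subsystem: the parameter is driven opposite to the gradient, with respect to the parameter, of the time derivative of the local goal function. The concrete goal function, controller structure, and gain below are illustrative assumptions, not the paper's exact choices.

```python
def speed_gradient_step(theta, grad_Qdot_theta, gamma, dt):
    """One Euler step of the speed-gradient law
        d(theta)/dt = -gamma * d/d(theta) [ dQ/dt ],
    where Q is the local goal function of the subsystem."""
    return theta - gamma * grad_Qdot_theta * dt

def adaptive_output_feedback(y_i, y_leader, theta_i, gamma, dt):
    """Illustrative local controller u_i = theta_i * y_i with a
    passification-style adaptation driven by the output error."""
    e = y_i - y_leader
    u = theta_i * y_i
    # for Q_i = 0.5 * e**2 and control entering linearly, the speed-gradient
    # update reduces (up to a positive constant) to the error-times-output rule
    theta_next = speed_gradient_step(theta_i, e * y_i, gamma, dt)
    return u, theta_next
```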
[ th_2_18 ]_ suppose the following assumptions hold for the system : _ 1 .functions are continuous in continuously differentiable in and locally bounded in functions are uniformly continuous in second argument for all in bounded set , functions are convex in there exist constant vectors and scalar monotonically increasing functions such that and 2 .functions are continuous and satisfy the following inequalities is hurwitz , is identity matrix .then system , is globally asymptotically stable in variables all trajectories are bounded on and satisfy .let the leader subsystem be described by the equation where state , measurement , is control that specified in advance , internal nonlinearity .let and be known and not depending on the vector of unknown parameters where is known set . consider a network of interconnected subsystems let subsystem be described by the following equation where functions describe interconnections between subsystems .we assume let matrices and functions depend on the vector of unknown parameters network model can describe , for example , interconnected electrical generators .let the control goal be specified as convergence of all subsystems and the leader trajectories : the adaptive synchronization problem is to find a decentralized controller + ensuring the goal for all values of unknown plant parameters .denote let the main loop of the adaptive system be specified as set of linear tunable local control laws : where are tunable parameters . by applying speed - gradient method it is easy to derive the following adaptation law : where matrices , introduce the following definition .[ g_monot ] _ let function is called g - monotonically decreasing if inequality holds for all . _ _ remark 1 ._ apparently , for -monotonical decrease of the function is equivalent to incremental passivity of the static system with characteristics definition [ g_monot ] is easily extended to dynamical systems with the state vector input and output it corresponds to existence of a smooth function satisfying an integral inequality the corresponding property can be called incremental -passivity by analogy with .consider real matrices of size correspondingly and a number such that : denote condition number of matrix where are maximum and minimum eigenvalues of matrix .for analysis of the system dynamics the following assumptions are made .a1 ) the functions are globally lipschitz : the function is such that the unique existence of solutions of holds . a2)(matching conditions , ) for each there exist vectors and numbers such that for denote for the case when matrix is hurwitz introduce notation for stability degree of the function s denominator , i.e. where are eigenvalues of . [ th_noident ] _ let matrix be hurwitz and for some the following frequency domain conditions hold : for all then there exist such that relations hold ._ let for all assumptions a1 , a2 hold , function be -monotonically decreasing , and following inequalities hold where , is condition number of matrix . then for all adaptive controller , ensures achievement of the goal and boundedness of functions on for all solutions of the closed - loop system , , , ._ let s apply lemma [ yakub_lemma ] .note that in our case i. e. 
is scalar .let s choose instead of in .then statement of the lemma [ yakub_lemma ] and conditions of theorem [ th_noident ] ensure existence of matrix such that now we can conclude that there exists number such that the following is true : denoting introduce auxiliary error subsystems : here we choose same as in .let us choose following goal functions and apply theorem [ th_2_18 ] .we need to evaluate the derivative trajectories of along trajectories of isolated ( i.e. without interconnections ) auxiliary subsystems : .\end{aligned}\ ] ] denote by taking we obtain =\\ & z_i{^{\scriptscriptstyle { \rm t}}}h[a_l x_i+b_l\overline{u}+b_l(\psi_0(y_i)-\psi_0(\overline{y}))-a_l x_i - b_l\overline{u}]=\\ & z_i{^{\scriptscriptstyle { \rm t}}}h[a_l z_i+b_l(\psi_0(y_i)-\psi_0(\overline{y}))].\\ \end{aligned}\ ] ] further , for the last inequality holds because is -monotonically decreasing .so taking into account we conclude by taking we ensure that holds for .other conditions from the first part of theorem [ th_2_18 ] hold , since the right hand side of the system and function are continuous in functions not depending in for any .convexity condition is valid since the right hand side of is linear in .the interconnection condition in our case reads : and matrix should be hurwitz ( ) .for the case we can take and last inequality will be satisfied .let s consider case for rewrite as follows : for then for evaluate lower bound of the right - hand side of : it is seen that for to ensure it is sufficient to impose an inequality or denote where noting that in can be chosen arbitrarily close to and taking into account we can conclude that the left - hand side of : thus , if following inequality holds then is ensured : introduce matrix as follows such choice of ensures .note that is symmetric .if matrix is positive definite then is hurwitz .diagonal elements of are positive since and by taking into account that and applying gershgorin circle theorem we conclude that is positive definite .thus , statement of the theorem [ th_noident ] follows from theorem [ th_2_18 ] ._ remark 2 ._ the value of can be evaluated by solving lmi by means of one of existing software package ._ remark 3 ._ by interconnections graph of network we can consider directed graph which is a pair of two sets : a set of nodes and a set of arcs .cardinality of a set of nodes is -th node is associated with subsystem for any we say that arc from -th node to -th node belongs to the set of arcs if is not zero function . 
by weighted in - degree of -th nodewe define following number : if each nonzero addend from last sum is equal to 1 then introduced definition of weighted in - degree of the node coincides with the definition of in - degree of digraph s node .thus the inequality can be interpreted as follows : weighted in - degree of each node of interconnections graph must be less than circuit is a well known example of simple nonlinear system possessing complex chaotic behavior .its trajectories are unstable and it is represented in the lurie form .let us apply our results to synchronization with leader subsystem in the network of five interconnected nonidentical chua systems .let and let the leader subsystem be described by the equation where is state vector of the system , is output available for measurement , is scalar control variable , where further , let transfer function it is seen from the nyquist plot of presented on fig .[ nyquist ] , that first frequency domain inequality of holds .the second frequency domain inequality of also holds since relative degree of is equal to one and highest coefficient of its numerator is positive .obviously is -monotonically decreasing ..,width=288,height=172 ] let subsystem for be described by with by choosing and using we obtain matrices for which are not equal , i.e. nodes are nonidentical .denote let be equal to further , let lipschitz constants of all are equal to it follows from theorem [ th_noident ] that decentralized adaptive control provides synchronization goal if for all inequality holds , i.e. if interconnections are sufficiently weak .consider following control of leader subsystem .$ ] such ensures chaotic behavior of leader subsystem .let us put where identity matrix , and denote by matrix with element lying in the -th row and the -th column , and let us choose adaptive control as in and apply theorem [ th_noident ] .if we take then simulation shows that i.e. synchronization is achieved : all state vectors of nonidentical nodes converge to the state vector of the leader subsystem , see fig .[ fig_sim]-(b ). phase portrait of the leader subsystem , found by 40 sec .simulation are shown on fig .[ fig_sim ] .in contrast to a large number of previous results , we obtained synchronization conditions for networks consisting of nonidentical nonlinear systems with incomplete measurement , incomplete control , incomplete information about system parameters and coupling .the design of the control algorithm providing synchronization property is based on speed - gradient method , while derivation of synchronizability conditions is based on yakubovich - kalman lemma and result presented in .99 j. lu , g. chen , `` a time - varying complex dynamical network model and its controlled synchronization criteria '' , _ ieee trans .autom.control_ , vol .50(6 ) , pp . 841 - 846 , 2005 .j. yao , d. j. hill , z .- h .guan , h. o. wang , `` synchronization of complex dynamical networks with switching topology via adaptive control '' , in _ proc .45th ieee conf .2819 - 2824 , 2006 .j. zhou , j. lu , j. lu , `` adaptive synchronization of an uncertain complex dynamical network '' , _ ieee trans .control _ , vol.51(4 ) ,652 - 656 , 2006 .w. s. zhong , g. m. dimirovski , j. zhao , `` decentralized synchronization of an uncertain complex dynamical network '' , in _ proc .2007 amer ._ , pp . 1437 - 1442 , 2007 .a. ioannou , `` decentralized adaptive control of interconnected systems '' , _ ieee trans . on autom .31(4 ) , pp . 310 - 314 , 1986 .b. m. 
mirkin , `` adaptive decentralized control with model coordination '' , _ automation and remote control _ , vol .60(1 ) , pp . 73 - 81 , 1999 .d. t. gavel , d. d. siljak , `` decentralized adaptive control : structural conditions for stability '' , _ ieee trans .34(4 ) , pp . 413 - 426 , 1989 . c. wen ,y. c. soh , `` decentralized model reference adaptive control without restriction on subsystem relative degrees '' , _ ieee trans . on autom .44(7 ) , pp . 1464 - 1469 , 1999 .s. jain , f. khorrami , `` decentralized adaptive control of a class of large - scale nonlinear systems '' , _ ieee trans .control _ ,42(2 ) , pp . 136 - 154 , 1997 .p. jiang , `` decentralized and adaptive nonlinear tracking of large - scale systems via output feedback '' , _ ieee trans .45(11 ) , pp . 2122 - 2128 , 2000 .a. l. fradkov , _ adaptive control in complex systems _ , moscow : nauka , 1990 ( in russian ) .a. l. fradkov , i. v. miroshnik , v. o. nikiforov , _ nonlinear and adaptive control of complex systems _ , kluwer academic publishers , dordrecht , 1999 . j. r. fax , r. m. murray , `` information flow and cooperative control of vehicle formations '' , _ ieee trans . on autom .49(9 ) , pp . 1465 - 1476 , 2004 . c. yoshioka , t. namerikawa , `` observer - based consensus control strategy for multi - agent system with communication time delay '' , in _ proc .17th ieee intern .conf . on control applications _ ,1037 - 1042 , 2008. n. chopra , m. w. spong , `` output synchronization of nonlinear systems with time delay in communication '' , in _ proc .45th ieee conf .4986 - 4992 , 2006. n. chopra , m. w. spong , `` output synchronization of nonlinear systems with relative degree one '' in _ recent advances in learning and control _ , vol .371 , springer - verlag , pp .51 - 64 , 2008 . i. a. dzhunusov and a. l. fradkov , `` adaptive synchronization of a network of interconnected nonlinear lure systems '' , _ automation and remote control _ ,70(7 ) , pp . 1190 - 1205 , 2009. v. a. yakubovich , g. a. leonov , a. kh .gelig _ stability of stationary sets in control systems with discontinuous nonlinearities _, singapore : world scientific , 2004 . j. l. willems , `` a partial stability approach to the problem of transient power system stability '' , _ int .j. of control _ , vol .19(1 ) , pp . 1 - 14 , 1974 .a. l. fradkov , `` passification of nonsquare linear systems and feedback yakubovich - kalman - popov lemma '' , europ .j. of contr .9(6 ) , pp . 577 - 586 , 2003 .a. pavlov , l. marconi , `` incremental passivity and output regulation '' , systems and control letters , vol .57(5 ) , pp . 400 - 409 , 2008 .e. skafidas , a. l. fradkov , r. j. evans , i. m. mareels , `` trajectory approximation based adaptive control for nonlinear systems under matching conditions '' , _ automatica _ , vol .34(3 ) , pp . 287 - 299 , 1998 . c. w. wu ,l. o. chua , `` synchronization in an array of linearly coupled dynamical systems '' , _ ieee trans .circuits and systems - i . _ vol .42(8 ) , pp . 430 - 447 , 1995 .
for a network of interconnected nonlinear dynamical systems an adaptive leader - follower output feedback synchronization problem is considered . the proposed structure of decentralized controller and adaptation algorithm is based on speed - gradient and passivity . sufficient conditions of synchronization for nonidentical nodes are established . an example of synchronization of the network of nonidentical chua systems is analyzed . the main contribution of the paper is adaptive controller design and analysis under conditions of incomplete measurements , incomplete control and uncertainty .
by the farkas - minkowski - weyl theorem a convex polyhedron in has two representations .it can either be described by a finite set of linear inequalities ( facets ) or by a finite set of generators ( vertices and rays ) .precise definitions are given in section [ sec : basic_notations ] .one of the most fundamental problems in the theory of polyhedra and its applications , such as combinatorial optimization or computational geometry , is the conversion between two different descriptions .many algorithms for this representation conversion have been proposed ( see for example , , , ) . for certain classes of polyhedra efficient methodsare known , but there is no approach known which efficiently solves the problem in general .programs like cdd , lrs , pd , porta ( or polymake either relying on some of the others or using its own method closely related to ` cdd ` ) allow conversion of the representation of a polyhedron .since the programs are implementations of quite different methods , their efficiency may vary tremendously on a given example .many interesting polyhedra both pose difficulties for the standard representation conversion approaches and have many symmetries that could potentially be exploited . in many applicationsit is sufficient ( or at least necessary ) to obtain a list of inequalities or generators up to symmetries . in the present paperwe give a brief survey of approaches that can be used for the representation conversion problem up to symmetries .we do not discuss their asymptotic complexity ( which , in the worst case , is not encouraging ) , but rather refer to previous papers where the methods have proven themselves on difficult instances that could not otherwise be solved . for the new approach discussed in section [ sec : sym - pivoting ] we provide some experimental data ourselves .the paper is organized as follows . in section [ sec : basic_notations ] we give some basic notations and facts from the theory of convex polyhedra . in section [ sec : symmetries ] we consider different notions of symmetries and describe how they can practically be obtained using a graph automorphism computation . in section[ sec : orbits ] we describe the group theoretical notions used in the representation conversion methods discussed in the remainder of the paper . in section[ sec : subpolytope ] we consider decomposition methods which reduce the given problem to a number of smaller problems .these approaches have been used quite successfully by different authors . in section [ sec : cascade ] we describe the incremental _ cascade algorithm _ and in section [ sec : sym - pivoting ] we show how symmetry can be exploited in a simplex pivot based algorithm .in this section , we give a brief introduction to some basic concepts and terminology of ( convex ) polyhedra . for more details on polyhedra, we refer to the books , , . given the vector space , denote by its dual vector space , i.e. the vector space of linear functionals on .a _ convex polyhedron _ can be defined by a finite set of linear inequalities with and for .if the number of inequalities in the description is minimum , we speak of a non - redundant description .the dimension of is the dimension of the smallest affine subspace containing it . under the assumption that is full - dimensional , i.e. , every inequality of a non - redundant description defines a _ facet _ of , which is a -dimensional convex polyhedron contained in the boundary of . by the farkas - minkowski - weyl theorem ( see e.g. 
, corollary 7.1a ) , can also be described by a finite set of generators : where for .if the number of generators is minimum , the description is again called _ non - redundant_. in the non - redundant case , the generators , , are called _ vertices _ and , , are the _ extreme rays _ of . in case is bounded we have and we speak of a _convex polytope_. the _ representation conversion _ from a minimal set of generators into a minimal set of linear functionals ( or vice versa ) is called the _ dual description problem_. by using homogeneous coordinates , the general inhomogeneous problem stated above can be reduced to the homogeneous one where and .for example , we embed in the hyperplane in and consider the closure of its conic hull .the so - obtained polyhedron , with for and for , is referred to as _ polyhedral cone_. by duality , the problem of converting a description by homogeneous inequalities into a description by extreme rays is equivalent to the opposite conversion problem .so for simplicity we assume from now on that is a _polyhedral cone _ given by a minimal ( non - redundant ) set of generators ( extreme rays ) .if we say that is generated by .we want to find a minimal set with by choosing a suitable projection , it is possible to reduce the problem further to the case where is full - dimensional and does not contain any non - trivial linear subspaces .for example , if span a -dimensional linear subspace , we may just choose ( project onto ) independent coordinates . the appropriate projection ( i.e. equations of the linearity space ) can be found efficiently via gaussian elimination .in other words , without loss of generality we may assume the span ( is full - dimensional ) and the linear inequalities span ( is pointed ) .a _ face _ of is a set where is an element of the polyhedral cone _dual _ to .note that .the faces of a pointed polyhedral cone are themselves pointed polyhedral cones .we speak of a -face , if its dimension is .the faces form a ( combinatorial ) lattice ordered by inclusion , the _ face lattice _ of .the rank of a face in the lattice is given by its dimension .each face is generated by a subset of the generators of and therefore it is uniquely identified by some subset of . in particular , the -dimensional face is identified with the empty set and itself with the full index set .all other faces of are identified with some strict , non - empty subset .we write for two faces of with and .two -faces of are said to be _ adjacent _ , if they contain a common -face and are contained in a common -face .in particular , two extreme rays ( -faces ) are adjacent , if they generate a common -face and two facets are adjacent , if they share a common -face ( a _ ridge _ ) . by the properties of a lattice , for two faces and , not necessarily of the same dimension , there is always a unique largest common face contained in them , and a unique smallest face , containing both .any sub - lattice ] denote the centrally symmetric -cube .define for each permutation , let denote the simplex with vertices : let denote the union of all and let denote the set of -simplices formed by intersecting the simplices of with the boundary of .it is known that forms a triangulation of ; consequently forms a triangulation of the boundary , since every triangulation of a polytope induces a triangulation of its boundary .we call ( respectively ) the _ linear ordering triangulation _ of ( respectively of the boundary of . 
) since there is a bijection between permutations in and simplices in , acts transitively on by permuting coordinates. let denote the vertices of , i.e. .let denote the stabilizer of the automorphism group of on .define .let denote the -orbitwise pulling of in the order induced by .then the following holds : 1 . acts transitively on . is combinatorially equivalent to .let us first remark that the order of orbits induced by is well defined .the group is generated by a set of generators permuting coordinates , along with a _ switching permutation _ that maps to .it follows that the -orbits of are the equivalence classes of with respect to . to see ( a ) , consider the -simplex ( corresponding to the identity permutation ) with vertices denote the permutation in that first applies ( i.e. switching ) followed by reversing the order of coordinates .the permutation is an automorphism of which carries the -simplex to .these two simplices are precisely the contribution of to the linear ordering triangulation .thus any simplex of can be mapped to any other by an action of on ( i.e. permuting coordinates ) , followed by possibly applying .we now consider ( b ) .we argue that induces a linear ordering triangulation of each -face of , .each -face will receive the same ( linear ordering ) triangulation from the two facets that contain it , hence the triangulations of the facets form a triangulation of the boundary . for , there is nothing to prove .let be a -face of , .let ( resp . ) be the vertex of with the most positive ( resp .negative ) coordinates .recall that we will first pull the -orbit with smallest value .if the functional is minimized uniquely at then the perturbation corresponds locally to a standard pulling and the corresponding subdivision is into pyramids with apex and bases corresponding to all of the faces of that do not contain .otherwise is minimized at both and .the perturbation thus takes to and to some , .this turns out to induce a subdivision of into polytopes with vertices where is the vertex set of a -face of containing neither nor .the polytope is a _-fold pyramid _ , since and .it follows that .that the are induced by the perturbation can be seen by exhibiting a supporting hyperplane of . without loss of generality ,let be defined by equations , .the -face must be defined by further equations , , .let .consider the hyperplane , where it can be verified that supports and .it remains to see that the cover , i.e. that there are no other cells in the induced subdivision . consider an arbitrary relative interior point of .let be the ray from through .let be the first intersection of with after .for each -face of , the double pulling of acts like a single pulling decomposing the boundary of into pyramids with apex either or ; hence for some .it follows that is in .now suppose for all , induces a linear ordering triangulation of the -faces of .from the refinement property proposition [ prop : valid1 ] , we know that induces a decomposition of the pyramids ( respectively of the -fold pyramids ) . in both cases the resulting -simplices correspond to the coordinatewise - monotone paths from to in .suppose we are given which are homogeneous coordinates for the generators of some polyhedron .we consider here as a matrix ( with as rows ) , and let denote the column vector of corresponding coordinates in . 
following the conventions of section [ sec : basic_notations ] , we suppose is if is a vertex , and if it is an extreme ray .we consider the polyhedron the polyhedron may be thought of as intersected with the hyperplane . by duality , to find the generators of is equivalent to finding the facets of .let be the restricted automorphism group of .let be the induced group acting on .we will assume without loss of generality that the origin is the centroid of v and thus a fixed point of .it follows that is a fixed point of .applying proposition [ prop : fixedpoint ] to our dual representation , we see that orbitwise perturbing the right - hand side vector according will preserve the symmetries of .the perturbed system thus has the form where for and is the index of the orbit containing . to ensure an orbitwise lexicographic perturbation, we will insist where by we mean that is much smaller than , i.e. it is not possible to combinatorially change the polyhedron defined by by choosing smaller . to implement this symbolically , we need a modification of the standard lexicographic pivot rule ( see or for more details ) .let and $ ] .after adding _ slack variables _ to , we are left with a system of the form with rows and columns , where the first columns are the _ decision variables_. a _ feasible basis _ consists of a partition of the column indices such that ( columns of indexed by ) is non - singular and any slack variables in are non - negative . in order to move from one feasible basis to another , we need to perform a _pivot_. we start by choosing a column index to leave . to find a column index to replace , we need to find where and . in our case ,where , we may decompose , and thus the ratio test into two parts and . because the values of the are chosen very small ,the second part is considered only to break ties .we write where column of is defined by summing the columns of corresponding to orbit of generators , and multiplying by . because of the ordering , in order to evaluate we proceed column by column in , reducing the set of ties at each iteration . consider facets and that are equivalent under some symmetry of the basis automorphism group . this same symmetry acts as an isomorphism between the corresponding basis graphs .it follows that when we discover a basis defining a new orbit , but the facet spanned by is known , we do not need to explore the neighbours of since they will be explored in our canonical ( i.e. discovered first ) facet in the orbit of . in order to ensure orbitsare not discarded , we are careful not to mark as known until its canonical discovery .although this pruning does not reduce the number of orbits of bases explored , it can reduce the number of actual bases visited ( and tested for isomorphism ) , since bases of a given orbit are not revisited in every copy of the facet . 
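The two-part ratio test described above can be sketched as follows. Bases, slack columns, and the orbit perturbation matrix are assumed to be available as NumPy arrays; the code is a simplified illustration of resolving ties orbit column by orbit column, not a complete pivoting implementation, and it assumes the entering column yields a bounded pivot.

```python
import numpy as np

def lex_ratio_test(B_inv, b, E, s):
    """Choose the leaving row for entering column s.

    B_inv : inverse of the current basis matrix
    b     : original right-hand side
    E     : matrix whose q-th column sums the constraint rows of orbit q
            (the symbolic epsilon_q perturbation of the right-hand side)
    s     : entering column of the constraint matrix (as a vector)
    """
    d = B_inv @ s
    rows = [i for i in range(len(d)) if d[i] > 1e-12]      # eligible rows
    x0 = B_inv @ b
    ratios = x0[rows] / d[rows]
    ties = [r for r, val in zip(rows, ratios) if np.isclose(val, ratios.min())]
    # break remaining ties with the perturbation columns, one orbit at a time,
    # in the order eps_1 >> eps_2 >> ... (each eps much smaller than the last)
    Eb = B_inv @ E
    for q in range(E.shape[1]):
        if len(ties) == 1:
            break
        vals = [Eb[r, q] / d[r] for r in ties]
        ties = [r for r, v in zip(ties, vals) if np.isclose(v, min(vals))]
    return ties[0]
```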
as an example , consider the quadrilateral facets illustrated in figure [ fig : rotate ] , with rotational symmetry yielding orbits of bases .facets with a -fold rotational symmetry .basis has vertices around the boundary of the facet.,height=144 ] without pruning , a depth first search visits all of the bases ; with pruning only of bases are visited ( see figure [ fig : prune ] ) .the speedup obtained depends roughly on the number of facets visited by the unpruned search , which is bounded by the number of basis orbits .c & in the case where our symmetry group preserves the inner product between pairs of vectors , as is the case for the restricted automorphisms discussed in this paper , we may take advantage of this in several ways . for any face or basis to be tested for isomorphism , we may construct a graph ( analogous to that constructed in proposition [ prop : isomorphism - equivalence ] ) whose nodes are the vectors of and whose edges are the angles between them .this graph contains geometric information not present in the index sets representing , which can help to speed up an algorithm to find an isomorphism . a simpler observation , and equally widely applicable , is that the set of pairwise inner products of two isomorphic faces or bases must be equal .this allows us to store orbit representatives in a data structure such as a hash table or a balanced tree , with the key to the data structure being the set of inner products .this permits more efficient isometry testing by retrieving exactly those orbit representatives which pass the inner product invariant .it is computationally easy to test whether a given linear transformation is in the restricted automorphism group of vector family .since we are interested in restricted automorphisms carrying basis to basis y , we can additionally test if for some ( where can be computed by the same techniques as proposition [ prop : isomorphism - equivalence ] ) .we have only implemented an exhaustive search of , and this is naturally only effective when is quite small . in principleit should be possible to integrate the test for being a restricted isomorphism into a backtracking procedure to search for .much as in the case of polyhedral representation conversion without symmetries , a certain amount of trial and error seems to be necessary to decide on the the best method to attack a given conversion problem up to symmetries .currently decomposition methods have the best record of solving interesting problems ; on the other hand current software requires a certain amount of user intervention in the form of choosing how to treat subproblems .it would be helpful to automate this process . in this context, a virtue of the pivoting methods is that good methods to estimate their running time exist .it would be beneficial , not just when working with symmetry , to have effective methods ( or at least heuristics ) for estimating the running time of incremental methods .the authors would like to acknowledge fruitful discussions with david avis , antoine deza , komei fukuda , alexander hulpke , michael joswig , jesus de loera , brendan mckay , hugh thomas , and frank vallentin .they would also like to thank the centre de recherche mathmatiques for making possible the workshop at which many of these discussions took place .anzin , _ on the density of a lattice covering for and _ , tr . mat .steklova * 239 * ( 2002 ) , diskret .chisel , 2051 ; translation in proc .* 239 - 4 * ( 2002 ) , 1344 .b. ballinger , g. blekherman , h. cohn , n. 
giansiracusa, e. kelly and a. schürmann, _experimental study of energy-minimizing point configurations on spheres_, preprint at http://arxiv.org/abs/math.mg/0611451. m. dutour sikirić, a. schürmann and f. vallentin, _classification of eight dimensional perfect forms_, electron. res. announc. amer. math. soc., to appear, preprint at http://arxiv.org/abs/math.nt/0609388. g. voronoi, _nouvelles applications des paramètres continus à la théorie des formes quadratiques 1: sur quelques propriétés des formes quadratiques positives parfaites_, j. reine angew. math. *133* (1908), 97-178. g. voronoi, _nouvelles applications des paramètres continus à la théorie des formes quadratiques, deuxième mémoire, recherches sur les parallélloèdres primitifs_, j. reine angew. math. (1909) *134*, 198-287 and *136*, 67-181.
we give a short survey of computational techniques which can be used to solve the representation conversion problem for polyhedra up to symmetries. in particular, we discuss decomposition methods, which reduce the problem to a number of lower-dimensional subproblems. these methods have been used successfully by different authors in special contexts. moreover, we sketch an incremental method, which is a generalization of fourier-motzkin elimination, and we give some ideas on how symmetry can be exploited using pivots.
in game theory , a fundamental notion is _ nash equilibrium _ ( ne ) , which is a state that is _ stable _ against deviations of any individual participants ( known as agents ) of the game in the sense that any such deviation will not bring about additional benefit to the deviator .much stronger stability is exhibited by a _strong nash equilibrium _ ( sne ) , a notion introduced by aumann , at which no coalition of agents exists such that each member of the coalition can benefit from coordinated deviations by the members of the coalition .evidentally selfish individual agents stand to benefit from cooperation and hence snes are much more preferred to nes for stability .however , snes do not necessarily exist and , even if they do , they are much more difficult to identify and to compute .it is therefore very much desirable to have the advantages of both computational efficiency and strong stability , which motivates our study in this paper .we establish that , for general ne job assignments in load balancing games , which exist and are easy to compute , their loss of strong stability possessed by snes is at most 25% . in a load balancing game , there are selfish agents , each representing one of a set of jobs . in the absence of a coordinating authority ,each agent must choose one of identical servers , , to assign his job to in order to complete the job as soon as possible .all jobs assigned to the same server will finish at the same time , which is determined by the workload of the server , defined to be the total processing time of the jobs assigned to the server .let job have a processing time ( ) and let denote the set of jobs assigned to server ( ) . for convenience , we will use agent " and `` job '' interchangeably , and consider job processing times also as their `` lengths '' .the completion time of job is the _ workload _ of its server : .the notions of ne and sne can be stated more specifically for the load balancing game . a job assignment is said to be an ne if no individual job can reduce its completion time by unilaterally migrating from server to another server . a job assignment is said to be an sne if no subset of jobs can each reduce their job completion times by forming a coalition and making coordinated migrations from their own current servers. nes in the load balancing game have been widely studied ( see , e.g. , ) with the main focus of quantifying their loss of global optimality in terms of the price of anarchy , a term coined by koutsoupias and papadimitriou , as largely summarized in . in this paper , we study nes in load balancing games from a different perspective by quantifying their loss of strong stability .we focus on pure nes , those corresponding to deterministic job assignments in load balancing games .while high - quality nes are easily computed , identification of an sne is strongly np - hard .given any job assignment in a load balancing game , the _ improvement ratio _ ( ir ) of a deviation of a job is defined as the ratio between the pre- and post - deviation costs .an ne is said to be a -approximate sne ( ) ( which is called -se in ) if there is no coalition of jobs such that each job of the coalition will have an ir more than from coordinated deviations of the coalition .clearly , the stability of ne improves with a decreasing value of and a -approximate sne is in fact an sne itself . 
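Both the Nash property and the improvement ratio of a coalitional deviation are straightforward to check computationally. The sketch below follows the definitions just given: a job's cost is the workload of its server, an assignment is an NE if no job gains by a unilateral move, and the improvement ratio of a coalitional deviation is taken as the minimum pre/post cost ratio over the coalition members. The data structures are our own choices for illustration.

```python
def loads(assignment, p):
    """assignment: job -> server; p: job -> processing time."""
    L = {}
    for j, s in assignment.items():
        L[s] = L.get(s, 0) + p[j]
    return L

def is_nash(assignment, p, servers):
    """No job can finish earlier by unilaterally moving to another server."""
    L = loads(assignment, p)
    for j, s in assignment.items():
        for t in servers:
            if t != s and L.get(t, 0) + p[j] < L[s]:
                return False
    return True

def coalition_improvement_ratio(assignment, deviation, p):
    """deviation: job -> new server, for the jobs of the coalition only.
    Returns the minimum over coalition members of old cost / new cost."""
    new_assignment = {**assignment, **deviation}
    L_old, L_new = loads(assignment, p), loads(new_assignment, p)
    return min(L_old[assignment[j]] / L_new[s] for j, s in deviation.items())
```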
for the load balancing game of two servers, one can easily verify that every ne is also an sne .if there are three or four servers in the game , then it is proved in and , respectively , that any ne assignment is a -approximate sne , and the bound is tight .furthermore , it is a -approximate sne if the game has servers for .we establish in this paper that , in the -server load balancing game ( ) , any ne is a -approximate sne , which is tight and hence closes the final gap in the literature on the study of ne approximation of sne in load balancing games . to establish our approximation bound, we make a novel use of a powerful graph - theoretic tool .we start with an example to help the reader get some intuition of the problem under consideration .the example also provides a lower bound of for any ne assignment to approximate sne .the left panel of fig .[ fig : lower_bound ] below shows an ne assignment of six jobs to three identical machines with job completions , and , respectively , for the three pairs of jobs .if the four jobs of lengths and form a coalition and make a coordinated deviation as shown in the figure , then in the resulting assignment , each of the four jobs in the coalition achieves an improvement ratio of . + as a tool of our analysis , we start with the minimal deviation graph introduced by chen . for conveniencewe collect into this subsection some basic results on minimal deviation graphs from . given an ne job assignment , as an ne - based coalitional deviation or simply _ coalitional deviation _ , we refer to a collective action of a subset of jobs in which each job of migrates from its server in the assignment so that its completion time is decreased after the migration .accordingly , is called the corresponding _ coalition_. we introduce deviation graphs to characterize coalitional deviations . in a coalitional deviation, a server is said to be _ participating _ or _ involved _ if its job set changes after the deviation . given a coalitional deviation with the corresponding coalition , we define the corresponding ( directed ) deviation graph as follows : in what follows , without loss of generality we consider coalitional deviations with .given a coalitional deviation , we denote by the workload of server after deviation , and by ir( ) the minimum of the improvement ratios of all jobs taking part in . then we have the following definition and lemmas from : [ lem : out - degree ] the out - degree of any node of a deviation graph is at least 1 , and hence .[ lem : no - cycles ] if all servers are involved in a coalitional deviation , then the deviation graph does not contain a set of node - disjoint directed cycles such that each node of the graph is in one of the directed cycles .[ def : min - deviation ] let be a coalitional deviation and be the corresponding coalition .deviation graph is said to be _ minimal _ if for any coalitional deviation such that the corresponding coalition is a proper subset of .[ lem : in - degree ] the in - degree of any node of a minimal deviation graph is at least 1 .[ lem : strong - connectivity ] a minimal deviation graph is strongly connected . 
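for intuition on coalitional deviations and improvement ratios, the following sketch (with a small instance of our own; the paper's lower-bound instance is the one shown in fig. [fig:lower_bound]) computes the ir of every deviating job under a proposed coordinated migration.

```python
# illustrative only: a pure ne with server loads 6, 6, 10 in which a coalition
# of four jobs (the two jobs of length 5, plus one job of length 2 from each
# of the other two servers) all strictly benefit from a coordinated migration.
def completion_times(assign, lengths, m):
    load = [0.0] * m
    for j, s in enumerate(assign):
        load[s] += lengths[j]
    return [load[assign[j]] for j in range(len(lengths))]

def improvement_ratios(old, new, lengths, m):
    pre, post = completion_times(old, lengths, m), completion_times(new, lengths, m)
    return {j: pre[j] / post[j] for j in range(len(lengths)) if old[j] != new[j]}

lengths = [2, 2, 2, 2, 2, 2, 5, 5]
ne  = [0, 0, 0, 1, 1, 1, 2, 2]     # loads 6, 6, 10 (a pure ne for these lengths)
dev = [2, 0, 0, 2, 1, 1, 0, 1]     # jobs 0 and 3 move to server 2; jobs 6 and 7 move out
irs = improvement_ratios(ne, dev, lengths, 3)
print(irs, "min ir:", min(irs.values()))   # irs 3/2, 3/2, 10/9, 10/9 -> min 10/9
```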
in our study of bounding ne approximation of sne, we can apparently focus on those coalitional deviations that correspond to minimal deviation graphs .we start with several observations on any ne - based coalitional deviation involving servers for .let denote the corresponding minimal deviation graph .if two jobs assigned to server in the ne assignment migrate to server ( ) together , or both stay on the server , then we can treat them as one single job without loss of generality in our study of the minimal deviation graph . with this understanding ,if we let ( ) denote the number of jobs assigned to server in the ne assignment , then the following is immediate .[ obs : out - degrees ] for any , we have . or . as a result of the above observation ,the node set can be partitioned into two , and , as follows : by applying a data scaling if necessary , we assume without loss of generality that [ obs : bound - of - li ] for any , we have .suppose to the contrary that , which implies that .let denote the length of the shortest job assigned to server in the ne assignment .we have , which leads to , that is , , which implies that the shortest job assigned to server in the ne assignment can have the benefit of reducing its job completion time by unilaterally migrating to the server of which the workload is 1 , contradicting the ne property .the following observation states that , if all jobs on a server participate in the migration , then none of the servers they migrate to will have _ all _ its jobs migrate out .[ obs : out - degree - relation ] if and , then .suppose to the contrary that . according to observation [ obs : out - degrees ], we have , which implies that all the jobs assigned to server and server in the ne assignment belong to coalition . since , there is a job that migrates from server to server .consider the new coalition formed by all members of except .then we have .let be such a coalitional deviation of that is the same as except without the involvement of and the job(s ) that migrate(s ) to ( resp . ) in will migrate to ( resp . ) in .then we have , contradicting the minimality of the deviation graph according to definition [ def : min - deviation ] .the following observation is a direct consequence of observation [ obs : out - degree - relation ] : [ obs : equiv - min - deviation ] assume . hence according to observation [ obs : out - degree - relation ] .let be the same as except that any job that migrates to ( resp . ) in will migrate to ( resp . ) in .then , and is also minimal .to help our analysis , we will introduce in this section a special arc set in the minimal deviation graph . for any node , denote and . for notational convenience , for any node set , we denote and . with by above , we similarly define , , and .let us define as follows . according to lemma [ lem : in - degree ] , for any .for each we pick up an arc from the non - empty set to form an -element subset .then possesses the following property : denote for any .then it is clear that if node set is a singleton , then we will also use to denote the singleton if no confusion can arise .hence , due to ( [ eqn : in - degree ] ) we will also use to denote the single _ element _ of the corresponding set .any arc set that possesses property ( [ eqn : in - degree ] ) is said to be _ tilde - valid_. 
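the deviation graph can be materialized directly from a pair of assignments; the sketch below is our own bookkeeping, under the assumption (not spelled out explicitly above) that an arc points from the server a coalition job leaves to the server it joins, and it also reports the in- and out-degrees referred to in the observations above.

```python
# build the (directed) deviation graph of a coalitional deviation: nodes are
# the servers whose job sets change and -- under our orientation assumption --
# there is an arc (u, v) whenever some coalition job migrates from u to v.
def deviation_graph(old, new):
    arcs = {(old[j], new[j]) for j in range(len(old)) if old[j] != new[j]}
    nodes = {u for u, _ in arcs} | {v for _, v in arcs}
    out_deg = {u: sum(1 for a in arcs if a[0] == u) for u in nodes}
    in_deg  = {u: sum(1 for a in arcs if a[1] == u) for u in nodes}
    return nodes, arcs, in_deg, out_deg

old = [0, 0, 0, 1, 1, 1, 2, 2]
new = [2, 0, 0, 2, 1, 1, 0, 1]
nodes, arcs, in_deg, out_deg = deviation_graph(old, new)
print(sorted(arcs))          # [(0, 2), (1, 2), (2, 0), (2, 1)]
print(in_deg, out_deg)       # every involved server has in- and out-degree >= 1
```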
our main result is stated in the following theorem .[ thm : main ] for any minimal deviation graph involving servers , its improvement ratio .let us perform some initial investigation to see what we need to do to prove the theorem .recall that , for any , is the number of jobs assigned to server in the ne assignment and for a fixed arc set defined in section [ sec : auxiliary_arc_set ] for the minimal deviation graph . fora pair of integers and with and , let .then it is clear that denote for all possible pairs and : and .let . then according to ( [ eqn :in - degree ] ) and ( [ eqn : def - of - mab ] ) , we have according to the definition of ir , we have for .summing up these inequalities over all arcs in leads to which implies that according to observation [ obs : bound - of - li ] , we have , which implies that the right - hand side of ( [ eqn : bound - of - r ] ) , which we denote by , is at most 2 , since is a convex combination of and with the corresponding combination coefficients ( ) and due to ( [ eq : scalling ] ) and ( [ eqn : out - degree ] ) . on the other hand , since according to the definition , we conclude that , which implies that is a decreasing function of for which or , and an increasing function of for which . therefore , we increase by increasing to for such that , and by decreasing to for such that or . noticing that according to observation [ obs : out - degrees ], we obtain which together with ( [ eqn : equations - for - dab ] ) implies that in order to prove theorem [ thm : main ] , we need to show .then it suffices to show which is equivalent to by replacing the right - hand side of the above inequality with the left - hand side of the second equality in ( [ eqn : equations - for - dab ] ) , we have that is in what follows , we are to prove ( [ eqn : ultimate ] ) and thereby theorem [ thm : main ] through a series of lower bounds established in section [ sec : bounding ] on different terms of the right - hand side of inequality ( [ eqn : ultimate ] ) .we introduce an auxiliary node set in addition to the auxiliary arc set introduced earlier .let then we immediately have [ lem : w0 ] .suppose to the contrary that .then any in ( [ eqn : out - degree ] ) , which implies that for any , so that forms some node - disjoint directed cycles that span all nodes , contradicting lemma [ lem : no - cycles ] . note that , from the formation of arc set , it is clear that as a tilde - valid arc set may not be unique . however , among all possible choices of a tilde - valid arc set , we choose one that has some additional properties in terms of minimum cardinalities of some combinatorial structures , which we shall define in due course .these additional properties will be presented in a sequence of three assumptions , which are made without loss of generality due to the finiteness of the total number of tilde - valid arc sets . with the same reason, we assume that our coalitional deviation is chosen in such a way that it has a certain property ( see assumption [ ass : delta ] ) .[ ass : mimimum - w0 ] arc set is tilde - valid and it minimizes . let . then according to lemmas [ lem : w0 ] and [ lem : out - degree ] .a node is said to be _ associated _ with if it is _ linked _ to an element of through a sequence of arcs ( but not a directed path ) in and in alternation ( see fig . 
[fig : w_1 ] for an illustration ) .more formally , is associated with if and only if , for some integer , there are nodes with and , such that nb : the solid arcs belong to and dotted arcs to + , title="fig : " ] note that in the above definition , if is associated with , then , , used in ( [ eqn : def - of - w1 ] ) are each associated with .define immediately we have , which implies that on the other hand , since according to ( [ eqn : in - degree ] ) , we have . [ lem : w1 ] for any , .furthermore , .it is clear from the definition that .hence for any .assume for contradiction that for some .since is associated with , in addition to nodes satisfying ( [ eqn : def - of - w1 ] ) , we have a node ( hence ) such that according to the definition of .now we remove arcs from and add new arcs to .it is easy to see that the new set still has property ( [ eqn : in - degree ] ) .additionally , under the new , all remain the same except two of them : and , with the former increased by 1 and the latter decreased by 1 . since under the original , then under the new .consequently , the new determined by the new contains a smaller number of elements , contradicting assumption [ ass : mimimum - w0 ] about the original . to prove the second part of the lemma ,let us first prove .let and .we show that .in fact , since according to ( [ eqn : in - degree ] ) , we have a node such that .now since is associated with , we conclude that is also associated with , which implies that .therefore , with ( [ eqn : tilde - w0-versus - tilde - w1 ] ) we have proved that .the other direction of the inclusion is apparent .it follows from lemma [ lem : w1 ] and ( [ eqn : in - degree ] ) that the mapping from onto is a one - to - one correspondence and hence let as we can see from inequality ( [ eqn : ultimate ] ) , bounding the sizes and of the respective sets and is vital in our establishment of the desired bound .we therefore take a close look at the two sets by partitioning into a number of subsets , so that different bounding arguments can be applied to different subsets .we assemble our notations here in one place for easy reference and the reader is advised to conceptualize each _ only _ when it is needed in an analysis at a later point .let . for convenience ,we reserve letter to exclusively index elements of and let with the understanding that it is always the case that . for any , implies since . on the other hand , since arc , we have according to observation [ obs : out - degree - relation ] , which implies that must belong to one of the following three mutually disjoint node sets : therefore , if we define then we have and for any .in other words , for any element , the two - element set has exactly one element in .now let clearly , ( ) and .to bound from below the right - hand side of the key inequality ( [ eqn : ultimate ] ) , or equivalently , to bound from above the left - hand side of ( [ eqn : ultimate ] ) , we establish through a series of five lemmas and a corollary that the number of nodes in is at most , where and are defined below for with mutually disjoint node sets .we divide our proofs into two parts with the second part on bounding .let us start with some straightforward upper bounds .since for any according to the definition of , we immediately have the following lemma thanks to observation [ obs : out - degree - relation ] . [lem : x2 ] let . then and . note that for any and ( ) due to ( [ eqn : in - degree ] ) , which lead to the following lemma .. then and . 
the following lemma follows directly from the definition of : let . then and .for any , and , unless . at this point , we introduce our second additional assumption about without loss of generality .[ ass : m-2 - 2 ] arc set is such that it first satisfies assumption [ ass : mimimum - w0 ] and then minimizes . for any ,since according to the definition of , there is . then ( otherwise we would have according to lemma [ lem : w1 ] ) .in fact , node has the following property : to see this , consider replacing with in to form a new tilde - valid arc set .it is easy to see that satisfies assumption [ ass : mimimum - w0 ] .however , with the new arc set , is no longer a node in the new , which implies that has to become a node in in order not to contradict assumption [ ass : m-2 - 2 ] with the original choice of , which in turn implies properties ( [ eqn:2nd - special - node - property ] ) .furthermore , since and , there is no such that , which implies that .consequently , we have the following lemma . [lem : x5 ] let . then and . now let us establish an upper bound on in the following lemma with the minimality of our deviation graph .[ lem : x1 ] let . then .suppose to the contrary that , that is , according to ( [ eqn : cadinality - of - w1 ] ) .let be a proper subset of elements .define where ( see fig . [fig : lemma_proof ] for an illustration with explanations to follow ) ., title="fig : " ] + then since .we claim is a proper subset of .to see this , let .since ( lemma [ lem : w1 ] ) , we have . observation [ obs : out - degree - relation ] implies .therefore , we have , i.e. , , but . with the same arguments we note that the three constituent subsets of are mutually disjoint . in fig .[ fig : lemma_proof ] , the set is a subset of according to the definition of and the mapping between and is a one - to - one correspondence due to equation ( [ eqn : bipartition ] ) .since , we can assume there is a one - to - one correspondence between the nodes ( i.e. , servers ) of the two sets and .now let us define a new coalitional deviation with , which is the same as restricted on except that , if migrates in to a server of , then let migrate in to the corresponding ( under ) server of .we show that the improvement ratio of any job deviation in is at least the same as that in , which then implies that , contradicting the minimality of according to definition [ def : min - deviation ] . to this end, we only need to show that the new coalitional deviation takes place among the servers assigned with jobs of the coalition , that is , that benefit of any job deviation will not decrease due to the fact that _ all _ jobs on servers of migrate out in and hence in as well , leaving empty space for deviational jobs under , which originally migrate to servers of under .first we have according to lemma [ lem : w1 ] . on the other hand, it can be easily verified that according to the definition of .now we show , which then implies ( [ eqn : grand - inclusion ] ) .in fact , for any , noticing that , according to the definitions of and , we have , which implies that according to the definition of .to prove our final upper bound , we need to introduce the following two structures in graph with tilde - valid arc set : note that each element in represents a directed -cycles of both arcs in and each element in is a directed 2-path of both arcs in . 
in both cases of and ,the interior node has an in - degree and all its out - arcs are in .our next result is based on the following further refinement of the tilde - valid arc set .[ lem : omega ] if for some arc set satisfying assumption [ ass : m-2 - 2 ] , then there exists an arc set such that , while it also satisfies assumption [ ass : m-2 - 2 ] , additionally , is a proper subset of .assume and let be as in the definition of . then there must be a node with , since otherwise , which implies that there would be no directed path from any other nodes in to nodes or , contradicting lemma [ lem : strong - connectivity ] .therefore , the following set is not empty : let .we define a new tilde - valid arc set it is easily seen that and . on the other hand , still satisfies assumption [ ass : mimimum - w0 ] due to , and hence also satisfies assumption [ ass : m-2 - 2 ] since ( which implies that ) according to observation [ obs : out - degree - relation ] ( as no other node not in can possibly become a member of ) . as a result of lemma [ lem : omega ], we can further refine our initial choice of so that it satisfies the following assumption , where the benefit of minimizing will be seen in the proof of lemma [ lem : company ] ( see inequality ( [ eqn : pi ] ) ) .[ ass : omega - pi ] arc set is such that it first satisfies assumption [ ass : m-2 - 2 ] and then lexicographically minimizes .[ cor : omega ] any arc set satisfying assumption [ ass : omega - pi ] must satisfy . an arc set in graph that satisfies assumption [ ass : omega - pi ] is said to be _ derived _ from . without loss of generality ,our coalitional deviation is considered to have been chosen so that it satisfies the following assumption .[ ass : delta ] coalitional deviation defining minimal deviation graph is such that the arc set derived from gives lexicographical minimum let minimal deviation graph with satisfying assumption [ ass : delta ] be given . for any , there is , such that ( note ( [ eqn : bipartition ] ) ) and . given and .since according to the definition of , we have and hence since according to observation [ obs : out - degree - relation ] . since and ( again according to the definition of ) ,we let .then since otherwise we would have , contracting corollary [ cor : omega ] with our assumption [ ass : omega - pi ] ( see fig . [fig : company ] for an illustration with more explanations to follow ) . of or new coalitional deviation ,title="fig : " ] + we claim and hence are done .let us assume for a contradiction that .note that with replacing in the definition of , we conclude that .now let us define a new coalitional deviation so that its derived arc set gives a .in fact , let be defined as in observation [ obs : equiv - min - deviation ] after node has been replaced by in the statement of observation [ obs : equiv - min - deviation ] .denote as the arc set of the resulting minimal deviation graph .let be the natural result of after the re - orientation from and , i.e. , an arc in pointing to ( resp . ) will become an arc in pointing to ( resp . ) .other arcs are the same for and .apparently , has increased from , then clearly it must be the result of and/or becoming element(s ) of . 
in any such case ( say , the former case for the sake of argument ) , based on the definition of , we can use the approach in lemma [ lem : omega ] to find as defined in ( [ eqn : h ] ) and perform an arc - swap as in ( [ eqn : one - arc - swap ] ) with and replaced by and , respectively , to reduce while maintaining the values of and . for convenience, we still use to denote the tilde - valid arc set after such arc - swap(s ) if needed .consequently , we have however , we claim a desired contradiction . to see inequality ( [ eqn : pi ] ) , we first note that ( i ) any 2-path in starting at is also a 2-path in , and vice versa , and ( ii ) any 2-path in ( resp . ) starting at ( resp . ) must have the first arc ( resp . , since due to according to ( [ eqn : bipartition ] ) ) . on the other hand ,the following can be easily observed : 1 . if ( ) , then , and vice versa .2 . if ( ) , then , and vice versa . , since would imply by definition of and hence by definition of , which in turn implies that since under .consequently , we obtain , contradicting corollary [ cor : omega ] .4 . with similar reasons for , we have .therefore , overall contains at least one element less than as indicated in points 3 and 4 above .we call identified in the above lemma a _ company _ of .clearly , any can not be a company of two different elements of according to the statement of the lemma , which leads us to the following corollary .[ cor : x6 ] denote := \{j\in m\backslash x:\ \textrm{node is a company of }\} ] .then and . we have used the cardinalities of the six sets to bound , , , , and , respectively .let us make sure these sets do not overlap with and are mutually disjoint . according to lemmas [ lem : x2][lem : x5 ] and corollary [ cor : x6 ] , we have and hence and ( ) . since for any ( definition of ) and for any ( observation [ obs : out - degree - relation ] ) , noticing that ( lemma [ lem : w1 ] ) , we conclude that we are ready to go back to proving ( [ eqn : ultimate ] ) and hence theorem [ thm : main ]. since , the left - hand side of inequality ( [ eqn : ultimate ] ) is at most on the other hand , if we let which imply then noticing the properties ( [ eqn : disjoint - y ] ) and that we see that the right - hand side of inequality ( [ eqn : ultimate ] ) is at least according to lemmas [ lem : x2][lem : x1 ] and corollary [ cor : x6 ] , the right - hand side of inequality ( [ eqn : lhs ] ) is at most that of ( [ eqn : middle ] ) , which in turn ultimately leads to inequality ( [ eqn : ultimate ] ) .consequently , theorem [ thm : main ] is established . from theorem [ thm : main ] and the lower bound demonstrated in section [ sec: lower_bound ] , the following theorem follows . in the m - server load balancing game ( ) , any ne is a -approximate sne and the bound is tight .by establishing a tight bound of for the approximation of general nes to snes in the -server load balancing game for , we have closed the final gap for the study of approximation of general nes to snes . however , as demonstrated by feldman & tamir and by chen , a special subset of nes known as lpt assignments , which can be easily identified as nes , do approximate snes better than general nes .it is still a challenge to provide a tight approximation bound for this subset of nes .research by the second and third author was partially supported by the national natural science foundation of china ( grant no .11071142 ) and the natural science foundation of shandong province , china ( grant no .zr2010am034 ) .fotakis d. 
, kontogiannis s., mavronicolas m., and spirakis p., the structure and complexity of nash equilibria for a selfish routing game. _proc. of the 29th international colloquium on automata, languages and programming_, 2002, 510-519.
we study the strong stability of nash equilibria in load balancing games with ( ) identical servers, in which every job chooses one of the servers and wishes to minimize its cost, given by the workload of the server it chooses. a nash equilibrium (ne) is a strategy profile that is resilient to unilateral deviations. finding an ne in such a game is simple. however, an ne assignment is not stable against coordinated deviations of several jobs, while a strong nash equilibrium (sne) is. we study how well an ne approximates an sne. given any job assignment in a load balancing game, the improvement ratio (ir) of a deviation of a job is defined as the ratio between the pre- and post-deviation costs. an ne is said to be a -approximate sne ( ) if there is no coalition of jobs such that each job of the coalition will have an ir more than from coordinated deviations of the coalition. while it is already known that nes coincide with snes in the two-server load balancing game, we prove that, in the -server load balancing game for any given , any ne is a -approximate sne, which together with the lower bound already established in the literature yields a tight approximation bound. this closes the final gap in the literature on the approximation of general nes to snes in load balancing games. to establish our upper bound, we make a novel use of a graph-theoretic tool. *keywords:* load balancing game, nash equilibrium, strong nash equilibrium, approximate strong nash equilibrium
we consider programs containing high security inputs and low security outputs . informally , the quantitative information flow problem concerns the amount of information that an attacker can learn about the high security input by executing the program and observing the low security output .the problem is motivated by applications in information security .we refer to the classic by denning for an overview .in essence , quantitative information flow measures _ how _ secure , or insecure , a program is .thus , unlike non - interference , that only tells whether a program is completely secure or not completely secure , a definition of quantitative information flow must be able to distinguish two programs that are both interferent but have different degrees of `` secureness . ''for example , consider the following two programs : in both programs , is a high security input and is a low security output .viewing as a password , is a prototypical login program that checks if the guess matches the password .is a program constant .see section [ sec : prelim ] for modeling attacker / user ( i.e. , low security ) inputs . ] by executing , an attacker only learns whether is equal to , whereas she would be able to learn the entire content of by executing .hence , a reasonable definition of quantitative information flow should assign a higher quantity to than to , whereas non - interference would merely say that and are both interferent , assuming that there are more than one possible value of .researchers have attempted to formalize the definition of quantitative information flow by appealing to information theory .this has resulted in definitions based on the shannon entropy , the min entropy , the guessing entropy , and channel capacity .much of the previous research has focused on information theoretic properties of the definitions and approximate ( i.e. , incomplete and/or unsound ) algorithms for checking and inferring quantitative information flow according to such definitions . in this paper, we give a verification theoretic and complexity theoretic analysis of quantitative information flow and investigate precise methods for checking quantitative information flow . in particular , we study the following _ comparison problem _ : given two programs and , decide if . here denotes the information flow quantity of the program according to the quantitative information flow definition where is either ] ( min - entropy based with distribution ) , ] , ] with uniform . *checking if is not a -safety property for any .* restricted to loop - free boolean programs , checking if is # p - hard .the results are in stark contrast to non - interference which is known to be a -safety property in general ( technically , for the termination - insensitive case ) and can be shown to be conp - complete for loop - free boolean programs ( proved in section [ sec : complex ] ) . ( # p is known to be as hard as the entire polynomial hierarchy . )the results suggest that precisely inferring ( i.e. , computing ) quantitative information flow according to these definitions would be harder than checking non - interference and may require a very different approach ( i.e. , not self composition ) .we also give the following positive results which show checking if the quantitative information flow of one program is larger than the other for all distributions according to the entropy - based definitions is easier .below , is , , or . 
*checking if (m_1 ) \leq\mathcal{y}[\mu](m_2) ] is conp - complete .these results are proven by showing that the problems (m_1 ) \leq { \it se}[\mu](m_2) ] , and (m_1 ) \leq { \it ge}[\mu](m_2) ] , which is the average of the information content , and intuitively , denotes the uncertainty of the random variable .let be a random variable with sample space and be a probability distribution associated with ( we write explicitly for clarity ) .the shannon entropy of is defined as (x)=\sum_{x\in\mathbb{x } } \mu(x = x)\log\frac{1}{\mu(x = x)}\ ] ] ( the logarithm is in base 2 . ) next , we define _conditional entropy_. informally , the conditional entropy of given denotes the uncertainty of after knowing .let and be random variables with sample spaces and , respectively , and be a probability distribution associated with and .then , the conditional entropy of given , written (x|y) ] denotes the initial uncertainty knowing the low security input and (h|o , l) ] , but (m_2 ) \not\leq { \it se}[u](m_1) ] .the above theorem is complementary to the one proven by clark et al . which states that for any such that for all and , (m)=0 ] represents the highest probability that an attacker guesses in a single try .we now define the min - entropy - based definition of quantitative information flow .[ def : me ] let be a program with high security input , low security input , and low security output .let be a distribution over and .then , the min - entropy - based quantitative information flow is defined (m)=\mathcal{h}_\infty[\mu](h|l)-\mathcal{h}_\infty[\mu](h|o , l)\ ] ] whereas smith focused on programs lacking low security inputs , we extend the definition to programs with low security inputs in the definition above .it is easy to see that our definition coincides with smith s for programs without low security inputs .also , the extension is arguably natural in the sense that we simply take the conditional entropy with respect to the distribution over the low security inputs .computing the min - entropy based quantitative information flow for our running example programs and from section [ sec : introduction ] with the uniform distribution , we obtain , (m_1)&=\mathcal{h}_\infty[u](h)-\mathcal{h}_\infty[u](h|o)\\ & = \log 4-\log 2\\ & = 1\\&\\ { \it me}[u](m_2)&=\mathcal{h}_\infty[u](h)-\mathcal{h}_\infty[u](h|o)\\ & = \log 4 -\log 1\\ & = 2 \end{array}\ ] ] again , we have that (m_1 ) \leq { \it me}[u](m_2) ] , and so is deemed less secure than .the third definition of quantitative information flow treated in this paper is the one based on the guessing entropy , that is also recently proposed in literature .let and be random variables , and be an associated probability distribution .then , the guessing entropy of is defined (x)=\sum_{1\le i\le m}i\times\mu(x = x_i)\ ] ] where and .the conditional guessing entropy of given is defined (x|y)=\sum_{y\in{\mathbb y}}\mu(y = y)\sum_{1\le i\le m}i\times\mu(x = x_i|y = y)\ ] ] where and .intuitively , (x) ] and (m_2 ) \not\leq { \it ge}[u](m_1) ] : given programs and having the same input domain , decide if (m_1 ) \leq { \it se}[\mu](m_2) ] and (m_2) ] , defined to be the problem (m_1 ) \leq { \it me}[\mu](m_2) ] , defined to be the problem (m_1 ) \leq { \it ge}[\mu](m_2) ] , we require the two programs to share the same input domain for these problems . 
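to make the three measures concrete, the following sketch (our own, assuming a uniform 2-bit secret and no low-security input, so that the quantities reduce to entropies of the output and of the secret given the output) reproduces the kind of computation carried out above for the password-checking program and the copying program.

```python
# shannon-, min- and guessing-entropy based leakage of a deterministic program
# M : H -> O under the uniform prior; with no low input these reduce to
#   SE = H(O),   ME = log2 |M(H)|,   GE = G(H) - G(H|O).
from math import log2
from collections import Counter

def leakages(M, highs):
    n = len(highs)
    out = Counter(M(h) for h in highs)
    se = sum((c / n) * log2(n / c) for c in out.values())          # H(O)
    me = log2(len(out))                                            # min-entropy leakage
    g_prior = sum(i * (1 / n) for i in range(1, n + 1))            # G(H) = (n + 1) / 2
    g_post = sum((c / n) * sum(i * (1 / c) for i in range(1, c + 1)) for c in out.values())
    return se, me, g_prior - g_post

highs = range(4)                        # a 2-bit secret
m1 = lambda h: h == 2                   # password check against a fixed guess
m2 = lambda h: h                        # copies the secret to the output
print(leakages(m1, highs))              # (~0.811, 1.0, 0.75)
print(leakages(m2, highs))              # (2.0, 2.0, 1.5)
```

as expected, all three measures rank the copying program as leaking more than the password check.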
we show that none of these comparison problems are -safety problems for any .informally , a program property is said to be a _property if it can be refuted by observing number of ( finite ) execution traces .a -safety problem is the problem of checking a -safety property .note that the standard safety property is a -safety property .an important property of a -safety problem is that it can be reduced to a standard safety ( i.e. , -safety ) problem , such as the unreachability problem , via a simple program transformation called _ self composition _ .it is well - known that non - interference is a -safety property ,- safety property . ] and this has enabled its precise checking via a reduction to a safety problem via self composition and piggybacking on advances in automated safety verification methods .unfortunately , the results in this section imply that quantitative information flow inference problem is unlikely to receive the same benefits . because we are concerned with properties about pairs of programs ( i.e. , comparison problems ) ,we extend the notion of -safety to properties refutable by observing traces from each of the two programs .more formally , we say that the comparison problem is a -safety property if implies that there exists \hspace*{-1.2pt}]} ] such that * * * \hspace*{-1.2pt } ] } \wedge t_2 \subseteq { [ \hspace*{-1.2pt}[m_2']\hspace*{-1.2pt } ] } \rightarrow ( m_1 ' , m_2 ' ) \not\in c ] denotes the semantics ( i.e. , traces ) of , represented by the set of input / output pairs .we now state the main results of the section .( recall that denotes the uniform distribution . )we sketch the main idea of the proofs .all proofs are by contradiction .let be the comparison problem in the statement and suppose is -safety .let .then , we have \hspace*{-1.2pt}]} ] satisfying the properties ( 1 ) , ( 2 ) , and ( 3 ) above . from this , we construct and such that \hspace*{-1.2pt}]} ] and to obtain the contradiction . ] is not a -safety property for any .[ thm : meks ] ] is a -safety property .let and be programs having the same input domain such that ] and \hspace*{-1.2pt}]} ] .let where .now , we construct new programs and as follows . where * , * , * , , , , and are distinct , * , * , and * . then ,comparing the shannon - entropy - based quantitative information flow of and , we have , (\bar{m'})-{\it se}[u](\bar{m})\\ \hspace{2em}=\sum_{o_x'\in{\{{o_1',\dots , o_i'}\}}}u(o_x')\log\frac{1}{u(o_x')}\\ \hspace{3em}+u(o')\log\frac{1}{u(o')}+u(o_r')\log\frac{1}{u(o_r')}\\ \hspace{4em}-(\sum_{o_x\in{\{{o_1,\dots , o_j}\}}}u(o_x)\log\frac{1}{u(o_x)}\\ \hspace{5em}+\sum_{o_y\in{\{{o_{j+1},\dots , o_{j+i}}\}}}u(o_y)\log\frac{1}{u(o_y)}\\ \hspace{6em}+u(o_r)\log\frac{1}{u(o_r)})\\ \end{array}\ ] ] ( note the abbreviations from appendix [ sec : lemdefs ] . ) by lemma [ lem : a7 ] , we have and trivially , we have as a result , we have (\bar{m'})-{\it se}[u](\bar{m})\ge 0\ ] ] note that and have the same counterexamples and , that is , \hspace*{-1.2pt}]} ] . however , we have ] [ thm : mecomp ] } ] [ thm : cccomp ] we remind that the above results apply ( even ) when the comparison problems ] , ] , ] , and can be used a polynomial number of times to solve a # p - complete problem . 
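as a reminder of what the self-composition reduction buys for the 2-safety property of non-interference (and what, by the theorems above, it cannot buy for the quantitative comparison problems), here is a brute-force rendering of the composed check on a toy program; the names are ours.

```python
# non-interference as a 2-safety check: run two copies of M on the same low
# input and arbitrary pairs of high inputs and require equal outputs.  the
# enumeration below plays the role of the reachability check one would obtain
# from the self-composed program  M(h, l); M(h', l); assert(o == o').
from itertools import product

def non_interferent(M, highs, lows):
    return all(M(h1, l) == M(h2, l)
               for h1, h2, l in product(highs, highs, lows))

leaky  = lambda h, l: h & l            # reveals the bits of h selected by l
benign = lambda h, l: l ^ 1            # ignores h entirely
bits = range(4)
print(non_interferent(leaky, bits, bits))    # False: (h1, h2, l) = (0, 1, 1) is a counterexample
print(non_interferent(benign, bits, bits))   # True
```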
because toda s theorem implies that the entire polynomial hierarchy can be solved by using a # p - complete oracle a polynomial number of times ,our results show that the comparison problems for quantitative information flow can also be used a polynomial number of times to solve the entire polynomial hierarchy , for the case of loop - free boolean programs .as shown below , this presents a gap from non - interference , which is only conp - complete for loop - free boolean programs .[ thm : nicomp ] checking non - interference is conp - complete for loop - free boolean programs .the above is an instance of the general observation that , by solving quantitative information flow problems , one is able to solve the class of problems known as _ counting problems _ , which coincides with # sat for the case of loop - free boolean programs .we discuss the details of the proof of theorem [ thm : secomp ] .the proofs of theorems [ thm : mecomp ] , [ thm : gecomp ] , [ thm : cccomp ] are deferred to appendix [ sec : proofs ] .first , we prove the following lemma which states that we can compare the number of solutions to boolean formulas by computing ] where and .let and .we have (m_j ) & = \frac{j}{2^{|h|+1}}\log\frac{2^{|h|+1}}{j } + \frac{2^{|h|+1}-j}{2^{|h|+1}}\log\frac{2^{|h|+1}}{2^{|h|+1}-j}\\ \hspace{5.3em } & = p\log p + ( 1-p)\log \frac{1}{1-p}\\ { \it se}[u](m_i ) & = \frac{i}{2^{|h|+1}}\log\frac{2^{|h|+1}}{i } + \frac{2^{|h|+1}-i}{2^{|h|+1}}\log\frac{2^{|h|+1}}{2^{|h|+1}-i}\\ \hspace{5.3em } & = q\log q + ( 1-q)\log \frac{1}{1-q } \end{array}\ ] ] * * only if * + suppose . then, (m_i)-{\it se}[u](m_j ) \\ \hspace{3em } = p\log\frac{1}{p }+ ( 1-p)\log\frac{1}{1-p}\\ \hspace{4em } - q\log\frac{1}{q } - ( 1-q)\log\frac{1}{1-q}\\ \hspace{3em } = \log(\frac{1-p}{p})^p \frac{1-q}{1-p } ( \frac{q}{1-q})^q \end{array}\ ] ] then , from and , we have (m_i)-{\it se}[u](m_j ) & \geq \log(\frac{1-p}{p})^p ( \frac{q}{1-q})^q\\ & \ge\log(\frac{1-p}{p})^q ( \frac{q}{1-q})^q\\ & = \log(\frac{(1-p)q}{p(1-q)})^q\\ & = \log(\frac{q - pq}{p - pq})^q\\ & = \log(\frac{pq - q}{pq - p})^q\\ & = \log(\frac{1-\frac{1}{p}}{1-\frac{1}{q}})^q\\ & \geq 0 \end{array}\ ] ] the last line follows from . * * if * + we prove the contraposition .suppose . then , (m_j ) - { \itse}[u](m_i ) \\ \qquad = q\log\frac{1}{q } + ( 1-q)\log\frac{1}{1-q } \\ \qquad\qquad- p\log\frac{1}{p } - ( 1-p)\log\frac{1}{1-p } \\ \qquad > 0 \end{array}\ ] ]the last line follows from the fact that .therefore , (m_j ) \not\leq { \it se}[u](m_i) ] at most times .first , we define a procedure that returns the number of solutions of .let where is a formula over having assignments and be a boolean variable such that .note that , by lemma [ lem : a2 ] , such can be generated in linear time .then , we invoke the following procedure where .(f(n),m ' ) \vee \neg c_{\it se}[u](m',f(n))\\ \qquad{\sf if}\;c_{\it se}[u](f(n),m')\\ \qquad\qquad{\sf then}\;\{\ell = n;n=(\ell+r)/2;\}\\ \qquad\qquad{\sf else}\;\{r = n;n=(\ell+r)/2;\}\\ { \sf return}\;n \end{array}\ ] ] note that when the procedure terminates , we have (f(n ) ) = { \it se}[u](m') ] is accessed times , and this proves the lemma .finally , theorem [ thm : secomp ] follows from lemma [ lem : a4 ] and the fact that # sat , the problem of counting the number of solutions to a boolean formula , is # p - complete .as proved in section [ sec : qifnksafe ] , precisely computing quantitative information flow is quite difficult . 
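the binary-search argument sketched above can be replayed in a few lines; in the toy version below (our own, with the quantitative-flow comparison oracle replaced by a plain model-count comparison, which is what the constructed programs amount to) the number of satisfying assignments of a boolean function is recovered with a number of oracle queries linear in the number of variables.

```python
# recover #f, the number of models of f over n variables, using only an oracle
# that answers "does f have at least as many models as g?".  b(k, n) plays the
# role of the lemma's formula with exactly k satisfying assignments.
from itertools import product

def models(f, n):
    return sum(f(*bits) for bits in product([False, True], repeat=n))

def oracle(f, g, n):                         # comparison oracle (brute force here)
    return models(f, n) >= models(g, n)

def b(k, n):                                 # true on exactly the k smallest bit-vectors
    return lambda *bits: sum(v << i for i, v in enumerate(reversed(bits))) < k

def count_by_comparison(f, n):
    lo, hi = 0, 2 ** n                       # invariant: lo <= #f <= hi
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if oracle(f, b(mid, n), n):          # #f >= mid ?
            lo = mid
        else:
            hi = mid - 1
    return lo

f = lambda x, y, z: (x or y) and not z       # 3 models
print(count_by_comparison(f, 3))             # 3
```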
indeed, we have shown that even just comparing two programs on which has the larger flow is difficult ( i.e. , , , , and ) . in this section ,we show that universally quantifying the shannon - entropy based comparison problem ] , or the guessing - entropy based problem ] for all via self composition ( and likewise for ] ) .we actually show in section [ sec : qbysc ] that we can even use the security - type - based approach suggested by terauchi and aiken to minimize code duplication during self composition ( i.e. , do _ interleaved _ self composition ) . we remind that except for the conp - completeness result ( theorem [ thm : rcomp ] ) , the results in this section apply to any ( deterministic and terminating ) programs and not just to loop - free boolean programs .[ def : r ] we define to be the relation such that iff for all and , if then .note that essentially says that if an attacker can distinguish a pair of high security inputs by executing , then she could do the same by executing .hence , naturally expresses that is at least as secure as .have appeared in literature ( often in somewhat different representations ) . in particular ,clark et al . have shown a result analogous to the direction of theorem [ thm : reqse ] below .but , s properties have not been fully investigated . ]it may be somewhat surprising that this simple relation is actually equivalent to the rather complex entropy - based quantitative information flow definitions when they are cast as comparison problems and the distributions are universally quantified , as stated in the following theorems .first , we show that coincides exactly with with its distribution universally quantified .[ thm : reqse ] (m_1,m_2)}\}} ] [ thm : reqge ] (m_1,m_2)}\}} ] is at least as large as (m) ] , that is , }\}} ] , (m_2 ) \not\leq { \it se}[u](m_{\it spec}) ] .finally , we have that , and so is at least as secure as according to all of the definitions of quantitative information flow considered in this paper .in fact , it can be also shown that .( however , note that and are not semantically equivalent , i.e. , their outputs are reversed . )this work builds on previous work that proposed information theoretic notions of quantitative information flow .the previous research has mostly focused on information theoretic properties of the definitions and proposed approximate ( i.e. , incomplete and/or unsound ) methods for checking and inferring them . in contrast , this paper investigates the verification theoretic and complexity theoretic hardness of precisely inferring quantitative information flow according to the definitions and also proposes a precise method for checking quantitative information flow .our method checks the quantitative information flow of a program against that of a specification program having the desired level of security via self composition for all distributions according to the entropy - based definitions .it is quite interesting that the relation unifies the different proposals for the definition of quantitative information flow when they are cast as comparison problems and their distributions are universally quantified . 
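read operationally, the relation defined above is itself a brute-force checkable 2-safety condition; the sketch below (our own, following the informal reading that any pair of secrets distinguishable through the first program must also be distinguishable through the second) tests it for the two running example programs.

```python
# (M1, M2) in R  iff  whenever M2 produces equal outputs on two secrets under
# the same low input, so does M1 -- this is the quantifier structure we read
# off the informal description above, so treat it as an assumption.
from itertools import product

def in_R(M1, M2, highs, lows):
    return all(M1(h1, l) == M1(h2, l)
               for h1, h2, l in product(highs, highs, lows)
               if M2(h1, l) == M2(h2, l))

password_check = lambda h, l: h == l        # the running example that checks a guess
copy_secret    = lambda h, l: h             # the running example that copies the secret
H = L = range(4)
print(in_R(password_check, copy_secret, H, L))   # True:  at least as secure as the copy
print(in_R(copy_secret, password_check, H, L))   # False: and not conversely
```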
as remarked in section [ sec : qshan ] , naturally expresses the fact that one program is more secure than the other , and it could be argued that it is the essence of quantitative information flow .researchers have also proposed definitions of quantitative information flow that do not fit the models studied in this paper .these include the definition based on the notion of _ belief _ , and the ones that take the maximum over the low security inputs .refines these notions in the same sense as theorem [ thm : rimpcc ] , but the other direction is not guaranteed to hold . ] despite the staggering complexity made apparent in this paper , recent attempts have been made to ( more ) precisely infer quantitative information flow ( without universally quantifying over the distribution as in our approach ) .these methods are based on the idea of _ counting_. as remarked in section [ sec : complex ] , quantitative information flow is closely related to counting problems , and several attempts have been made to reduce quantitative information flow problems to them .for instance , newsome et al . reduce the inference problem to the # sat problem and apply off - the - shelf # sat solvers . to achieve scalability , they sacrifice both soundness and completeness by only computing information flow from one execution path .backes et al . also propose a counting - based approach that involves self composition .however , unlike our method , they use self composition repeatedly to find a new solution ( i.e. , more than a bounded number of times ) , and so their results do not contradict the negative results of this paper .we have investigated the hardness and possibilities of precisely checking and inferring quantitative information flow according to the various definitions proposed in literature .specifically , we have considered the definitions based on the shannon entropy , the min entropy , the guessing entropy , and channel capacity .we have shown that comparing two programs on which has the larger flow according to these definitions is not a -safety problem for any , and therefore that it is not possible to reduce the problem to a safety problem via self composition .the result is in contrast to non - interference which is a -safety problem .we have also shown a complexity theoretic gap with non - interference by proving the # p - hardness of the comparison problems and conp - completeness of non - interference , when restricted to loop - free boolean programs .we have also shown a positive result that checking if the entropy - based quantitative information flow of one program is larger than that of another for all distributions is a -safety problem , and that it is also conp - complete when restricted to loop - free boolean programs .we have done this by proving a surprising result that universally quantifying the distribution in the comparison problem for the entropy - based definitions is equivalent to a simple -safety relation .motivated by the result , we have proposed a novel approach to precisely checking quantitative information flow that reduces the problem to a safety problem via self composition .our method checks the quantitative information flow of a program for all distributions against that of a specification program having the desired level of security .we would like to thank takeshi tsukada for important insights and useful discussions that motivated this work .we also thank the anonymous reviewers for their useful comments .this work was supported by mext kakenhi 20700019 and 20240001 
, and global coe program `` ceries . ''10 m. backes , b. kpf , and a. rybalchenko . automatic discovery and quantification of information leaks . in _ieee symposium on security and privacy _ , pages 141153 .ieee computer society , 2009 .t. ball and s. k. rajamani .the slam project : debugging system software via static analysis . in _ popl _ , pages 13 , 2002 .g. barthe , p. r. dargenio , and t. rezk .secure information flow by self - composition . in _ csfw _ , pages 100114 .ieee computer society , 2004 .d. beyer , t. a. henzinger , r. jhala , and r. majumdar .the software model checker blast ., 9(5 - 6):505525 , 2007 .d. clark , s. hunt , and p. malacaria .quantified interference for a while language ., 112:149166 , 2005 .d. clark , s. hunt , and p. malacaria .quantitative information flow , relations and polymorphic types ., 15(2):181199 , 2005 .d. clark , s. hunt , and p. malacaria . a static analysis for quantifying information flow in a simple imperative language ., 15(3):321371 , 2007 .m. r. clarkson , a. c. myers , and f. b. schneider .belief in information flow . in _pages 3145 .ieee computer society , 2005 .m. r. clarkson and f. b. schneider. hyperproperties . in _ csf _, pages 5165 .ieee computer society , 2008 .e. s. cohen .information transmission in computational systems . in _sosp _ , pages 133139 , 1977 . .darvas , r. hhnle , and d. sands . a theorem proving approach to analysis of secure information flow . in d.hutter and m. ullmann , editors , _ spc _ , volume 3450 of _ lecture notes in computer science _ , pages 193209 .springer , 2005 .d. e. r. denning . .addison - wesley longman publishing co. , inc . ,boston , ma , usa , 1982 .c. flanagan and j. b. saxe . avoiding exponential explosion : generating compact verification conditions . in _pages 193205 , 2001 .j. a. goguen and j. meseguer .security policies and security models . in _ ieee symposium on security and privacy _ , pages 1120 , 1982 .t. a. henzinger , r. jhala , r. majumdar , and g. sutre .lazy abstraction . in _ popl _ , pages 5870 , 2002 .b. kpf and d. basin . an information - theoretic model for adaptive side - channel attacks . in _ccs 07 : proceedings of the 14th acm conference on computer and communications security _ , pages 286296 , new york , ny , usa , 2007 .k. r. m. leino .efficient weakest preconditions . , 93(6):281288 , 2005 .p. li and s. zdancewic .downgrading policies and relaxed noninterference . in j.palsberg and m. abadi , editors , _ popl _ , pages 158170 .acm , 2005 .p. malacaria .assessing security threats of looping constructs . in m.hofmann and m. felleisen , editors , _ popl _ , pages 225235 .acm , 2007 .p. malacaria and h. chen .lagrange multipliers and maximum information leakage in different observational models . in _plas 08 : proceedings of the third acm sigplan workshop on programming languages and analysis for security _ , pages 135146 , new york , ny , usa , 2008 .j. l. massey .guessing and entropy . in _isit 94 : proceedings of the 1994 ieee international symposium on information theory _, page 204 , 1994 .s. mccamant and m. d. ernst .quantitative information flow as network flow capacity . in r.gupta and s. p. amarasinghe , editors , _ pldi _ , pages 193205 .acm , 2008 .j. mclean . a general theory of composition for trace sets closed under selective interleaving functions . in _sp 94 : proceedings of the 1994 ieee symposium on security and privacy _, page 79 , washington , dc , usa , 1994 .ieee computer society .k. l. mcmillan .lazy abstraction with interpolants . in t. 
ball and r. b. jones , editors , _ cav _ , volume 4144 of _ lecture notes in computer science _ , pages 123136 .springer , 2006 .d. a. naumann . from coupling relations to mated invariants for checking information flow . in _computer security - esorics 2006 , 11th european symposium on research in computer security , proceedings _, pages 279296 , hamburg , germany , sept . 2006 .j. newsome , s. mccamant , and d. song .measuring channel capacity to distinguish undue influence . in _ proceedings of the fourth acm sigplan workshop on programming languages and analysis for security ( plas ) _ , dublin , ireland ,june 2009 .a. sabelfeld and a. c. myers . a model for delimited information release .in k. futatsugi , f. mizoguchi , and n. yonezaki , editors , _ isss _ , volume 3233 of _ lecture notes in computer science _ , pages 174191 .springer , 2003 .c. shannon . a mathematical theory of communication ., 27:379423 , 623656 , 1948 .g. smith .on the foundations of quantitative information flow . in _fossacs 09 : proceedings of the 12th international conference on foundations of software science and computational structures _ ,pages 288302 , berlin , heidelberg , 2009 .springer - verlag .t. terauchi and a. aiken .secure information flow as a safety problem . in c.hankin and i. siveroni , editors , _ sas _ , volume 3672 of _ lecture notes in computer science _ , pages 352367 .springer , 2005 .s. toda .is as hard as the polynomial - time hierarchy ., 20(5):865877 , 1991 .h. unno , n. kobayashi , and a. yonezawa .combining type - based analysis and model checking for finding counterexamples against non - interference . in v. c. sreedhar and s. zdancewic , editors , _ plas _ , pages 1726 .acm , 2006 .d. volpano , g. smith , and c. irvine . a sound type system for secure flow analysis ., 4(3):167187 , 1996 .we define some abbreviations . [ def : distabrv ] we use this notation whenever the correspondences between random variables and their values are clear . for convenience , we sometimes use large letters , , , etc . to range over boolean variables as well as generic random variables . for simplicity, we often compute the shannon - entropy based quantitative information flow for programs that do not have low security inputs . for such programs ,the equation _ se _ from definition [ def : se ] can be simplified as follows. (m)&=\mathcal{i}[\mu](o;h)\\ & = \mathcal{h}[\mu](o ) \end{array}\ ] ] we note the following property of deterministic programs .[ lem : detse ] for deterministic , (m)=\mathcal{i}[\mu](o;h|l ) = \mathcal{h}[\mu](o|l)\ ] ] the following lemma is used to show that we can generate a boolean formula that has exactly the desired number of solutions in polynomial ( actually , linear ) time . [lem : a2 ] let be an integer such that .then , a boolean formula that has exactly assignments over the variables can be computed in time linear in .we define a procedure that returns the boolean formula .below , , i.e. , is the variable . here , is an empty string .let be a -bit binary representation of .we prove that returns a boolean formula that has exactly k assignments by induction on the number of variables , that is , .* * * + returns , that is , . has no satisfying assignment .* * + returns , that is , . has only one satisfying assignment . * * * + let be a binary representation of . . by induction hypothesis , has satisfying assignments for .it follows that has just satisfying assignments , because has no assignment and has just assignments . ** + let be a binary representation of . 
returns . is a binary representation of . by induction hypothesis, has satisfying assignments for .it follows that has just satisfying assignments , because has just assignments and when , has just assignments .we frequent the following property of logarithmic arithmetic when proving statements concerning the shannon entropy .[ lem : a7 ] let and be numbers such that ] . * ( ) suppose that is non - interferent .then , by lemma [ lem : detse ] , (m)&=&\mathcal{i}[\mu](o;h|l)\\ & = & \mathcal{h}[\mu](o|l)\\ & = & \sum_{o}\sum_{\ell}\mu(o,\ell)\log\frac{\mu(\ell)}{\mu(o,\ell)}\\ & = & \sum_{o}\sum_{\ell}\mu(o,\ell)\log\frac{\mu(\ell)}{\mu(\ell)}\\ & = & 0 \end{array}\ ] ] the last step follows from the fact that non - interference implies . *( ) suppose that is interferent .then , there must be and such that , , and .pick a probability function such that . then , by lemma [ lem : detse ] , (m)&=&\mathcal{i}[\mu](o;h|l)\\ & = & \mathcal{h}[\mu](o|l)\\ & = & \sum_{o}\sum_{\ell}\mu(o,\ell)\log\frac{\mu(\ell)}{\mu(o,\ell)}\\ & = & \mu(o_0,\ell')\log\frac{\mu(\ell')}{\mu(o_0,\ell')}\\ & & \qquad+\mu(o_1,\ell')\log\frac{\mu(\ell')}{\mu(o_1,\ell')}\\ & = & \frac{1}{2}\log 2 + \frac{1}{2}\log 2\\ & = & 1 \end{array}\ ] ] therefore , there exists such that (m ) \neq 0 ] is a -safety property .let and be programs having same input domain such that ] and \hspace*{-1.2pt}]} ] . the number of outputs of the program is greater than or equal to the number of the outputs of the program .hence , by lemma [ lem : melog ] , we have ] and \hspace*{-1.2pt}]} ] is a -safety property .let and be programs having the same input domain such that ] and \hspace*{-1.2pt}]} ] .we compare the guessing - entropy - based quantitative information flow of the two programs .(\bar{m'})-{\it ge}[u](\bar{m})\\ \quad=\frac{|\mathbb{h}|}{2}-\frac{1}{2|\mathbb{h}|}\sum_{o'\in m'(\mathbb{h})}|m'^{-1}(o')|^2\\ \qquad-\frac{|\mathbb{h}|}{2}+\frac{1}{2|\mathbb{h}|}\sum_{o\in m(\mathbb{h})}|m^{-1}(o)|^2\\ \quad=\frac{1}{2|\mathbb{h}|}\sum_{o\in m(\mathbb{h})}|m^{-1}(o)|^2\\ \qquad-\frac{1}{2|\mathbb{h}|}\sum_{o'\in m'(\mathbb{h})}|m'^{-1}(o')|^2\\ \quad=\frac{1}{2|\mathbb{h}|}(\sum_{o_x\in{\{{o_1,\dots , o_i}\}}}|m^{-1}(o_x)|^2\\ \qquad\qquad+|m^{-1}(o)|^2+|m^{-1}(o_r)|^2)\\ \qquad-\frac{1}{2|\mathbb{h}|}(\sum_{o_x'\in{\{{o_1',\dots , o_j'}\}}}|m'^{-1}(o_x')|^2\\ \qquad\qquad+\sum_{o_y'\in{\{{o_{j+1}',\dots , o_{j+i}'}\}}}|m'^{-1}(o_y')|^2\\ \qquad\qquad+|m'^{-1}(o_r')|^2)\\ \end{array}\ ] ] by lemma [ lem : gelem ] , we have trivially , we have as a result , we have (\bar{m'})-{\it ge}[u](\bar{m})\ge 0\ ] ] recall that and have the same counterexamples and , that is , \hspace*{-1.2pt}]} ] .however , we have ] at most times .let where is a formula over having assignments and is a boolean variable such that .note that by lemma [ lem : a2 ] , such can be generated in linear time .then , we invoke the following procedure where is defined in figure [ boolenc ] .}\\ \qquad\qquad{\sf and}\;(t(b(n)),t(\phi\wedge h'))\inc_{\it me}[u])\\ \qquad{\sf if}\;(t(\phi\wedge h'),t(b(n)))\in c_{\it me}[u]\\ \qquad\qquad{\sf then}\;\{\ell = n;n=(\ell+r)/2;\}\\ \qquad\qquad{\sf else}\;\{r = n;n=(\ell+r)/2;\}\\ { \sf return}\;n \end{array}\ ] ] note that when the procedure terminates , we have (t(b(n))={\it me}[u](t(\phi\wedge h')) ] is accessed times , and this proves the lemma .[ lem : gemono ] let and be distinct variables and and be boolean formulas over .let and .then , we have iff (m)\le{\it ge}[u](m') ] at most times .let where is a formula over having 
assignments and is a boolean variable such that .note that by lemma [ lem : a2 ] , such can be generated in linear time .}\\ \qquad\qquad{\sf and}\;(o:=b(n),o:=\phi\wedge h')\inc_{\it ge}[u])\\ \qquad{\sf if}\;(o:=\phi\wedge h',o:=b(n))\in c_{\it ge}[u]\\ \qquad\qquad{\sf then}\;\{\ell = n;n=(\ell+r)/2;\}\\ \qquad\qquad{\sf else}\;\{r = n;n=(\ell+r)/2;\}\\ { \sf return}\;n \end{array}\ ] ] we show that the procedure iterates at most times . to see this , every iteration in the procedure narrows the range between and by one half . because is bounded by , it follows that the procedure iterates at most times .hence , the oracle ] where and . * ( ) + suppose .we have (h|o',l)\le \mathcal{h}_\infty[\mu](h|o , l)\\ \qquad\textrm { iff } \mathcal{v}[\mu](h|o , l)\le \mathcal{v}[\mu](h|o',l ) \end{array}\ ] ] by the definition of min entropy , and + (h|o , l)\\ \quad=\sum_{o\in{\mathbb o},\ell\in{\mathbb l } } \mu(o,\ell)\max_{h\in\mathbb h } \mu(h|o,\ell)\\ \quad = \sum_{o\in{\mathbb o},\ell\in{\mathbb l } } \mu(o,\ell ) \max_{h\in\mathbb h } \frac{\mu(h , o,\ell)}{\mu(o,\ell)}\\ \quad = \sum_{o\in{\mathbb o},\ell\in{\mathbb l } } \max_{h\in\mathbb h } \mu(o,\ell)\frac{\mu(h , o,\ell)}{\mu(o,\ell)}\\ \quad = \sum_{o\in{\mathbb o},\ell\in{\mathbb l}}\max_{h\in\mathbb h } \mu(h , o,\ell)\\ \quad = \sum_{o\in{\mathbb o},\ell\in{\mathbb l}}\max_{h\in { \{{h'\mid o = m(h',\ell)}\ } } } \mu(h,\ell)\\ \end{array}\ ] ] where ] .+ for any and , there exists such that . because , by lemma [ lem : a10 ] , we have + therefore , for some . hence , each summand in also appears in . and, we have the above proposition .* ( ) + we prove the contraposition .. then , there exist such that , , , and .pick a probability distribution such that . then, we have (h|o',l)\\ \quad = \sum_{o'\in{\mathbb o'},\ell\in{\mathbb l}}\max_{h\in{\{{h'\mid o'=m(h',\ell)}\ } } } \mu(h,\ell)\\ \quad= \frac{1}{2 } \end{array}\ ] ] and (h|o , l)\\ \quad=\sum_{o\in{\mathbb o},\ell\in{\mathbb l}}\max_{h\in{\{{h'\mid o = m(h',\ell)}\ } } } \mu(h,\ell)\\ \quad= \frac{1}{2}+\frac{1}{2}\\ \quad= 1 \end{array}\ ] ] therefore , (h|o',l)\not\le \mathcal{h}_\infty[\mu](h|o , l) ] iff (h|o , l)\ge\mathcal{h}_\infty[\mu](h|o',l) ] and ] . by theorem [ thm : reqse ], we have (m)\le{\it se}[\mu](m')\ ] ] now , there exists such that (m)\ ] ] therefore , (m)\le{\it se}[\mu'](m')\ ] ] trivially , (m')\le { \it cc}(m')\ ] ] therefore , we have the conclusion .* + we prove by reducing to unsat , which is conp - complete .we reduce via self composition .let and be boolean programs that we want to know if they are in .first , we make copies of and , with all variables in and replaced by fresh ( primed ) variables .call these copies and .let where ,, , and are the low security outputs of ,, , and , respectively .note that can be obtained in time polynomial in the size of and . here , like in theorem [ thm : nicomp ] , we use the optimized weakest precondition generation technique to generate a formula quadratic in the size of . then , if and only if is valid , that is , if and only if is unsatisfiable . * + we prove by reducing ni to , because ni is conp - complete by theorem [ thm : nicomp ] .we can check the non - interference of by solving where is non - interferent and have the same input domain as by theorem [ thm : rni ] . note that such can be constructed in polynomial time .therefore , we have .
Researchers have proposed formal definitions of quantitative information flow based on information-theoretic notions such as the Shannon entropy, the min entropy, the guessing entropy, and channel capacity. This paper investigates the hardness and possibilities of precisely checking and inferring quantitative information flow according to such definitions. We prove that, even for just comparing two programs on which has the larger flow, none of the definitions is a k-safety property for any k, and therefore is not amenable to the self-composition technique that has been successfully applied to precisely checking non-interference. We also show a complexity-theoretic gap with non-interference by proving that, for loop-free boolean programs whose non-interference is coNP-complete, the comparison problem is #P-hard for all of the definitions. For positive results, we show that universally quantifying the distribution in the comparison problem, that is, comparing two programs according to the entropy-based definitions on which has the larger flow for all distributions, is a 2-safety problem in general and is coNP-complete when restricted to loop-free boolean programs. We prove this by showing that the problem is equivalent to a simple relation naturally expressing the fact that one program is more secure than the other. We prove that this relation also refines the channel-capacity based definition, and that it can be precisely checked via self-composition as well as the "interleaved" self-composition technique.
a majority of computational homogenization algorithms rely on the solution of the unit cell problem , which concerns the determination of local fields in a representative sample of a heterogeneous material under periodic boundary conditions . currently , the most efficient numerical solvers of this problem are based on discretization of integral equations . in the case of particulate composites with smoothbounded inclusions embedded in a matrix phase , the problem can be reduced to internal interfaces and solved with remarkable accuracy and efficiency by the fast multipole method , see ( * ? ? ? * and references therein ) .an alternative method has been proposed by suquet and moulinec to treat problems with general microstructures supplied in the form of digital images .the algorithm is based on the neumann series expansion of the inverse to an operator arising in the associated lippmann - schwinger equation and exploits the fast fourier transform ( fft ) to evaluate the action of the operator efficiently .the major disadvantage of the fft - based method consists in its poor convergence for composites exhibiting large jumps in material coefficients . to overcome this difficulty , eyre and milton proposed in an accelerated scheme derived from a modified integral equation treated by means of the series expansion approach .in addition , michel et al . introduced an equivalent saddle - point formulation solved by the augmented lagrangian method .as clearly demonstrated in a numerical study by moulinec and suquet , both methods converge considerably faster than the original variant ; the number of iterations is proportional to the square root of the phase contrast instead of the linear increase for the basic scheme .however , this comes at the expense of increased computational cost per iteration and the sensitivity of the augmented lagrangian algorithm to the setting of its internal parameters . in this short note, we introduce yet another approach to improve the convergence of the original fft - based scheme based on the trigonometric collocation method and its application to the helmholtz equation as introduced by vainikko .we observe that the discretization results in a system of linear equations with a structured dense matrix , for which a matrix - vector product can be computed efficiently using fft , cf .section [ sec : methodology ] .it is then natural to treat the resulting system by standard iterative solvers , such as the krylov subspace methods , instead of the series expansion technique . in section [ sec : results ], the potential of such approach is demonstrated by means of a numerical study comparing the performance of the original scheme and the conjugate- and biconjugate - gradient methods for two - dimensional scalar electrostatics .in this section , we briefly summarize the essential steps of the trigonometric collocation - based solution to the unit cell problem by adapting the original exposition by vainikko to the setting of electrical conduction in periodic composites . in what follows , , and denote scalar , vector and second - order tensor quantities with greek subscripts used when referring to the corresponding components , e.g. .matrices are denoted by a serif font ( e.g. ) and a multi - index notation is employed , in which with represents and stands for the -th element of the matrix .we consider a composite material represented by a periodic unit cell . 
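As a concrete illustration of the scheme recalled above, the following Python sketch implements the basic fixed-point (Neumann-series) iteration for two-dimensional scalar conduction with isotropic phases. The Fourier-space Green operator is written for a constant scalar reference conductivity, the reference value is taken as the midpoint of the phase conductivities, and the grid size, inclusion shape and contrast are illustrative choices; this is a minimal sketch of the standard scheme, not the code used for the experiments reported below.

import numpy as np

def basic_scheme(lam, E, lam0, tol=1e-8, max_iter=1000):
    # lam: (N, N) array of scalar conductivities on the periodic grid
    # E:   prescribed macroscopic field, e.g. (1.0, 0.0)
    # lam0: conductivity of the homogeneous reference medium
    N = lam.shape[0]
    xi1, xi2 = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
    xi_sq = xi1 ** 2 + xi2 ** 2
    xi_sq[0, 0] = 1.0                      # dummy value; the zero frequency is set below

    def gamma0(tau):
        # action of the reference Green operator: (xi xi^T / (lam0 |xi|^2)) tau_hat
        t1, t2 = np.fft.fft2(tau[0]), np.fft.fft2(tau[1])
        s = (xi1 * t1 + xi2 * t2) / (lam0 * xi_sq)
        g1, g2 = xi1 * s, xi2 * s
        g1[0, 0] = g2[0, 0] = 0.0          # the correction has zero mean
        return np.stack([np.fft.ifft2(g1).real, np.fft.ifft2(g2).real])

    e_macro = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])])
    e = e_macro.copy()
    for _ in range(max_iter):
        e_new = e_macro - gamma0((lam - lam0) * e)   # e_{k+1} = E - Gamma^0 (delta L e_k)
        if np.max(np.abs(e_new - e)) < tol:
            return e_new
        e = e_new
    return e

# illustrative data: a circular inclusion of contrast 10 embedded in a unit matrix
N = 64
x, y = np.meshgrid((np.arange(N) + 0.5) / N, (np.arange(N) + 0.5) / N, indexing="ij")
lam = np.where((x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.25 ** 2, 10.0, 1.0)
e = basic_scheme(lam, (1.0, 0.0), lam0=0.5 * (1.0 + 10.0))
lam_eff = np.mean(lam * e[0])    # effective conductivity in the direction of the macroscopic field

The Krylov variants discussed in this note keep the same FFT-based operator and merely replace the fixed-point update by conjugate (or biconjugate) gradient iterations; a sketch of that variant is given at the end of the note.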
in the context of linear electrostatics ,the associated unit cell problem reads as where is a -periodic vectorial electric field , denotes the corresponding vector of electric current and is a second - order positive - definite tensor of electric conductivity . in addition ,the field is subject to a constraint where denotes a prescribed macroscopic electric field and represents the -dimensional measure of .next , we introduce a homogeneous reference medium with constant conductivity , leading to a decomposition of the electric current field in the form the original problem is then equivalent to the periodic lippmann - schwinger integral equation , formally written as where the operator is derived from the green s function of the problem with and .making use of the convolution theorem , eq .attains a local form in the fourier space : where denotes the fourier coefficient of for the -th frequency given by " is the imaginary unit and here , we refer to for additional details . numerical solution of the lippmann - schwinger equation is based on a discretization of a unit cell into a regular periodic grid with nodal points and grid spacings .the searched field in is approximated by a trigonometric polynomial in the form ( cf . ) where , designates the fourier coefficients defined in and we recall , e.g. from , that the -th component of the trigonometric polynomial expansion admits two equivalent finite - dimensional representations . the first one is based on a matrix of the fourier coefficients of the -th component and equation with .second , the data can be entirely determined by interpolation of nodal values where is a matrix storing electric field values at grid points , is the corresponding value at the -th node with coordinates and basis functions satisfy the dirac delta property with . both representations can be directly related to each other by where the vandermonde matrices and implement the forward and inverse fourier transform , respectively , e.g. ( * ? ? ? * section 4.6 ) .the trigonometric collocation method is based on the projection of the lippmann - schwinger equation to the space of the trigonometric polynomials of the form , cf . . in view of eq ., this is equivalent to the collocation at grid points , with the action of operator evaluated from the fourier space expression converted to the nodal representation by .the resulting system of collocation equations reads where and store the corresponding solution and of the macroscopic field , respectively .furthermore , is the unit matrix and the non - symmetric matrix can be expressed , for the two - dimensional setting , in the partitioned format as } { \left [ \begin{array}{cc } { \widehat{{{\mathsf{\gamma}}}}}^0_{11 } & { \widehat{{{\mathsf{\gamma}}}}}^0_{12 } \\ { \widehat{{{\mathsf{\gamma}}}}}^0_{21 } & { \widehat{{{\mathsf{\gamma}}}}}^0_{22 } \end{array}\right ] } { \left [ \begin{array}{cc } { { \mathsf{f } } } & { \mathsf{0 } } \\ { \mathsf{0 } } & { { \mathsf{f}}}\end{array}\right ] } { \left [ \begin{array}{cc } { { \mathsf{\delta{\mathsf{l}}}}}_{11 } & { { \mathsf{\delta{\mathsf{l}}}}}_{12 } \\ { { \mathsf{\delta{\mathsf{l}}}}}_{21 } & { { \mathsf{\delta{\mathsf{l}}}}}_{22 } \end{array}\right ] } , \ ] ] with an obvious generalization to an arbitrary dimension . here , and are diagonal matrices storing the corresponding grid values , for which it holds it follows from eq . 
that the cost of the multiplication by or by is driven by the forward and inverse fourier transforms , which can be performed in operations by fft techniques .this makes the resulting system ideally suited for iterative solvers .in particular , the original fast fourier transform - based homogenization ( ffth ) scheme formulated by moulinec and suquet in is based on the neumann expansion of the matrix inverse , so as to yield the -th iterate in the form convergence of the series was comprehensively studied in , where it was shown that the optimal rate of convergence is achieved for with and denoting the minimum and maximum eigenvalues of on and being the identity tensor . here , we propose to solve the non - symmetric system by well - established krylov subspace methods , in particular , exploiting the classical conjugate gradient ( ) method and the biconjugate gradient ( ) algorithm . even though that algorithm is generally applicable to symmetric and positive - definite systemsonly , its convergence in the one - dimensional setting has been proven by vondejc ( * ? ? ?* section 6.2 ) .a successful application of method to a generalized eshelby inhomogeneity problem has also been recently reported by novk and kanaun .to assess the performance of the conjugate gradient algorithms , we consider a model problem of the transverse electric conduction in a square array of identical circular particles with volume fraction .a uniform macroscopic field is imposed on the corresponding single - particle unit cell , discretized by nodes and the phases are considered to be isotropic with the conductivities set to for the matrix phase and to for the particle .the conductivity of the homogeneous reference medium is parameterized as where corresponds to the optimal convergence of algorithm .all conjugate gradient - related results have been obtained using the implementations according to and referred to as algorithm 6.18 ( method ) and algorithm 7.3 ( scheme ) .two termination criteria are considered .the first one is defined for the -th iteration as and provides the test of the equilibrium condition in the fourier space .an alternative expression , related to the standard residual norm for iterative solvers , has been proposed by vinogradov and milton in and admits the form with the additional term ensuring the proportionality to at convergence . from the numerical point of view , the latter criterion is more efficient than the equilibrium variant , which requires additional operations per iteration . from the theoretical point of view , its usage is justified only when supported by a convergence result for the iterative algorithm . in the opposite case , the equilibrium norm appears to be more appropriate , in order to avoid spurious non - physical solutions .since no results for the optimal choice of the reference medium are known for -based solvers , we first estimate their sensitivity to this aspect numerically .the results appear in fig .[ fig : example1a ] , plotting the relative number of iterations for and solvers against the conductivity of the reference medium parameterized by , recall eq . . as expected , both and solvers achieve a significant improvement over method in terms of the number of iterations , ranging from for a mildly - contrasted composite down to for .moreover , contrary to all other available methods , the number of iterations is almost independent of the choice of the reference medium .we also observe , in agreement with results by ( * ? ? 
?* section 6.2 ) for the one - dimensional setting , that and algorithms generate identical sequences of iterates ; the minor differences visible for or can be therefore attributed to accumulation of round - off errors .these conclusions hold for both equilibrium- and residual - based norms , which appear to be roughly proportional for the considered range of the phase contrasts , cf .[ fig : example1b ] .therefore , the residual criterion will mostly be used in what follows . of and plotted against the conductivity parameter for -based termination condition with tolerance . ] in fig .[ fig : example1c ] , we supplement the comparison by considering the total cpu time required to achieve a convergence .the data indicate that the cost of one iteration is governed by the matrix - vector multiplication , recall eq .: the overhead of scheme is about with respect to method , while the application of algorithm , which involves and products per iteration , is about twice as demanding . as a result , cg algorithm significantly reduces the overall computational time in the whole range of contrasts , whereas a similar effect has been reported for the candidate schemes only for , cf . . as confirmed by all previous works ,the phase contrast is the critical parameter influencing the convergence of fft - based iterative solvers . in fig .[ fig : example2 ] , we compare the scaling of the total number of iterations with respect to phase contrast for cg and methods , respectively . the results clearly show that the number of iterations grows as instead of the linear increase for method .this follows from error bounds the first estimate was proven in , whereas the second expression is a direct consequence of the condition number of matrix being proportional to and a well - known result for the conjugate gradient method , e.g. ( * ? ? ?* section 6.11.3 ) .the cg - based method , however , failed to converge for the infinite contrast limit .such behavior is equivalent to the eyre - milton scheme .it is , however , inferior to the augmented lagrangian algorithm , for which the convergence rate improves with increasing and the method converges even as .nonetheless , such results are obtained for optimal , but not always straightforward , choice of the parameters .plotted against phase contrast ; the reference medium corresponds to for and tolerance is related to norm . ]the final illustration of the -based algorithm is provided by fig .[ fig : example3 ] , displaying a detailed convergence behavior for both low- and high - contrast cases . the results in fig .[ fig : example3a ] correspond well with estimates for both residual and equilibrium - based norms .influence of a higher phase contrast is visible from fig .[ fig : example3b ] , plotted in the full logarithmic scale .for algorithm , two regimes can be clearly distinguished . in the first few iterations , the residual error rapidly decreases , butthe iterates tend to deviate from equilibrium . then , both residuals are simultaneously reduced . 
for scheme, the increase of the equilibrium residual appears only in the first iteration and then the method rapidly converges to the correct solution .however , its convergence curve is irregular andthe algorithm repeatedly stagnates in two consecutive iterations .further analysis of this phenomenon remains a subject of future work .in this short note , we have presented a conjugate gradient - based acceleration of the fft - based homogenization solver originally proposed by moulinec and suquet and illustrated its performance on a problem of electric conduction in a periodic two - phase composite with isotropic phases .on the basis of obtained results , we conjecture that : * the non - symmetric system of linear equations , arising from discretization by the trigonometric collocation method , can be solved using the standard conjugate gradient algorithm , * the convergence rate of the method is proportional to the square root of the phase contrast , * the methods fails to converge in the infinite contrast limit , * contrary to available improvements of the original fft - solver , the cost of one iteration remains comparable to the basic scheme and the method is insensitive to the choice of auxiliary reference medium .the presented computational experiments provide the first step towards further improvements of the method , including a rigorous analysis of its convergence properties , acceleration by multi - grid solvers and preconditioning and the extension to non - linear problems .the authors thank milan jirsek ( czech technical university in prague ) and christopher quince ( university of glasgow ) for helpful comments on the manuscript .this research was supported by the czech science foundation , through projects no .gar 103/09/1748 , no .gar 103/09/p490 and no .gar 201/09/1544 , and by the grant agency of the czech technical university in prague through projectsgs ohk1 - 064/10 .h. moulinec , p. suquet , a fast numerical method for computing the linear and nonlinear mechanical properties of composites , comptes rendus de lacadmie des sciences .srie ii , mcanique , physique , chimie , astronomie 318 ( 11 ) ( 1994 ) 14171423 .g. vainikko , fast solvers of the lippmann - schwinger equation , in : r. p. gilbert , j. kajiwara , y. s. xu ( eds . ) , direct and inverse problems of mathematical physics , vol . 5 of international society for analysis , applications and computation , kluwer academic publishers , dordrecht , the netherlands , 2000 , pp . 423440 .j. c. michel , h. moulinec , p. suquet , a computational method based on augmented lagrangians and fast fourier transforms for composites with high contrast , cmes - computer modeling in engineering & sciences 1 ( 2 ) ( 2000 ) 7988 .r. fletcher , conjugate gradient methods for indefinite systems , in : g. watson ( ed . ) , numerical analysis , proceedings of the dundee conference on numerical analysis , 1975 , vol .506 of lecture notes in mathematics , springer - verlag , new york , 1976 , pp .j. vondejc , analysis of heterogeneous materials using efficient meshless algorithms : one - dimensional study , master s thesis , czech technical university in prague ( 2009 ) .j. novk , calculation of elastic stresses and strains inside a medium with multiple isolated inclusions , in : m. papadrakakis , b. topping ( eds . ) , proceedings of the sixth international conference on engineering computational technology , stirlingshire , uk , 2008 , p. 16pp , paper 127 .doi:10.4203/ccp.89.127 h. moulinec , p. 
suquet , a numerical method for computing the overall response of nonlinear composites with complex microstructure , computer methods in applied mechanics and engineering 157 ( 1 - 2 ) ( 1998 ) 69 - 94 .
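The conjugate gradient treatment advocated in this note can be sketched by wrapping the same FFT-based operator into a linear operator and handing it to an off-the-shelf Krylov solver. The sketch below repeats the illustrative grid, contrast and reference medium of the earlier fixed-point sketch and uses scipy's cg routine as a stand-in for the conjugate gradient algorithm (scipy's bicg would play the role of the biconjugate gradient algorithm); since the collocation system is non-symmetric, applying plain cg mirrors the empirical behaviour reported above rather than a general guarantee.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# grid, conductivity and reference medium (illustrative values, as before)
N = 64
x, y = np.meshgrid((np.arange(N) + 0.5) / N, (np.arange(N) + 0.5) / N, indexing="ij")
lam = np.where((x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.25 ** 2, 10.0, 1.0)
lam0 = 0.5 * (1.0 + 10.0)
E = (1.0, 0.0)

xi1, xi2 = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
xi_sq = xi1 ** 2 + xi2 ** 2
xi_sq[0, 0] = 1.0

def gamma0_deltaL(e):
    # apply Gamma^0 (delta L e) with two forward and two inverse FFTs
    tau = (lam - lam0) * e
    t1, t2 = np.fft.fft2(tau[0]), np.fft.fft2(tau[1])
    s = (xi1 * t1 + xi2 * t2) / (lam0 * xi_sq)
    g1, g2 = xi1 * s, xi2 * s
    g1[0, 0] = g2[0, 0] = 0.0
    return np.stack([np.fft.ifft2(g1).real, np.fft.ifft2(g2).real])

def matvec(v):
    # action of the full (non-symmetric) collocation matrix: e + Gamma^0 delta L e
    e = v.reshape(2, N, N)
    return (e + gamma0_deltaL(e)).ravel()

A = LinearOperator((2 * N * N, 2 * N * N), matvec=matvec, dtype=float)
b = np.stack([np.full((N, N), E[0]), np.full((N, N), E[1])]).ravel()
e_flat, info = cg(A, b, maxiter=500)
e = e_flat.reshape(2, N, N)
lam_eff = np.mean(lam * e[0])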
In this short note, we present a new technique to accelerate the convergence of the FFT-based solver for the numerical homogenization of complex periodic media proposed by Moulinec and Suquet. The approach proceeds from a discretization of the governing integral equation by the trigonometric collocation method due to Vainikko, which yields a linear system that can be solved efficiently by conjugate gradient methods. Computational experiments confirm the robustness of the algorithm with respect to its internal parameters and demonstrate a significant increase in the convergence rate for problems with high-contrast coefficients, at a low overhead per iteration. Keywords: numerical homogenization; FFT-based solvers; trigonometric collocation method; conjugate gradient solvers.
over the last decades the -calculus has been connecting mathematics and physics in applications that span from quantum theory and statistical mechanics , to number theory and combinatorics ( see and references therein ) .its history dates back to the beginnings of the last century when , based on pioneering works of euler and heine , the english reverend frank hilton jackson developed the -calculus in a systematic way .his work gave rise to generalizations of series , functions and special numbers within the context of the -calculus .more important , he reintroduced the concepts of the -derivative ( also known as jackson s derivative ) and introduced the -integral .the -derivative of a function of one variable is defined as where is a real number different from and is different from . in the limit of ( or ) , the -derivative reduces to the classical derivative .let , for example . in this case , the classical derivative of is and the -derivative is ^{n-1} ] is the -analogue of given by = \frac{q^n -1}{q-1}.\ ] ] as , ] , as used in and .extensive comparisons between the gas g3-pcx ( results obtained from for ellipsoidal , schwefel , rosenbrock and rastrigin functions ; and from for ackley and rotated rastrigin ) , spc - vsbx and spc - pnx ( results obtained from ) and the -gradient method are presented in tables [ tab : unimodal ] and [ tab : multimodal ] . as in and ,the `` best '' , `` median '' and `` worst '' columns refer to the number of function evaluations required to reach the accuracy .when this condition is not achieved , the best value found so far for the test function after evaluations is given in column `` '' .the column `` success '' refers to how many runs reached the target accuracy , for unimodal functions , or ended up within the global minimum basin , for multimodal ones .the best performances are highlighted in bold in each table .the corresponding values of the best parameters , and used in each test function are given in table [ tab : parameters ] .llll functions & & & + ellipsoidal & & & + schwefel & & & + rosenbrock & & & + ackley & & & + rastrigin & & & + rotated rastrigin & & & + lllllll function & method & best & median & worst & & success + & * g3-pcx * & & & & & + & spc - vsbx & & & & & + & spc - pnx & & & & & + & * -gradient * & & & & & + & * g3-pcx * & & & & & + & spc - vsbx & & & & & + & spc - pnx & & & & & + & -gradient & & & & & + & * g3-pcx * & & & & & + & spc - vsbx & & - & - & & + & spc - pnx & & - & - & & + & -gradient & & - & - & & + lllllll function & method & best & median & worst & & success + & g3-pcx & & - & - & & + & spc - vsbx & & & & & 10/10 + & spc - pnx & & & & & 10/10 + & * -gradient * & & & & & + & g3-pcx & & - & - & & + & spc - vsbx & & & & & 6/10 + & spc - pnx & & - & - & & 0 + & * -gradient * & & & & & + & g3-pcx & & - & - & & + rotated & spc - vsbx & & - & - & & + rastrigin & spc - pnx & & - & - & & + & * -gradient * & & & & & + in table [ tab : unimodal ] , for the ellipsoidal function , the -gradient method achieved the required accuracy for all 50 runs , with an overall performance similar to the one displayed by the g3-pcx , the best algorithm among the gas .as for the schwefel s function , the -gradient method again attained the required accuracy for all runs but was outperformed by the g3-pcx in terms of the number of function evaluations .finally , for the rosenbrock s function , the -gradient was beaten by the g3-pcx ( the only to achieve the required accuracy ) but performed better then the two other gas .the overall 
evaluation of the -gradient method performance in these numerical experiments with unimodal ( or quasi - unimodal ) test functions indicates that it reaches the required accuracy ( or the minimum global basin ) in of the runs , but it is not faster than the g3-pcx .this picture improves a lot when it comes to tackle the multimodal ackley s and rastringin s functions . in table[ tab : multimodal ] , due to limited computing precision the required accuracy for the ackley s function was set to for the gas and in our simulations with double precision is equal to and not zero . ] .the -gradient method was here clearly better than the gas , reaching the required accuracy in more runs or in less functions evaluations .for the rastrigin s function , the g3-pcx and the spc - pnx were unable to attain the global minimum basin .the other two algorithms reached the required accuracy , but the -gradient method was the only to do it in of the runs ( 48 over 50 ) .finally , in the case of the rotated rastrigin s function , the -gradient was the only algorithm to reach the minimum , attaining the required accuracy in out of independent runs .summarizing the results with multimodal functions , we may say that the -gradient method outperformed the gas in all the three test cases considered , reaching the minimum with less function evaluations or in more independent runs .the main idea behind the -gradient method is the use of the negative of the -gradient of the objective function a generalization of the classical gradient based on the jackson s derivative as the search direction .the use of jackson s derivative provides an effective mechanism for escaping from local minima .the method has strategies for generating the parameter and the step length that makes the search process gradually shifts from global in the beginning to almost local search in the end . for testing this new approach , we considered six commonly used 20-variable test functions . these functions display features of real - world optimization problems ( multimodality , for example ) and are notoriously difficult for optimization algorithms to handle .we compared the -gradient method with gas developed by deb et al . , and ballester and carter with promising results .overall , the -gradient method clearly beat the competition in the hardest test cases , those dealing with the multimodal functions .it comes without suprise the ( relatively ) poor results of the -gradient method with the rosenbrock s function , a unimodal test function specially difficult to be solved by the steepest descent method .this result highlights the need for the development of a -generalization of the well - known conjugate - gradient method , a research line currently being explored .+ ballester , p.j . ,carter , j.n . : an effective real - parameter genetic algorithm with parent centric normal crossover for multimodal optimisation . in : proceedings of the genetic and evolutionary computation conference ,. 901913 .seattle , wa , usa ( 2004 ) soterroni , a. c. , galski , r. l. , ramos , f. m. : the -gradient vector for unconstrained continuous optimization problems . in : operationsresearch proceddings 2010 , pp . 365370 .springer - verlag berlin heidelberg ( 2011 ) locatelli , m. : simulated annealing algorithms for continuous global optimization . in : pardalos ,j.m . and romeijn , h. e. ( eds . ) handbook of global optimization ii , pp .kluwer academic publishers , dordrecht ( 2002 )
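A minimal Python sketch of the q-gradient descent idea described above is given below. The componentwise Jackson derivative, the Gaussian generation of the parameter q around 1 with a shrinking spread, the geometric reduction of the step length, and the normalization of the search direction are illustrative stand-ins for the strategies referred to in the text, not the authors' exact schedules or parameter values.

import numpy as np

def q_gradient(f, x, q):
    # q-gradient built from Jackson's derivative in each coordinate:
    # D_q f = (f(..., q_i x_i, ...) - f(x)) / ((q_i - 1) x_i);
    # falls back to a small classical difference where x_i or (q_i - 1) is near zero
    g = np.empty_like(x)
    fx = f(x)
    for i in range(x.size):
        if abs(x[i]) > 1e-12 and abs(q[i] - 1.0) > 1e-12:
            xq = x.copy()
            xq[i] = q[i] * x[i]
            g[i] = (f(xq) - fx) / ((q[i] - 1.0) * x[i])
        else:
            xq = x.copy()
            xq[i] = x[i] + 1e-8
            g[i] = (f(xq) - fx) / 1e-8
    return g

def q_gradient_descent(f, x0, n_iter=2000, sigma0=1.0, alpha0=1.0, beta=0.995, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    best, f_best = x.copy(), f(x)
    sigma, alpha = sigma0, alpha0
    for _ in range(n_iter):
        q = rng.normal(1.0, sigma, size=x.size)   # broad spread of q: global search
        d = -q_gradient(f, x, q)
        norm = np.linalg.norm(d)
        if norm > 0.0:
            x = x + alpha * d / norm
        fx = f(x)
        if fx < f_best:
            best, f_best = x.copy(), fx
        sigma *= beta   # q tends to 1, so the q-gradient tends to the classical gradient
        alpha *= beta   # shrinking steps: almost local search in the end
    return best, f_best

# illustrative run on the 2-variable rastrigin function
rastrigin = lambda z: 10.0 * z.size + np.sum(z ** 2 - 10.0 * np.cos(2.0 * np.pi * z))
x_min, f_min = q_gradient_descent(rastrigin, x0=[3.0, -2.0])

With a broad spread of q the search direction is only loosely tied to the local gradient, which is what allows escapes from local minima; as q tends to 1 the q-gradient tends to the classical gradient and the method behaves like steepest descent.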
The q-gradient is an extension of the classical gradient vector based on the concept of Jackson's derivative. Here we introduce a preliminary version of the q-gradient method for unconstrained global optimization. The main idea behind our approach is the use of the negative of the q-gradient of the objective function as the search direction. In this sense, the proposed method is a generalization of the well-known steepest descent method. The use of Jackson's derivative has proved to be an effective mechanism for escaping from local minima. The q-gradient method is complemented with strategies to generate the parameter q and to compute the step length in such a way that the search process gradually shifts from global in the beginning to almost local search in the end. For testing this new approach, we considered six commonly used test functions and compared our results with three genetic algorithms (GAs) considered effective in optimizing multidimensional unimodal and multimodal functions. For the multimodal test functions, the q-gradient method outperformed the GAs, reaching the minimum with better accuracy and with fewer function evaluations.
invariant manifolds provide geometric structures for understanding dynamical behavior of nonlinear systems under uncertainty .some systems evolve on fast and slow time scales , and may be modeled by coupled singularly perturbed stochastic ordinary differential equations ( sdes ) .a slow - fast stochastic system may have a special invariant manifold called a random slow manifold that capture the slow dynamics .we consider a stochastic slow - fast system where and are matrices , is a small positive parameter measuring slow and fast scale separation , and are nonlinear lipschitz continuous functions with lipschitz constant and respectively , is a noise intensity constant , and is a two - sided -valued wiener process ( i.e. , brownian motion ) on a probability space . under a gap condition and for small , there exists a random slow manifold , , as in , for slow - fast stochastic system - .when the nonlinearities are only locally lipschitz continuous but the system has a random absorbing set ( e.g. , in mean - square norm ) , we conduct a cut - off of the original system .the new system will have a random slow manifold which captures the original system s slow dynamics .the random slow manifold is the graph of a random nonlinear mapping , with determined by a lyapunov - perron integral equation , with and .the random slow manifold exponentially attracts other solution orbits .we will find an analytically approximated random slow manifold for sufficiently small , in terms of an asymptotic expansion in , as in .this slow manifold may also be numerically computed as in . by restricting to the slow manifold ,we obtain a lower dimensional reduced system of the original slow - fast system - , for sufficiently small where and are defined in the next section .if the original slow - fast system - contains unknown system parameters , but only the slow component is observable , we conduct parameter estimation using the slow system . since the slow system is lower dimensional than the original system , this method offers an advantage in computational cost , in addition to the benefit of using only observations on slow variables .this paper is arranged as follows . in the next section ,we obtain an approximated random slow manifold and thus the random slow system .then in section 3 , we present a method for parameter estimation on the slow manifold .finally , a simple example is presented in section 4 to illustrate our method .by a random transformation we convert the sde system - to the following system with random coefficients where is the stationary solution of linear system . here is the wiener shift implicitly defined by .note that . define a mapping ( between random samples ) implicitly by .then is also a wiener process with the same distribution as .moreover , and are identically distributed with and , respectively . by a time change andusing the fact that and are identically distributed , the system - is reformulated as ,\\ \label{fast - equation - random000fast } & & y'=by+g ( x , y + \sigma \eta(\theta_\tau\psi_\varepsilon\omega)),\end{aligned}\ ] ] where .we make the following two hypotheses . : there are positive constants , and , such that for every and , the following exponential estimates hold : : .+ then there exists a random slow manifold for the random system - , with being expressed as follows , we can get a small approximation for .start with the expansion and the integral expression \,dr ] .a stochastic nelder - mead method is used to minimize the objective functions . 
in the next section ,we demonstrate this method with an example .in this section , we demonstrate our parameter estimation method based on random slow manifolds by a simple example .consider a slow - fast stochastic system where is a real unknown positive parameter , is a small positive scale separation constant , is a constant noise intensity , and is a scalar wiener process . in this system ,the nonlinear terms are not global lipschitz .but if additionally it has an absorbing set , we can cut - off the nonlinearities without affecting the long time , slow dynamics ( almost surely ) . indeed , for arbitrary constants and , we have \\{}&=&\big(-\frac{2m^2}{{\varepsilon}}y^2+\frac{m^2}{300{\varepsilon}}x^2y+ \frac{2mk}{{\varepsilon}}y-\frac{mk}{300{\varepsilon}}x^2+\frac{m^2\sigma^2}{{\varepsilon}}\big)dt+2m(my - k)\frac{\sigma}{\sqrt{{\varepsilon}}}\,dw_t.\end{aligned}\ ] ] therefore , taking , , then we see that and thus by the gronwall inequality , we conclude that this means , for fixed and , the dynamics of the system - will eventually stay in an ellipse ( almost surely ) , i.e. there is a random absorbing set .we can then cut - off the nonlinearities outside this absorbing set to obtain a modified system which has , almost surely , the same long time , slow dynamics as the original system . in the following calculations ,we actually have omitted this cut - off procedure for simplicity . by the random transformation , sdes system - are converted into the following system therefore , there exists an satisfying whose graph is a random slow manifold for the random system - .in fact , has an approximation ( with error ) , so the approximated slow system is .we will illustrate that the parameter estimator for based on this low dimensional , reduced system is a good approximation of the parameter estimator based on the original system - .the random slow manifold is the graph of , where is as in .it is a curve ( depending on random samples ) .the random orbits of the system - approaches to this curve exponentially fast . figures [ figure sdes - sms001e001][figure sdes - sms005e001 ] show some orbits of the slow - fast system - and the approximate random slow manifold , where is as in , with different and values .( red curve ) , where is as in , with and : ( left ) and ( right).,title="fig:",height=257 ] ( red curve ) , where is as in , with and : ( left ) and ( right).,title="fig:",height=257 ] ( red curve ) , where is as in , with and : ( left ) and ( right).,title="fig:",height=257 ] ( red curve ) , where is as in , with and : ( left ) and ( right).,title="fig:",height=257 ] , where is as in with : ( left ) and ( right).,title="fig:",height=257 ] , where is as in with : ( left ) and ( right).,title="fig:",height=257 ] the deterministic nelder - mead method ( nm ) is a geometric search method to find a minimizer of an objective function . 
starting from an initial guess point , it generates a new point ( reflection point , expand point , inside / outside contraction point or shrink point ) by comparing function values , and thus get a better and better estimator until the smallest objective function value in this iteration reaches the termination value ( prescribed error tolerance ) .the algorithm of the nm method and its improvements have been widely studied and utilized .the method has its advantage that the objective function need not to be differentiable , and it can thus be used in various applications .as noted in , the nelder - mead method is a widely used heuristic algorithm .only very limited convergence results exist for a class of low dimensional ( one or two dimensions ) problems , such as in the case when the objective function is strictly convex . for the numerical simulation , with analytical objective functions , matlab function _fminsearch _ can be used to find a minimizer . however , when dealing with problems with noise , nm method has the disadvantage that it lacks an effective sample size scheme for controlling noise , as shrinking steps are sensitive to the noise in the objective function values and then may lead to the search in a wrong direction .in fact an analytical and empirical evidence is known for the false convergence on stochastic function .so we use the stochastic nelder - mead simplex method ( snm ) to mitigate the possible mistakes in the stochastic setting .the newly developed adaptive random search in consists of a local search and a global search .it generates a new point and new objective function in the iteration with increasing number of the sample size scheme .a proper choice for is ] the largest integer not bigger than . here is the sum of objective function values for .snm leads to the convergence of at search points to ( and thus we obtain a minimizer ) , with probability one .we want to estimate the parameter by using both the original slow - fast system and the slow system , in order to demonstrate that the slow system is appropriate for parameter estimation , when is sufficiently small .take the true value ( say ) and numerically solve - with an initial condition to get samples of observational data , at time instants , ( save these data ) .take two initial guesses for the unknown system parameter and randomly and solve the original slow - fast system - , with the same initial condition and time points , .thus we obtain and values which depend on .take two initial guesses and randomly and solve the slow system with the same initial condition at the same time instants , .we thus obtain which depends on or . 
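The three steps just described (generate observations of the slow component, build the objective from the slow system, minimize by a simplex search) can be sketched compactly in Python. Since the coefficients of the example system are not reproduced here, the toy slow-fast pair below, its Euler-Maruyama discretization, the leading-order slow-manifold reduction y ≈ sin x, and the use of scipy's deterministic Nelder-Mead routine in place of the stochastic Nelder-Mead method are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def simulate_full(a, eps=0.01, sigma=0.1, T=5.0, dt=1e-3, x0=0.5, y0=0.0, seed=1):
    # Euler-Maruyama for the toy slow-fast pair
    #   dx = (-a x + y) dt                                   (slow, observed)
    #   dy = (1/eps)(-y + sin x) dt + (sigma/sqrt(eps)) dW   (fast, unobserved)
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        x[k + 1] = x[k] + (-a * x[k] + y[k]) * dt
        y[k + 1] = (y[k] + (-y[k] + np.sin(x[k])) * dt / eps
                    + sigma / np.sqrt(eps) * rng.normal(0.0, np.sqrt(dt)))
    return x

def simulate_slow(a, T=5.0, dt=1e-3, x0=0.5):
    # reduced equation on the leading-order slow manifold y ~ h(x) = sin x
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + (-a * x[k] + np.sin(x[k])) * dt
    return x

# step 1: synthetic observations of the slow component, generated from the full system
a_true = 2.0
data = simulate_full(a_true)
obs = np.arange(0, data.size, 50)        # observation instants t_1, ..., t_m

# step 2: least-squares objective built from the reduced slow system only
def objective(p):
    return np.sum((simulate_slow(float(p[0]))[obs] - data[obs]) ** 2)

# step 3: minimization by the Nelder-Mead simplex from an initial guess
# (scipy's deterministic simplex is used for brevity; the stochastic variant described
# above would additionally average the objective over several noise paths with a
# growing sample size)
a_hat = minimize(objective, x0=[5.0], method="Nelder-Mead").x[0]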
[Figure: objective functions for the slow-fast system and for the slow manifold reduced system, for two choices of the true parameter value (left and right panels).]
[Figures: objective function values and parameter estimators at each iteration, for the slow-fast system (top) and for the slow manifold reduced system (bottom), for two choices of the true parameter value.]
Figure [figure sdes-sm-fs001e001] shows the objective functions for the two true parameter values (left and right). Here we used sample paths to compute the expectation in the objective function. For fixed noise intensity and scale separation, figures [s001e001a01] and [s001e001a1] show the objective function values and the estimators at each iteration for the slow-fast system (top) and for the reduced system (bottom), for the two true parameter values, respectively. We observe that, as proved in the convergence result cited above, the objective function value tends to its minimum (= 0), while the minimizer, that is the parameter estimator, provides an accurate estimation of the system parameter. For sufficiently small scale separation, the objective function value for the slow system also gets closer and closer to zero, and the corresponding minimizer, that is the parameter estimator, is a good approximation of the estimator obtained from the original slow-fast system.
X. Kan, J. Duan, I. G. Kevrekidis and A. J. Roberts, Simulating stochastic inertial manifolds by a backward-forward approach. SIAM J. on Applied Dynamical Systems, in press, vol. 12, 2013. arXiv:1206.4954 [math.DS].
A parameter estimation method is devised for a slow-fast stochastic dynamical system, where often only the slow component is observable. By using observations of the slow component only, the system parameters are estimated by working with the slow system on the random slow manifold. This offers the benefit of dimension reduction in quantifying parameters of stochastic dynamical systems. An example is presented to illustrate this method and to verify that the parameter estimator based on the lower dimensional, reduced slow system is a good approximation of the parameter estimator for the original slow-fast stochastic dynamical system. Mathematics Subject Classifications (2010): primary 60H30; secondary 60H10, 37D10. Keywords: parameter estimation; slow-fast system; random slow manifold; quantifying uncertainty; numerical optimization.
the standard model which , on some large scale , is defined as a homogeneous and isotropic solution of einstein s equations for gravitationally interacting matter , has proved to be remarkably robust against various observational challenges especially of the recent past .it is this robustness together with a list of theoretical and observational arguments which makes it hard to see any need for an alternative to the standard model .nevertheless , there exist some simple arguments which let the standard model appear dogmatic and a replacement overdue , while most scientific activity in the field is directed towards a consolidation of the standard model .it is fair to say that most of the work , which is directed towards consolidation , is already based implicitly on the assumption that the standard model gives the correct picture .i here scetch a possible dialogue which we can watch without risk of being biassed by some prejudice : we have two people who try to defend their points of view and both of them might be biassed , but both advance arguments which can be proved or disproved .this dialogue is mirrored in an ongoing debate in the field of astrostatistics about the existence of evidence for a scale of homogeneity ( e.g. , davis 1996 and pietronero 1996 ) , a subject which is also dealt with in several contributions to this volume ( see : kerscher et al ., martnez , sylos labini et al . ) and was the subject of a panel discussion held during the meeting .let us start with the advocate of the standard model red : `` the ( large scale ) standard model is a solution of friedmann s equations : where is the cosmological constant , is related to the constant curvature of the model at time , the value of the homogeneous density at time , and is the scale function of the isotropically expanding ( or contracting ) universe . ''also : `` this model is unstable to perturbations in the density field and/or the velocity field , respectively '' , which is the well known content of gravitational instability ; we may call this property _ local gravitational instability_. in spite of this instability , red supports the following conjecture about the global properties of the universe ( a statement which will come again in various refined versions later on ) : * conjecture * ( version 1 ) : `` the universe can be _ approximated _ by the standard model , if _ averaged _ on some large scale , e.g. for the inhomogeneous density field we would have : for all times . '' here , we may think for simplicity that the brackets denote euclidean spatial averages of some tensor field as a function of eulerian coordinates and time , .the newtonian case serves as a good illustration .below , we shall explain that the arguments carry over to riemannian spaces and general relativity .green replies : `` i do nt expect that spatial averaging and time evolution commute as a result of the nonlinearity of the basic system of equations . ''blue explains : `` if we average the cosmological fields of density and velocity at some initial time ( at recombination ) and use these average values ( which are remarkably isotropic according to the microwave background measurements ) as initial data of a homogeneous isotropic solution , then , e.g. , the value of at time is expected to differ from the average field of the inhomogeneous initial data evolved to the time . this has been particularly emphasized by ellis ( 1984 ) . 
'' indeed , green is right : the explanation of non commutativity is given by blue in terms of the _ commutation rule _ for the expansion scalar defined as the divergence of the velocity field ( buchert , ehlers 1997 ) : `` equation ( 2 ) shows that , on any spatial domain , the evolution of the average quantity differs from the averaged evolved one , the difference being given by a fluctuation term .for details and discussions of what follows see ( buchert 1996 and buchert & ehlers 1997 ) ; for the relation to dynamical models see ( ehlers & buchert 1997 ) . '' also : `` equation ( 2 ) only assumes mass conservation , i.e. , we follow a tube of trajectories so that the mass in the spatial averaging domain is conserved in time .this is a sensible assumption , since we want to extend the spatial domain to the whole universe later on . ''blue goes further by specifying the local dynamical law for the expansion scalar .this is furnished by raychaudhuri s equation : `` introducing a scale factor via the volume , ( so , we do not care about the shape of the spatial domain ; it may expand anisotropically ) , raychaudhuri s equation for may be inserted into the commutation rule ( 2 ) resulting in the _ generalized friedmann equations _ : with the _ effective _ source term which involves averages over fluctuation terms of the expansion , the shear scalar and the vorticity scalar : thus , as soon as inhomogeneities are present , they are sources of the equation governing the average expansion. they may be negative or positive giving rise to an additional effective ( dynamical ) density which we may measure by the dimensionless ratio for , the source due to ` backreaction ' is , on the averaging domain , equal to that of the averaged matter density .the effective density does , in general , not obey a continuity equation like the matter density ; an effective mass is either produced or destroyed in the course of structure formation . ''red : `` in principle you are right , but i doubt that the effect is quantitatively significant . '' green : `` irrespective of the global relevance of this term , it will play an interesting role on scales where its value is non negligible ; for dominating shear fluctuations the ` backreaction ' could fake a ` dark matter ' component , since the mass in the standard friedmann equation will be overestimated in this case . ''red is going to advance a strong argument in favour of the standard model : `` in newtonian theory the ` backreaction ' term ( 4 ) vanishes by averaging over the whole universe , if the latter is topologically closed , i.e. , compact and without boundary . '' indeed , he succeeds in writing the local term as a divergence of some vector field , , if he assumes space to be euclidean .`` hence , using gau s theorem , we may transform the volume integral in the average into a surface integral over the boundary of the averaging domain . for compact universes without boundary ( e.g. , a 3torus ) this surface integralis zero ; we obtain on the torus . ''blue : `` as a side result red proved that the currently employed models for large scale structure , analytical or n body simulations , are constructed correctly : the assumption _ of periodic boundary conditions _ is equivalent to using as the 3space a hypertorus , not .these models so far have been assumed to be friedmannian on average _ by construction _ rather than derivation . ''green adds a disclaimer : `` the above argument depends on the flatness of space . 
''red : `` but , as was shown in ( buchert & ehlers 1997 ) the generalized friedmann equations ( 3 ) also hold for irrotational flows in general relativity . ''green : `` yes , but not the fact that the local term can be written as a divergence of some vector field , which makes , besides spatial curvature , a crucial difference . ''blue illustrates this last statement by green by some technical explanations : `` if one introduces normal ( gaussian ) coordinates and thus foliates spacetime into flow orthogonal hypersurfaces of constant time ( this is only possible for irrotational flows ) , then eqs .( 3 ) also hold . since eqs .( 3 ) are equations for scalar quantities , they are manifestly covariant and will hold in any coordinate system .the crucial difference to the newtonian treatment , however , is the non integrability of inhomogeneous deformations ( as defined below ) : write the metric of the spatial hypersurfaces as a quadratic form involving the one forms , then it is necessary and sufficient for the metric to be flat that the one forms are exact , i.e. , , and the coefficients reduce to a deformation gradient with respect to lagrangian coordinates , ; in other words , the coefficient matrix which measures the deformations is integrable .the non integrability of implies the non existence of a vector field .therefore , we can not shift a volume averaged quantity to a contribution on the boundary of the averaging domain and the conclusion on the vanishing of ` backreaction ' for models with non euclidean space sections can not be drawn using red s argument . ''green : `` this remark by blue is bad news in so far as we expect the valid theory to be general relativity on the large scales under consideration . ''blue : `` again , we can formulate a side result concerning current models of large scale structure : if we model structure in , e.g. , an undercritical density universe ( the total density parameter being ) , then the model _ has to be _ interpreted as a newtonian model .it makes sense to speak about hypersurfaces of constant negative curvature for the average model , but in that case there currently exists no proof that the average model is friedmannian for closed , curved spaceforms .moreover , it is then not even possible to introduce a simple hypertorus topology , since this is incompatible with a hyperbolic geometry ( compare , e.g. , lachize rey & luminet 1995 ) . ''red summarizes the preceeding findings and reformulates his statement : * conjecture * ( version 2 ) : on some large scale we still may approximate the metric of spatial hypersurfaces by the flat euclidean metric .then , we have two options ( which both are connected with the requirement of periodic boundary conditions ) : either , we live in a `` small universe '' ( ellis 1971 , ellis & schreiber 1986 ) , i.e. , space is genuinely compact without boundary and has a finite size .this we may call _ topological closure condition _ ( option a ) . or , the value of is numerically negligible on the scale .this would support the generally held view and we may call this _ technical closure condition _ ( option b ) , since then option a is a justified approximation . green : `` i agree that a compact universe is appealing , since only then we have some hope to explore a substantial fraction of its volume ; however , it may then not have flat space sections . 
''green accepts the flatness of space as a working hypothesis in order not to complicate the discussion : `` if we do nt have option a ( in which case red s argument is exact ) , we have to examine the ` backreaction ' term in more detail quantitatively '' .green develops his argument : `` the terms in have positive contributions ( vorticity and expansion fluctuations ) , , and negative ones ( shear fluctuations ) , '' , and `` each individual fluctuation term has fixed sign and , thus , does not vanish on _ any _ scale '' .blue : `` an immediate consequence of the positivity of these terms is that their values may decay with scale until we reach a representative volume of the universe , but as soon as we have reached this , these terms approach a finite positive value , even on a scale on which we may assume periodic boundary conditions . ''green concludes that `` the requirement of vanishing or smallness of the sum implies a _ conspiracy _ between vorticity , shear and expansion fluctuations , which is not to be expected a priori . ''the final refinement of red s statement therefore assumes the form : * conjecture * ( version 3 ) : on some large scale we still may approximate the metric of spatial hypersurfaces by the flat euclidean metric . in general we may not expect that the universe is genuinely periodic on the scale . however , on that scale , the term has a negligible value : either , because : each of the terms and is numerically small , so that the conspiracy assumption does not matter . or , because ( if the terms are not numerically small ) : the inhomogeneities evolve such that for scales approaching _ and for all times_. blue : `` both options imply assumptions on the initial fluctuation spectrum , and both formulate properties of gravitational _ dynamics _ which is , in principle , testable . ''red : `` i agree that we want , under all circumstances , avoid `` fine tuning '' ; the standard model should be generic for a wide range of dynamical models for the evolution of inhomogeneities . ''green now argues on the grounds of gravitational dynamics : `` up until now there is no dynamical model which includes the full ` backreaction ' ( apart from perturbative studies which may capture some of the effect see futamase 1989 , 1996 , bildhauer 1990 and bildhauer & futamase 1991a , b , as well as russ et al .1997 ) ; the main problem to construct such a model is the following : not only the inhomogeneities affect the global expansion , but also any model for the evolution of inhomogeneities will depend on how the average evolves in time . ''blue details : the latter is principally known from the linear theory of gravitational instability : if the universe expands faster , then the inhomogeneities have a harder time to form . here , we are faced with a nonlinear self interaction problem : we may start with a flat friedmann model as background ( the average of the ricci scalar is zero ) , and some model for the inhomogeneities relative to this background . from the inhomogeneous modelwe calculate the ` backreaction ' ( this was recently attempted by russ et al .however , if there is a nonvanishing ` backreaction ' on the global scale , then this procedure gives us just the first step in the sense of an iteration ; in the second step we would have a curved background ( the average of the ricci scalar is nonzero ) , and we would have to construct an inhomogeneous model for a curved background including the ` backreaction ' from the first iteration . 
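The fixed signs of the individual fluctuation terms, their non-vanishing on sub-domains, and the vanishing of their sum on a flat periodic (toroidal) box can be checked with a small numerical experiment. The sketch below assumes the standard Newtonian form of the source term given in Buchert & Ehlers (1997), Q_D = (2/3)<(theta - <theta>_D)^2>_D + 2<omega^2 - sigma^2>_D, computes the velocity gradients spectrally so that periodicity is exact, and uses an illustrative smoothed random velocity field; grid size, smoothing width and the choice of sub-domain are arbitrary.

import numpy as np

def spectral_gradients(u):
    # partial_a u_b (a, b = 1..3) of a periodic velocity field u of shape (3, N, N, N),
    # computed spectrally so that periodic boundary conditions hold exactly
    N = u.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(N)
    kvec = np.meshgrid(k, k, k, indexing="ij")
    grad = np.empty((3, 3, N, N, N))
    for b in range(3):
        u_hat = np.fft.fftn(u[b])
        for a in range(3):
            grad[a, b] = np.fft.ifftn(1j * kvec[a] * u_hat).real
    return grad

def backreaction(grad, mask=None):
    # Q_D = 2/3 <(theta - <theta>_D)^2>_D + 2 <omega^2 - sigma^2>_D
    # (Newtonian form of the source term, Buchert & Ehlers 1997)
    theta = grad[0, 0] + grad[1, 1] + grad[2, 2]
    sigma2 = np.zeros_like(theta)   # 1/2 sigma_ab sigma_ab, enters with a minus sign
    omega2 = np.zeros_like(theta)   # 1/2 omega_ab omega_ab, enters with a plus sign
    for a in range(3):
        for b in range(3):
            shear = 0.5 * (grad[a, b] + grad[b, a]) - (theta / 3.0) * (a == b)
            vort = 0.5 * (grad[a, b] - grad[b, a])
            sigma2 += 0.5 * shear ** 2
            omega2 += 0.5 * vort ** 2
    if mask is None:
        mask = np.ones(theta.shape, dtype=bool)
    avg = lambda f: f[mask].mean()
    return (2.0 / 3.0) * avg((theta - avg(theta)) ** 2) + 2.0 * (avg(omega2) - avg(sigma2))

def smooth_random_field(N, width, rng):
    # periodic random field with a gaussian cut-off in fourier space (illustrative)
    k = 2.0 * np.pi * np.fft.fftfreq(N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    window = np.exp(-0.5 * (kx ** 2 + ky ** 2 + kz ** 2) * width ** 2)
    return np.fft.ifftn(window * np.fft.fftn(rng.normal(size=(N, N, N)))).real

rng = np.random.default_rng(0)
N = 32
u = np.array([smooth_random_field(N, 2.0, rng) for _ in range(3)])
grad = spectral_gradients(u)

q_global = backreaction(grad)            # close to zero for the full periodic box
sub = np.zeros((N, N, N), dtype=bool)
sub[: N // 2, : N // 2, : N // 2] = True
q_sub = backreaction(grad, mask=sub)     # generally nonzero on a sub-domain

For the full box the average expansion vanishes and the fluctuation terms cancel almost exactly, reproducing red's argument for periodic Newtonian models, while on the sub-box each term is positive and their sum is generically nonzero, which is green's point about scales below the periodicity scale.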
in turnthe second iteration would yield the ` backreaction ' for this model , and the full ` backreaction ' could be calculated after steps of this procedure provided there is convergence to a solution . green : `` it is clear that we are far from being able to investigate such a model . for example , in a curved background we can neither use simple periodic boundary conditions , nor can we work with the standard fourier transformation ; we would have to work with eigenfunctions on curved spaces and would have to respect the compatibility with some , in general , nontrivial topologies . ''one `` way out '' is to cheat : green bases his further argumentation on the standard newtonian model : `` we may use the standard model which is mathematically well defined as the average over a general inhomogeneous but periodic newtonian model , and let the box of the simulation extend to very large scales . then , the ` backreaction ' can be calculated for subensembles of the simulation box on scales which we consider representative for the universe . '' this possible study will at least give us some quantitative clues of the effect ; it is the subject of an ongoing work ( buchert et al .1997 ) which blue is going to scetch in the next section . let us assume , in agreement with conjecture 3 by red , that the space sections are euclidean and that the inhomogeneities can be subjected to periodic boundary conditions on some large scale .usually , this scale is set to be around / h , mainly because of limits on cpu power using n body simulations .however , a recent analysis of the iras 1.2 jy catalogue ( see kerscher et al . ,this volume ) has demonstrated that fluctuations in the matter distribution do not vanish on that scale .a mock catalogue of the iras sample produced by a simulation with a box size of / h enforces the fluctuations to vanish on the periodicity scale , and the corresponding analysis of the mock sample shows disagreement in all moments ( except the first ) of the matter distribution with the observed data ( see kerscher et al ., this volume and kerscher et al . 1997 ) .this example shows that not only fluctuations in the average density are an indication for inhomogeneities , but averages over higher order moments of the density field ( e.g. reflected by averaged shear fluctuations ) create huge ( phase correlated ) , possibly low amplitude structures .thus , even if we do not argue globally about the ` backreaction problem ' ( the ` backreaction ' is zero by construction due to the assumed newtonian description and the periodicity ) , this effect has to be seriously considered on scales of current all sky surveys .this study entails , from a dynamical point of view , a quantification of _ cosmic variance _ within the standard model .we therefore have to run simulations of a considerably larger spatial extent ; we may use for simplicity `` truncated lagrangian schemes '' .these schemes have been shown to agree with n body results down to scales around the correlation length ( melott et al .1994 , wei et al . 1996 ) and , thus , are suitable tools to realize boxes of gigaparsec extent . for this purposeit may be considered sufficient to use the `` truncated first order scheme '' ( known as tza ; `` truncated zeldovich approximation '' , coles et al . 1993 ) . in a work in progressbuchert et al . 
(1997 ) consider two cobe normalized cosmogonies , standard cdm and a cdm model with cosmological constant .both cosmogonies are realized for a box of 1.8 gpc / h with an effective resolution of lagrangian fluid elements .the simulation box is then subdivided into smaller boxes and the ensemble average is taken over values of the dimensionless relative ` backreaction ' ( 4 ) .other quantities like the expected hubble constant or the expected density parameter including ` backreaction ' are also studied both numerically and analytically .-6.2 true cm a plot of the absolute value of , normalized by the global mean density , for the initial conditions scdm against scale is shown in figure 1 , which already gives a clear representation of the scale dependence of the effect under study .the absolute values are shown here , because might be negative or positive in subsamples , and the overall sum is , by assumption , zero .the absolute value is then an estimate of the expected ` backreaction ' on some scale .however , in some subsamples the effect may be smaller or larger .three preliminary conclusions may be drawn from the first results of this study : the magnitude of the ` backreaction ' source term is of the same order as the mean density and higher on scales / h for scdm .it quickly drops to a effect on scales of / h . the magnitude of the ` backreaction ' source term is proportional to the r.m.s .density fluctuations almost independent of scale . compared to the density fluctuations it amounts to a factor of about for scdm . using the `` zeldovich approximation '' we can calculate analytically the ` backreaction ' .this calculation shows that is a _ growing _ function of time in an expanding universe . this debate brought up one fundamental result which supports red : any inhomogeneous newtonian cosmology , whose flat space sections are confined to a length scale on which the matter variables are periodic , averages out to the standard model . on this global scalethere is no ` backreaction ' and the cosmological parameters of the homogeneous isotropic solutions of friedmann s equations are well defined also for the average cosmology on that scale .setting up simulations of large scale structure in this way is correct , and a global comoving coordinate system can be introduced to scale the whole cosmology .this validates the common way of constructing inhomogeneous models of the universe . on the other hand ,green s arguments initiate two well justified ways of saying that this architecture is `` forced '' due to the settings of a ) excluding curvature of the space sections , and b ) requiring periodic boundary conditions : one way is to analyze the effect of ` backreaction ' on scales smaller than the periodicity scale , and base this analysis upon the standard model , however , by extending the spatial size of periodicity to very large scales .the results obtained in the framework of a well tested approximation scheme are three fold : first , they show the importance of the influence of the inhomogeneities on average properties of a chosen spatial domain , _ although _ the `` forcing conditions '' bring the effect down to zero on the scale .second , the ` backreaction ' value is numerically small on large scales , but it is _ always _ larger than the r.m.s .density fluctuations ; in other words : taking the amplitudes of density fluctuations serious ( e.g. by normalizing the cosmogony on some large scale ) always implies the presence of , e.g. 
, shear fluctuations which are neglected on that scale in the standard model .third , they show that the effect is a growing function of time . from the latter resultwe may establish the notion of _ global gravitational instability _ of the standard model as opposed to the well known local instability : it states that , as soon as the ` backreaction ' has a non zero value at some time, this value will be increasing ; the average model drifts away from the standard model .the second `` forcing '' is due to the newtonian treatment .being justified on smaller scales , a newtonian model is expected to fail just when we approach the _ large _ scales of periodicity which we have to consider to justify neglection of the ` backreaction ' effect .setting up a general relativistic model unavoidably implies the presence of local curvature and will , in general , yield a nonvanishing average curvature for inhomogeneous models .neither simple periodic boundary conditions can be employed , nor can be proved that the ` backreaction ' effect should vanish globally , at least for compact space sections without boundary . both , red and green , are right , but if red s assumptions are weakened , the resulting cosmology has much richer properties and can not be confined to a simple box : it will take its additional degrees of freedom to evolve away from the standard model .green s more general view suffers from the fact that an alternative model is yet not formulated , but it is definitely within reach .i would like to thank martin kerscher , herbert wagner and arno wei for discussions , arno wei for his allowance to present fig.1 prior to publication of a common work .some aspects of this contribution were subject of a letter correspondence with jrgen ehlers , to whom i am also thankful for discussing the manuscript .bildhauer s. 1990 , prog .84 , 444 bildhauer s. & futamase t. 1991a , m.n.r.a.s .249 , 126 bildhauer s. & futamase t. 1991b , g.r.g . 23 , 1251 buchert t. 1996 , in `` mapping , measuring and modelling the universe '' , valncia 1995 , p. coles , v.j .martnez & m.j .bordera ( eds . ) , asp conference series , pp .349356 buchert t. , & ehlers j. 1997 , astron .320 , 1 buchert t. & kerscher m. , wei a.g .1997 , work in progress coles p. , melott a.l . & shandarin s.f .1993 , m.n.r.a.s .260 , 765 davis m. , 1996 , in `` critical dialogues in cosmology '' , princeton 1996 , n. turok ( ed . ), in press ehlers j. & buchert t. 1997 , g.r.g . , in press ellis g.f.r .1971 , g.r.g . 2 , 7 ellis g.f.r .1984 , in proc .10th international conference on `` general relativity and gravitation '' , b. bertotti et al ., dordrecht : reidel , 215288 ellis g.f.r . , & schreiber g. 1986 , phys .lett . a 115 , 97 ellis g.f.r . , ehlers j. , brner g. , buchert t. , hogan c.j ., kirshner r.p . , press w.h . , raffelt g. , thielemann f k . &van den bergh s. 1997 , in dahlem workshop report es19 `` the evolution of the universe '' , berlin 1995 , s. gottlber & g. brner ( eds . ) , chichester : wiley , in press futamase t. 1989 , m.n.r.a.s .237 , 187 futamase t. 1996 , phys .d. 53 , 681 kerscher m. , schmalzing j. , buchert t. & wagner h. 1997 , m.n.r.a.s . , submitted lachize rey m. & luminet j.p .1995 , phys .254 , 135 melott a.l ., buchert t. & wei a.g .1994 , astron .293 , 641 mukhanov v.f ., abramo r.w . &brandenberger r.h . , 1997 ,78 , 1624 pietronero l. 1996 , in `` critical dialogues in cosmology '' , princeton 1996 , n. turok ( ed . ) , in press russ h. , soffel m.h ., kasai m. & brner g. 
1997, Phys. Rev. D, in press. Weiß A.G., Gottlöber S. & Buchert T. 1996, M.N.R.A.S.
The averaging problem for inhomogeneous cosmologies is discussed in the form of a disputation between two cosmologists, one of them (red) advocating the standard model, the other (green) advancing some arguments against it. Technical explanations of these arguments as well as the conclusions of this debate are given by blue.
tumors develop by accumulating different mutations within a cell , which affect the cell s reproductive fitness . as , we refer to the fitness of a mutated cell as the ratio between a cell s rate to proliferate and the cell s rate of apoptosis compared to wild type cells .the higher the fitness , the more likely it is for the cell to proliferate . for high fitness values ,the population of cells growths very fast and stochastic effects play a minor role . in our model , this can be thought of as the formation of a tumor .however , many mutations have no impact on the cell s fitness , e.g. mutations not affecting coding or regulatory sequences . other mutations may lead to a fitness disadvantage , which implies that the cell s risk of apoptosis is higher than its chance of proliferation .however , the same mutations in combination with other mutations within the same cell might lead to a large fitness advantage .we were motivated by genetic studies in burkitt lymphoma , a highly aggressive tumor , where a single genetic alteration has an impact on a wide range of other genes , some of them affect cell growth while others induce apoptosis . more specifically , a chromosomal translocation between the _ myc _ protooncogene on chromosome 8 and one of three immunoglobulin ( _ ig _ ) genes is found in almost every case of burkitt lymphoma .this leads to deregulated expression of the _ myc _ rna and in consequence , to deregulated myc protein expression .the myc protein acts as a transcription factor and has recently been shown to be a general amplifier of gene expression , targeting a wide range of different genes .most importantly , _ myc _expression induces cell proliferation . in burkittlymphoma , the _ ig - myc _fusion is evidently the key mutation for tumorigenesis .however , _ myc _plays also a key role in inducing apoptosis .thus , the _ ig - myc _translocation alone would lead to cell death .therefore , the _ ig - myc _translocation has to be accompanied by additional mutations , which deregulate the apoptosis pathways , such as mutations affecting e.g. _ tp53 _ or _ arf _ .these additional mutations have probably only little direct impact on the cell s fitness , since apoptosis is rare .hence , these mutations can not be considered as primary driver mutations in the context of burkitt lymphoma . however , in combination with the _ myc _mutation these additional mutations decrease the apoptosis rate .consequently , the cells proliferate fast and the population grows accordingly , leading to tumorigenesis . because all cells carry the _mutation in burkitt lymphoma , but fast growth does not start immediately with that mutation , it seems to confer its large fitness advantage only in a certain genetical context .thus , interactions between different mutations may crucially affect the dynamics of cancer progression .due to the fact that those additional mutations do not confer a direct fitness advantage , they can not be considered as driver mutations . nevertheless , at least some of them are necessary in order for the _ myc _ mutation to become advantageous for the cell .therefore , they can not be regarded as true passenger mutations , either . throughout this manuscript, we therefore call these additional mutations `` secondary driver mutations '' . 
besides burkitt lymphoma ,epistatic effects in cancer initiation seem also to be relevant for other cancers .for example , we can think of the inactivation of a tumor suppressor gene discussed by knudson in the context of retinoblastoma .this inactivation is neutral for the first hit but highly advantageous for the second hit , and can hence be viewed as an interaction of genes .another case is found in lung carcinomas , where activation of each of two oncogenes ( _ sox2 _ and _ prkci _ ) alone is insufficient , but in concert they initiate cancer . in other cases , there is clear evidence for sign epistasis : the _ ras _ family of proto - oncogenes is also discussed to underlie epistatic effects .amplification of _ ras _ leads to senescence in the cell . nevertheless , _ras _ is a well known oncogenic driver gene .hence , the _ ras _ mutation needs to be accompanied by other mutations .moreover , the difficulty to distinguish between drivers and passengers suggests that for a full understanding of cancer initiation it is insufficient to think of these two types of mutations only .so far , most models have focused on the idea that passenger mutations have no effect or only a little effect , whereas each driver mutation increases the fitness of the cell .other models focus on the neutral accumulation of mutations .moreover , different mutations are typically treated as independent , which is a strong assumption that will often not be fulfilled . in our model, mutations are interacting in an epistatic way : the change in fitness induced by the driver mutation depends strongly on the genetic environment , i.e. in our case on the number of secondary driver mutations that are present in that cell .in addition we assume that the secondary driver mutations alone have almost no fitness advantage .such a dependence between mutations can strongly affect the dynamics of cancer initiation . in evolutionary biology ,epistatic systems are often analyzed regarding the structure or ruggedness of the landscape and the accessibility of different pathways .the experimental literature also studies which factors can lead to epistasis . here , we are interested in the dynamics of such an epistatic model , which we illustrate by stochastic , individual based simulations . in addition , we derive analytical results for the average number of cells with different combinations of mutations and find a good agreement with the average dynamics in individual based computer simulations .furthermore , we discuss the computation of the waiting time until cancer initiation .our results show that the dynamics in such systems of epistatic interactions are distinct from previous models of cancer initiation , which may have important consequences for the treatment of such cancers . while in previous models there is a steady increase in growth with every new mutation , in our modelthere is a period of stasis followed by a rapid tumor growth .of course , the biology of burkitt lymphoma is much more complex than modelled herein . 
to make the model more realisticone would have to distinguish between the different secondary driver mutations , since different genes contribute differently to the cells fitness , especially in a cell where the _ ig - myc _fusion is present .our model is not aimed to realistically describe such a situation in detail .instead , we focus on the extreme case of the so called _ all - or - nothing _epistasis to illustrate its effect on the dynamics of cancer initiation .as there is no theoretical analysis of epistatic effects in cancer initiation so far , a well understood minimalistic model seems to be necessary in order to illustrate the potential impact of epistasis on cancer progression .our minimalistic model clearly shows that epistasis can lead to a qualitatively different dynamics of cancer initiation .we analyze cancer initiation in a homogenous population of initially cells with discrete generations . in every generation ,each of the cells can either die or divide .if a cell divides , its two daughter cells can mutate with mutation probabilities for the driver mutation and for secondary driver mutations ( where the indicates that these would be called passenger mutations in closely related models ) . in principle, we could drop the assumption that these two mutation probabilities are independent on the cell of origin , but this would lead to inconvenient notation .we neglect back mutations and multiple mutations within one time step , because their probabilities are typically very small .figure [ fig : model ] summarizes the possible mutational pathways of the model .the variables denote the number of cells with or without the primary driver mutation ( or respectively ) , and secondary driver mutations .denote the number of cells with or without the primary driver mutation ( , or respectively ) , and secondary driver mutations .right : cells with only secondary driver mutations have neutral or nearly neutral fitness .the fitness of cells with the primary driver mutation depends on the number of secondary driver mutations within the cell , leading to an epistatic fitness landscape.,title="fig:",scaledwidth=50.0% ] + denote the number of cells with or without the primary driver mutation ( , or respectively ) , and secondary driver mutations .right : cells with only secondary driver mutations have neutral or nearly neutral fitness .the fitness of cells with the primary driver mutation depends on the number of secondary driver mutations within the cell , leading to an epistatic fitness landscape.,title="fig:",scaledwidth=49.0% ] a cell s probability for apoptosis and proliferation depends on the presence of the primary driver mutation and on the number of secondary driver mutations it has accumulated . for cells with no mutations , the division and apoptosis probabilities are both equal to .this implies that the number of cells is constant on average as long as no further mutations occur .we assume that the initial number of cells is high and thus we can neglect that the population would go extinct . for our parameter values ,the expected extinction time of our critical branching process exceeds the average life time of the organism by far . for cells without the primary driver mutation, each secondary driver mutation leads to a change in the cell s fitness by . 
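To make the verbal description of the process concrete, the following individual-based sketch iterates the discrete-generation branching dynamics directly. It is a minimal illustration only: the numerical values of N0, u_d, u_p, s_p and the step-like ("all-or-nothing") form of division_prob are placeholder assumptions and do not reproduce the rate expressions specified below for cells carrying the primary driver; daughters acquire at most one new mutation per generation and back mutations are neglected, as stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (the symbols follow the text, the values do not)
N0, GENERATIONS = 1000, 5000
u_d, u_p = 5e-4, 5e-3      # mutation probabilities per daughter cell
s_p = 0.001                # (nearly neutral) secondary-driver fitness effect
delta_d, s_dp = 0.05, 0.4  # driver penalty / advantage in the right context
K_CRIT = 3                 # secondary drivers needed before the driver pays off

def division_prob(has_driver, k):
    """Probability that a (has_driver, k) cell divides in one generation.
    Hypothetical 'all-or-nothing' form, not the expressions of the paper."""
    if not has_driver:
        return min(0.999, 0.5 * (1.0 + s_p) ** k)
    return 0.5 * (1.0 - delta_d) if k < K_CRIT else min(0.999, 0.5 * (1.0 + s_dp))

def add(pop, key, n):
    if n:
        pop[key] = pop.get(key, 0) + n

def step(pop):
    """One generation: each cell dies or divides; each daughter may acquire
    at most one new mutation (back mutations are neglected)."""
    new = {}
    for (d, k), n in pop.items():
        daughters = 2 * rng.binomial(n, division_prob(d, k))  # non-dividers die
        if daughters == 0:
            continue
        if d:
            m_p = rng.binomial(daughters, u_p)        # gain a secondary driver
            add(new, (True, k + 1), m_p)
            add(new, (True, k), daughters - m_p)
        else:
            m_d = rng.binomial(daughters, u_d)        # gain the primary driver
            m_p = rng.binomial(daughters - m_d, u_p)
            add(new, (True, k), m_d)
            add(new, (False, k + 1), m_p)
            add(new, (False, k), daughters - m_d - m_p)
    return new

pop = {(False, 0): N0}
for t in range(1, GENERATIONS + 1):
    pop = step(pop)
    if pop.get((True, K_CRIT), 0) > 0:
        print("tumour-initiating cell first present in generation", t)
        break
```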
for cells with the primary driver mutation ,the fitness advantage obtained with each secondary driver mutation is .the driver mutation increases both the apoptosis rate and the proliferation rate .the increase in the apoptosis rate is and the increase in the division rate is . with these parameters ,the proliferation rate for cells with secondary driver mutations but without the primary driver mutation is whereas the proliferation rate for such cells with the primary driver mutation is the apoptosis rates , denoted as and are simply one minus the proliferation rate occur at fixed rates and for primary and secondary drivers , respectively . for a long time , the overall fitness does not increase noticeably . for , it stays on average constant .hence , the total number of cells stays approximately constant .only when a cell with enough secondary driver mutations and also the primary driver mutation arises , the cell s fitness is increased substantially beyond the fitness of other cells and its chance of proliferation is significantly increased . at that point ,the total number of cells starts to increase rapidly , see figure [ fig : total ] . in models presented in literature so far ,the cell s fitness is increased independently with every ( driver ) mutation ( see e.g. * ? ? ?* ; * ? ? ?although the total number of cells increases exponentially , these models do not find a sudden burst in the number of cells ., secondary driver fitness advantage , the primary driver fitness advantage , primary driver disadvantage , advantage of a secondary driver mutation in the presence of the primary driver mutation , mutation rates for secondary driver mutations , mutation rate for the primary driver mutation ) . , scaledwidth=80.0% ] , , , , , , ),scaledwidth=100.0% ] in figure [ fig : single ] , the total number of cells is subdivided into the number of cells with different numbers of mutations .the left panel presents the cells that have not acquired the primary driver mutation , the right one shows cells with the primary driver mutation .cells with the primary driver mutation , but not enough secondary driver mutations , arise occasionally , but those cells die out quickly again thus , their average abundance is small .cells without the primary driver mutation do not die out , they also do not induce fast growth , cf . figure [ fig : single ] . only cells that have obtained enough secondary driver mutations andin addition acquire the primary driver mutation , divide so quickly that the population size increases rapidly .the parameters in our figures have been chosen such that a cells acquires a substantial growth advantage once the primary driver mutation co - occurs with 4 secondary driver mutations .this event can occur at any time and hence , in some simulation the number of cells can increase very early , whereas in other simulations the number of cells does not undergo fast proliferation for many generations .consequently , the rate of progression has an enormous variation .for the parameters from our figures , the time at which rapid proliferation occurs varied between and generations in 500 simulations .the distribution of these times is discussed in more detail below .we can calculate the average number of cells with a certain number of mutations at a given generation .the number of cells which do not have the primary driver mutation and secondary driver mutations ( i.e. 
) changes on average by means of the cell s fitness and it decreases by the mutation rate where .the solution of equation for , i.e.if the secondary driver mutations are not neutral , is where denotes the initial number of cells .the mathematical proof of equation is given in [ subsec : withoutdriver ] .note , that the product can be written in terms of a -binomial coefficient , for the case , we take the limit of the -binomial coefficient ( e.g. * ? ? ?* ) and obtain which is the result that is also expected if the secondary driver mutations are neutral and accumulated independently of each other . intuitively , the term describes the probability of obtaining exactly mutations in generations .there are different possibilities when the mutations happen , these possibilities are captured by the binomial coefficient .thus , we have a growing polynomial term in and a declining exponential term in , since . in the case of ,the interpretation is similar . here, additionally the fitness advantage for secondary driver mutations has to be taken into account .since the number of cells with secondary driver mutations grows with , also the number of cells that can mutate grows .hence , the factor is multiplied to the expression and the binomial coefficient turns into the -binomial coefficient .[ tbl : abbrev ] .summary of our abbreviations [ cols= " < , < " , ] for cells that have obtained the primary driver mutation and secondary driver mutations , the situation is slightly more complex .there are different possibilities on how to obtain secondary driver and the primary driver mutation , since some of the secondary driver mutations may have occurred before the primary driver mutation has been acquired whereas others may have occurred afterwards .let denote the number of cells with the primary driver mutation and secondary driver mutations , when the primary driver mutation has happened in a cell with secondary driver mutations .note that .the change in the number of cells now depends on . using the abbreviations from table [ tbl : abbrev ] to simplify our notation, we have to express the average number of cells in total we need to sum over all possible pathways , in [ subsec : withdriver ] , we proof that the analytical solution of equation is , \end{split}\end{aligned}\ ] ] if .the summation over indicates the different mutational pathways .an intuitive explanation of this somewhat lengthy equation is given in [ sec : intuitive ] .interestingly , the case for is much more challenging .the underlying problem is that the normal binomial coefficient can not be expressed in a sum in the way the -binomial coefficient can be expressed , when summing over all generations of the population with secondary driver mutations to derive the expression for the population of cells with secondary driver mutations , we have to calculate the sum when we go further and try to calculate the expression for the population with secondary driver mutations , we need to apply this sum -times and hence we obtain a multi sum , only an analytical expression for this multi sum would allow a closed solution of the problem with .also taking the limit of our expression for is a substantial mathematical challenge .however , we can use our solution for for arbitrarily small values of .moreover , numerical considerations show that the result for is very close to the case of . 
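The resummation of mutational histories used above rests on the q-binomial (Gaussian) coefficient and, in the appendix, on the q-Pascal rule. The short check below is not the closed-form solution itself; it only verifies numerically that the product form of the q-binomial coefficient satisfies one standard form of the q-Pascal identity and that it reduces to the ordinary binomial coefficient in the neutral limit q -> 1, the limit invoked for neutral secondary driver mutations. The values of n, k and q are arbitrary test values.

```python
from math import comb, isclose

def q_binomial(n, k, q):
    """Gaussian (q-)binomial coefficient [n choose k]_q for q != 1,
    defined by prod_{i=0}^{k-1} (1 - q**(n - i)) / (1 - q**(i + 1))."""
    if k < 0 or k > n:
        return 0.0
    num = den = 1.0
    for i in range(k):
        num *= 1.0 - q ** (n - i)
        den *= 1.0 - q ** (i + 1)
    return num / den

n, k, q = 12, 5, 1.001   # q = 1 + s plays the role of a small fitness effect

# one standard form of the q-Pascal rule:
#   [n, k]_q = [n-1, k-1]_q + q**k * [n-1, k]_q
lhs = q_binomial(n, k, q)
rhs = q_binomial(n - 1, k - 1, q) + q ** k * q_binomial(n - 1, k, q)
assert isclose(lhs, rhs, rel_tol=1e-9)

# neutral limit q -> 1: the q-binomial reduces to the ordinary binomial
# coefficient, the case used for neutral secondary driver mutations
assert isclose(q_binomial(n, k, 1.0 + 1e-9), comb(n, k), rel_tol=1e-5)
```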
in figure [fig : dyn ] , the dynamics of the average number of cells with a certain number of mutations , is shown , both without and with the primary driver mutation .simulation results for agree very well with the analytical result obtained for .( circles ) agree almost perfectly with the analytical result obtained for .the bars represent the standard deviation .cells with no mutation have a very small relative standard deviation and cells with one mutation ( i.e. one passenger only or the driver only ) have a relatively small standard deviation .in contrast , cells with two passenger mutations for instance have a very broad standard deviation in the beginning that is approximately four times the average number . only in few realizations , a primary driver mutations co -occurs with several secondary drivers , hence the simulation data for these cases shows a large spread ( parameters : , , , , , , ).,scaledwidth=100.0% ] next , let us calculate the distribution of the time it takes until rapid proliferation occurs .since we use a multi type , time discrete branching process , we can make use of the probability generating function to recursively calculate the probability for a certain cell type cell to be present at a certain time . of particular interest is the probability that a tumor initiating cell is present , i.e. in the example above a cell with the primary driver mutation and 4 secondary driver mutations .let be the probability generation function for the cell type in the branching process described above .the probability that there is no cell with the primary driver mutation and secondary driver mutations at time , when the process starts with one -cell , can be computed by recursively calculating , where starting with cells , we have to consider .thus , the probability that there is at least one tumor cell at time is . if we are interested in the probability that there is at least one tumor cell exactly at time , we need to subtract the probability , that there has been a tumor cell before .figure [ fig : timecompare ] shows the good agreement between simulations and the recursive calculation . shown as a dashed line ( parameters : , , , , , , , distribution over 20000 independent realizations ) ., scaledwidth=70.0% ] the time distribution for low follows a power law , as shown in the inset of figure [ fig : timecompare ] .the exponent of the power law is approximately .if all mutations were neutral , one would expect a lead coefficient of approximately 4 to accumulate five mutations , as derived by . in our case ,the curve increases slower .numerical considerations show that the main reason for this is that , in contrast to , we allow extinction : many lineages that have accumulated mutations go extinct before the final , cancer causing mutation arises .most models in literature assume that each mutation leads to an independent and steady increase in the cells fitness . in this context ,neutral passenger mutations have no causal impact on cancer progression . only recently, some authors have considered passenger mutations not only as neutral byproducts of the clonal expansion of mutagenic cells , but as having a deleterious impact on the cells fitness . here , we have described a model in which the fitness of the driver mutation strongly depends on the number of passenger mutations the cell has acquired .these passenger mutations , which we have termed secondary driver mutations , lead only to a small change in fitness or no change in fitness at all . 
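The probability-generating-function recursion for the waiting-time distribution can be coded directly once per-type offspring generating functions are written down. In the sketch below a cell either dies or divides into two independent daughters, each of which may acquire at most one new mutation, matching the description above; the number of types, the mutation probabilities and the division-probability function b are placeholder choices made to keep the example small and are not the parameter values used for the figures.

```python
import numpy as np

# Types are (has_driver, k) with k = 0..K secondary drivers; the
# tumour-initiating type is (True, K).  All numbers are placeholders.
K = 2
u_d, u_p = 1e-3, 1e-2
N0, T_MAX = 10**4, 500
types = [(d, k) for d in (False, True) for k in range(K + 1)]
idx = {typ: i for i, typ in enumerate(types)}
target = idx[(True, K)]

def b(d, k):
    """Division probability (hypothetical form: deleterious driver until k = K)."""
    if not d:
        return 0.5 * (1.0 + 0.001) ** k
    return 0.5 * 0.95 if k < K else 0.5 * 1.4

def daughter_dist(d, k):
    """Type distribution of a single daughter of a (d, k) cell."""
    p = np.zeros(len(types))
    mut_p = u_p if k < K else 0.0
    if not d:
        p[idx[(True, k)]] = u_d                    # acquires the primary driver
        if k < K:
            p[idx[(False, k + 1)]] = mut_p         # acquires a secondary driver
        p[idx[(False, k)]] = 1.0 - u_d - mut_p
    else:
        if k < K:
            p[idx[(True, k + 1)]] = mut_p
        p[idx[(True, k)]] = 1.0 - mut_p
    return p

def pgf(s):
    """Offspring generating function: a cell dies with probability 1 - b
    or divides into two independent, possibly mutated daughters."""
    out = np.empty(len(types))
    for typ, i in idx.items():
        m = daughter_dist(*typ) @ s
        out[i] = (1.0 - b(*typ)) + b(*typ) * m * m
    return out

# P(at least one tumour-initiating cell at generation t), N0 founder cells
s = np.ones(len(types)); s[target] = 0.0
prob_some = []
for t in range(1, T_MAX + 1):
    s = pgf(s)                                      # t-fold composition of the PGF
    prob_some.append(1.0 - s[idx[(False, 0)]] ** N0)
t_half = next((t for t, p in enumerate(prob_some, start=1) if p > 0.5), None)
print("median waiting time (generations):", t_half)
```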
as illustrated in figures [ fig : total ] , [ fig : single ] , and [ fig : dyn ] , the number of cells stays roughly constant for a long time before it rapidly increases , despite the fact that mutations occur in the process permanently .this dynamic effect of cancer initiation is very different from models in which mutations do not interact with each other .we speculate that this kind of dynamics can have important implications for diagnosis and treatment . in principle, the dynamics presented in figure [ fig : total ] can also be the result of one highly advantageous , but very unlikely driver mutation .but in such a case , cells with the driver mutation should not be present in the population before tumorigenesis .this contradicts with current knowledge about the _ myc _translocation which has also been detected in humans without lymphoma .this effect is well captured by our model , as shown in figure [ fig : single ] . in some tumors , such as burkitt lymphoma, the neoplasms is only diagnosed after fast tumor growth has started . in this case , sequencing studies have shown that several mutations are present at the time of examination . since the patients typically do not have any symptoms in before diagnosis of the cancer ,it is possible that some mutations have virtually no direct impact on the cells fitness .nevertheless , they are necessary for the initiation of the cancer , as they indirectly allow the driver mutation to initiate rapid cell growth .this agrees well with our epistatic model , where ( nearly ) neutral secondary driver mutations occur at a fixed rate before the cancer can be diagnosed .of course , not all mutations have such an epistatic effect on primary driver mutations , some might even be considered deleterious .nevertheless , our work shows that mutations that appear to be neutral in one context should not only be regarded as a neutral byproduct of the clonal expansion of mutagenic cells . instead , in some cases passenger mutations can have a serious impact in cancer initiation , in particular when there are non - trivial interactions between different mutations . in this case the term passenger " may not be the most appropriate one . to understandthe impact of those interactions can be essential for a deeper understanding of the initiation of cancer .we thank j. richter for stimulating discussions on burkitt lymphoma and b. werner for helpful comments on our manuscript .generous funding by the max - planck society is gratefully acknowledged .r.s . is supported the german ministry of education and science ( bmbf ) through the mmml - myc - sys network systems biology of myc - positive lymphomas ( 036166b )we first consider the case without the primary driver mutation .we assume ( and consequently ) , as discussed in the main text .the rate change of cells with secondary driver mutations is where .the solution of is formulated in the following theorem : [ thm : x0k1 ] for any integer , the number of cells with secondary driver mutations and no primary driver mutation is if is a solution , then it must satisfy .since solutions for recursive functions are always unique , would be the only solution .hence , we proof by inserting the equation on the right hand side of . we can write each product as a -binomial coefficient , . 
thus , with the -pascal rule equation simplifies to which concludes the proof .we now turn to the cells which have obtained the primary driver mutation .as discussed in the main text , we only look at the case where the fitness change of the secondary driver mutation is not equal to zero , . while for cells without the primary driver mutation there is only one mutational pathway , cells with the primary driver mutationcan be reached via different mutational pathways , because cells that get the primary driver mutation might have different amounts of secondary driver mutations .hence , we need to sum over all those possible pathways .let be the number of secondary drivers that are present in the cell which acquires the primary driver mutation .then denotes the number of cells with the primary driver mutation and secondary driver mutations , when the primary driver mutation has happened in a cell with secondary driver mutations ( ) . with this , the total number of cells with the primary driver mutation is the change in the number of cells now depends on .we have the solution of is given by the following theorem : [ thm : x1kp ] the average number of cells with the primary driver mutation and secondary driver mutations , given that the primary driver mutation happens in a cell with secondary driver mutations , is given by , \end{split } \label{driverexp}\end{aligned}\ ] ] where the function is defined again we proof the theorem by inserting in and showing that the equality holds true .we need to distinguish between the two cases as in .first , we proof the theorem for the case . \\ &\quad + \mu_\mathrm{p}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^{k-1 } n\mu_d\mu_p^{k-1 } \varsigma_\mathrm{d}^{k - p-1 } \varsigma_\mathrm{dp}^{((k-1)(k-2)-p(p-1))/2 } \frac{\varsigma_\mathrm{p}^{p(p+1)/2}}{\prod_{n=0}^{p-1}\left(1-\varsigma_\mathrm{p}^{n+1}\right ) } \\ & \quad \times \left [ \nu_{\mathrm{p}}^{t - p-1 } \psi_{p , k-1}(t-1 ) - \sum_{j = p}^{k-1}\nu_{\mathrm{p}}^{j - p}\left(\nu_\mathrm{d}\varsigma_\mathrm{dp}^j\varsigma_\mathrm{d}\right)^{t - k } \psi_{p , j}(j)\prod_{n = j}^{k-2}\frac{1-\varsigma_\mathrm{dp}^{t - n-2}}{1-\varsigma_\mathrm{dp}^{k - n-1 } } \right ] \\ &\quad = n\mu_d\mu_p^k \varsigma_\mathrm{d}^{k - p } \varsigma_\mathrm{dp}^{(k(k-1)-p(p-1))/2 } \frac{\varsigma_\mathrm{p}^{p(p+1)/2}}{\prod_{n=0}^{p-1}\left(1-\varsigma_\mathrm{p}^{n+1}\right ) } \\ & \quad \times \left [ \nu_\mathrm{p}^{t - p-1}\left ( \nu_\mathrm{d}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^k \psi_{p , k}(t-1 ) + \psi_{p , k-1}(t-1 )\right ) - \nu_\mathrm{p}^{k - p}(\nu_\mathrm{d}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^k)^{t - k}\psi_{p , k}(k ) \right.\\ & \quad - \left .\sum_{j = p}^{k-1 } \nu_\mathrm{p}^{j - p } \psi_{p , j}(j ) \left(\nu_\mathrm{d}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^j \right)^{t - k}\left ( \varsigma_\mathrm{dp}^{k - j}\prod_{n = j}^{k-1}\frac{1-\varsigma_\mathrm{dp}^{t - n-2}}{1-\varsigma_\mathrm{dp}^{k - n}}+\prod_{n = j}^{k-2}\frac{1-\varsigma_\mathrm{dp}^{t - n-2}}{1-\varsigma_\mathrm{dp}^{k - n-1 } } \right ) \right ] \end{split}\end{aligned}\ ] ] when we compare and we see , that the two equations are equal if and for equation , we have for equation , we need to insert the definition of this concludes the proof for the case .now we look at the case .we have \\ & \quad + \mu_\mathrm{d}\varsigma_\mathrm{p}^k n\mu_\mathrm{p}^k \nu_\mathrm{p}^{t - k-1 } \varsigma_\mathrm{p}^{k(k-1)/2 } \prod_{n=0}^{k-1}\frac{1-\varsigma_\mathrm{p}^{t - n-1}}{1-\varsigma_\mathrm{p}^{n+1}}\\ & \quad = 
n \mu_\mathrm{d}\mu_\mathrm{p}^k \frac{\varsigma_\mathrm{p}^{k(k+1)/2}}{\prod_{n=0}^{k-1}(1-\varsigma_\mathrm{p}^{n+1})}\left [ \nu_\mathrm{d}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^k\nu_\mathrm{p}^{t - k-1 } \psi_{k , k}(t-1 ) - \left(\nu_\mathrm{d}\varsigma_\mathrm{d}\varsigma_\mathrm{dp}^k\right)^{t - k } \psi_{k , k}(k ) + \nu_\mathrm{p}^{t - k-1}\prod_{n=0}^{k-1}\left ( 1-\varsigma_\mathrm{p}^{t - n-1 } \right ) \right].\end{aligned}\ ] ] in order for this to be equal to , we need analogue to this equation holds true if by writing the summation as a -pochhammer symbol , we have this concludes the proof also for .here , we try to understand this equation in a more intuitive way . for each generation , the number of possibilities to distribute the secondary driver mutations over time steps is given by the -binomial coefficient .but the growth of the cells depends on the time when the secondary driver mutations are first acquired .due to fitness advantage , the earlier the mutations have been acquired , the faster the population grows , and also the sooner the primary driver mutation can be obtained . as in equation( 5 ) , the effect of the fitness advantage on the cells without the primary driver mutation itself is captured by multiplying .the effect on the primary driver mutation is more intricate . to capture this effect , we start from a -binomial coefficient and rewrite the -pochhammer symbol in the numerator in terms of a sum , to make this resemble the term in the parentheses in the second line of equation ( 11 ), we divide the numerator by and we obtain with equation can be written as for the numerator of this modified -binomial coefficient , we introduce the abbreviation in terms of this -function equation ( 11 ) can be written in a more compact form as .\end{split}\end{aligned}\ ] ] alexandrov , l. b. , nik - zainal , s. , wedge , d. c. , aparicio , s. a. j. r. , behjati , s. , biankin , a. v. , bignell , g. r. , bolli , n. , borg , a. , brresen - dale , a .-l . , boyault , s. , burkhardt , b. , butler , a. p. , caldas , c. , davies , h. r. , desmedt , c. , eils , r. , eyfjrd , j. e. , foekens , j. a. , greaves , m. , hosoda , f. , hutter , b. , ilicic , t. , imbeaud , s. , imielinski , m. , imielinsk , m. , jger , n. , jones , d. t. w. , jones , d. , knappskog , s. , kool , m. , lakhani , s. r. , lpez - otn , c. , martin , s. , munshi , n. c. , nakamura , h. , northcott , p. a. , pajic , m. , papaemmanuil , e. , paradiso , a. , pearson , j. v. , puente , x. s. , raine , k. , ramakrishna , m. , richardson , a. l. , richter , j. , rosenstiel , p. , schlesner , m. , schumacher , t. n. , span , p. n. , teague , j. w. , totoki , y. , tutt , a. n. j. , valds - mas , r. , van buuren , m. m. , van t veer , l. , vincent - salomon , a. , waddell , n. , yates , l. r. , australian pancreatic cancer genome initiative , icgc breast cancer consortium , icgc mmml - seq consortium , icgc pedbrain , zucman - rossi , j. , futreal , p. a. , mcdermott , u. , lichter , p. , meyerson , m. , grimmond , s. m. , siebert , r. , campo , e. , shibata , t. , pfister , s. m. , campbell , p. j. , stratton , m. r. , 2013 .signatures of mutational processes in human cancer .nature 500 , 415421 .beerenwinkel , n. , antal , t. , dingli , d. , traulsen , a. , kinzler , k. w. , velculescu , v. e. , vogelstein , b. , nowak , m. a. , 2007 . genetic progression and the waiting time to cancer .plos computational biology 3 , e225 .birch , j. m. , blair , v. , kelsey , a. m. , evans , d. g. , harris , m. 
, tricker , k. j. , varley , j. m. , sep 1998 .cancer phenotype correlates with constitutional tp53 genotype in families with the li - fraumeni syndrome .oncogene 17 ( 9 ) , 10618 . bozic , i. , antal , t. , ohtsuki , h. , carter , h. , kim , d. , chen , s. , karchin , r. , kinzler , k. w. , vogelstein , b. , nowak , m. a. , 2010. accumulation of driver and passenger mutations during tumor progression .proceedings of the national academy of sciences usa 107 , 1854518550 .frhling , s. , scholl , c. , levine , r. l. , loriaux , m. , boggon , t. j. , bernard , o. a. , berger , r. , dhner , h. , dhner , k. , ebert , b. l. , teckie , s. , golub , t. r. , jiang , j. , schittenhelm , m. m. , lee , b. h. , griffin , j. d. , stone , r. m. , heinrich , m. c. , deininger , m. w. , druker , b. j. , gilliland , d. g. , 2007 .identification of driver and passenger mutations of flt3 by high - throughput dna sequence analysis and functional assessment of candidate alleles .cancer cell 12 , 501513 .greenman , c. , stephens , p. , smith , r. , dalgliesh , g. l. , hunter , c. , bignell , g. , davies , h. , teague , j. , butler , a. , stevens , c. , edkins , s. , omeara , s. , vastrik , i. , schmidt , e. e. , avis , t. , barthorpe , s. , bhamra , g. , buck , g. , choudhury , b. , clements , j. , cole , j. , dicks , e. , forbes , s. , gray , k. , halliday , k. , harrison , r. , hills , k. , hinton , j. , jenkinson , a. , jones , d. , menzies , a. , mironenko , t. , perry , j. , raine , k. , richardson , d. , shepherd , r. , small , a. , tofts , c. , varian , j. , webb , t. , west , s. , widaa , s. , yates , a. , cahill , d. p. , louis , d. n. , goldstraw , p. , nicholson , a. g. , brasseur , f. , looijenga , l. , weber , b. l. , chiew , y .- e . ,defazio , a. , greaves , m. f. , green , a. r. , campbell , p. , birney , e. , easton , d. f. , chenevix - trench , g. , tan , m .- h . ,khoo , s. k. , teh , b. t. , yuen , s. t. , leung , s. y. , wooster , r. , futreal , p. a. , stratton , m. r. , mar 2007 .patterns of somatic mutation in human cancer genomes .nature 446 , 1538 .hummel , m. , bentink , s. , berger , h. , klapper , w. , wessendorf , s. , barth , t. f. e. , bernd , h .- w . ,cogliatti , s. b. , dierlamm , j. , feller , a. c. , hansmann , m .-haralambieva , e. , harder , l. , hasenclever , d. , khn , m. , lenze , d. , lichter , p. , martin - subero , j. i. , mller , p. , mller - hermelink , h .- k . , ott , g. , parwaresch , r. m. , pott , c. , rosenwald , a. , rosolowski , m. , schwaenen , c. , strzenhofecker , b. , szczepanowski , m. , trautmann , h. , wacker , h .- h ., spang , r. , loeffler , m. , trmper , l. , stein , h. , siebert , r. , molecular mechanisms in malignant lymphomas network project of the deutsche krebshilfe , jun 2006 . a biologic definition of burkitt s lymphoma from transcriptional and genomic profiling .new england journal of medicine 354 ( 23 ) , 241930 .jones , s. , chen , w .- d . , parmigiani , g. , diehl , f. , beerenwinkel , n. , antal , t. , traulsen , a. , nowak , m. a. , siegel , c. , velculescu , v. , kinzler , k. w. , vogelstein , b. , willis , j. , markowitz , s. , 2008 .comparative lesion sequencing provides insights into tumor evolution .proceedings of the national academy of sciences usa 105 , 42834288 .justilien , v. , walsh , m. p. , ali , s. a. , thompson , e. a. , murray , n. r. , fields , a. p. , 2014 .the prkci and sox2 oncogenes are coamplified and cooperate to activate hedgehog signaling in lung squamous cell carcinoma .cancer cell 25 , 139151 .love , c. 
, sun , z. , jima , d. , li , g. , zhang , j. , miles , r. , richards , k. l. , dunphy , c. h. , choi , w. w. l. , srivastava , g. , lugar , p. l. , rizzieri , d. a. , lagoo , a. s. , bernal - mizrachi , l. , mann , k. p. , flowers , c. r. , naresh , k. n. , evens , a. m. , chadburn , a. , gordon , l. i. , czader , m. b. , gill , j. i. , hsi , e. d. , greenough , a. , moffitt , a. b. , mckinney , m. , banerjee , a. , grubor , v. , levy , s. , dunson , d. b. , dave , s. s. , 2012 . the genetic landscape of mutations in burkitt lymphoma .nat genet 44 , 13211325 .mcfarland , c. d. , korolev , k. s. , kryukov , g. v. , sunyaev , s. r. , mirny , l. a. , feb 2013 .impact of deleterious passenger mutations on cancer progression .proceedings of the national academy of sciences usa 110 , 29105 .mller , j. r. , janz , s. , goedert , j. j. , potter , m. , rabkin , c. s. , jul 1995 .persistence of immunoglobulin heavy chain / c - myc recombination - positive lymphocyte clones in the blood of human immunodeficiency virus - infected homosexual men .proceedings of the national academy of sciences usa 92 ( 14 ) , 657781 .nie , z. , hu , g. , wei , g. , cui , k. , yamane , a. , resch , w. , wang , r. , green , d. r. , tessarollo , l. , casellas , r. , zhao , k. , levens , d. , 2012 .c - myc is a universal amplifier of expressed genes in lymphocytes and embryonic stem cells .cell 151 , 6879 .nowak , m. , komarova , n. l. , sengupta , a. , jallepalli , p. , shih , i. , vogelstein , b. , lengauer , c. , 2002 .the role of chromosomal instability in tumour initiation .proceedings of the national academy of sciences usa 99 ( 25 ) , 1622616231 .pelengaris , s. , khan , m. , evan , g. i. , may 2002 . suppression of myc - induced apoptosis in beta cells exposes multiple oncogenic properties of myc and triggers carcinogenic progression .cell 109 ( 3 ) , 32134 . richter , j. , schlesner , m. , hoffmann , s. , kreuz , m. , leich , e. , burkhardt , b. , rosolowski , m. , ammerpohl , o. , wagener , r. , bernhart , s. h. , lenze , d. , szczepanowski , m. , paulsen , m. , lipinski , s. , russell , r. b. , adam - klages , s. , apic , g. , claviez , a. , hasenclever , d. , hovestadt , v. , hornig , n. , korbel , j. o. , kube , d. , langenberger , d. , lawerenz , c. , lisfeld , j. , meyer , k. , picelli , s. , pischimarov , j. , radlwimmer , b. , rausch , t. , rohde , m. , schilhabel , m. , scholtysik , r. , spang , r. , trautmann , h. , zenz , t. , borkhardt , a. , drexler , h. g. , mller , p. , macleod , r. a. f. , pott , c. , schreiber , s. , trmper , l. , loeffler , m. , stadler , p. f. , lichter , p. , eils , r. , kppers , r. , hummel , m. , klapper , w. , rosenstiel , p. , rosenwald , a. , brors , b. , siebert , r. , icgc mmml - seq project , 2012 .recurrent mutation of the id3 gene in burkitt lymphoma identified by integrated genome , exome and transcriptome sequencing .nature genetics 44 ( 12 ) , 131620 .sander , s. , calado , d. p. , srinivasan , l. , kchert , k. , zhang , b. , rosolowski , m. , rodig , s. j. , holzmann , k. , stilgenbauer , s. , siebert , r. , bullinger , l. , rajewsky , k. , aug 2012 .synergy between pi3k signaling and myc in burkitt lymphomagenesis .cancer cell 22 ( 2 ) , 16779 . schmitz , r. , young , r. m. , ceribelli , m. , jhavar , s. , xiao , w. , zhang , m. , wright , g. , shaffer , a. l. , hodson , d. j. , buras , e. , liu , x. , powell , j. , yang , y. , xu , w. , zhao , h. , kohlhammer , h. , rosenwald , a. , kluin , p. , mller - hermelink , h. k. , ott , g. , gascoyne , r. d. 
, connors , j. m. , rimsza , l. m. , campo , e. , jaffe , e. s. , delabie , j. , smeland , e. b. , ogwang , m. d. , reynolds , s. j. , fisher , r. i. , braziel , r. m. , tubbs , r. r. , cook , j. r. , weisenburger , d. d. , chan , w. c. , pittaluga , s. , wilson , w. , waldmann , t. a. , rowe , m. , mbulaiteye , s. m. , rickinson , a. b. , staudt , l. m. , oct 2012 .burkitt lymphoma pathogenesis and therapeutic targets from structural and functional genomics .nature 490 ( 7418 ) , 11620 .sjblom , t. , jones , s. , wood , l. , parsons , d. , lin , j. , barber , t. , mandelker , d. , leary , r. , ptak , j. , silliman , n. , szabo , s. , buckhaults , p. , farrell , c. , meeh , p. , markowitz , s. , willis , j. , dawson , d. , willson , j. , gazdar , a. , hartigan , j. , wu , l. , liu , c. , parmigiani , g. , park , b. , bachman , k. , papadopoulos , n. , vogelstein , b. , kinzler , k. , velculescu , v. , 2006 .the consensus coding sequences of human breast and colorectal cancers .science 314 , 268274 .szappanos , b. , kovcs , k. , szamecz , b. , honti , f. , costanzo , m. , baryshnikova , a. , gelius - dietrich , g. , lercher , m. j. , jelasity , m. , myers , c. l. , andrews , b. j. , boone , c. , oliver , s. g. , pl , c. , papp , b. , jul 2011 .an integrated approach to characterize genetic interaction networks in yeast metabolism .nature genetics 43 ( 7 ) , 65662 .szendro , i. g. , franke , j. , de visser , j. a. g. m. , krug , j. , 2013 . predictability of evolution depends nonmonotonically on population size .proceedings of the national academy of sciences usa 110 , 571576 .wang , c. , tai , y. , lisanti , m. p. , liao , d. j. , 2011 .c - myc induction of programmed cell death may contribute to carcinogenesis : a perspective inspired by several concepts of chemical carcinogenesis .cancer biology and therapy 11 , 615626 .wood , l. d. , parsons , d. w. , jones , s. , lin , j. , sjoblom , t. , leary , r. j. , shen , d. , boca , s. m. , barber , t. , ptak , j. , silliman , n. , szabo , s. , dezso , z. , ustyanksky , v. , nikolskaya , t. , nikolsky , y. , karchin , r. , wilson , p. a. , kaminker , j. s. , zhang , z. , croshaw , r. , willis , j. , dawson , d. , shipitsin , m. , willson , j. k. v. , sukumar , s. , polyak , k. , park , b. h. , pethiyagoda , c. l. , pant , p. v. k. , ballinger , d. g. , sparks , a. b. , hartigan , j. , smith , d. r. , suh , e. , papadopoulos , n. , buckhaults , p. , markowitz , s. d. , parmigiani , g. , kinzler , k. w. , velculescu , v. e. , vogelstein , b. , 2007 .the genomic landscapes of human breast and colorectal cancers .science 318 ( 5853 ) , 11081113 .zech , l. , haglund , u. , nilsson , k. , klein , g. , 1976 .characteristic chromosomal abnormalities in biopsies and lymphoid - cell lines from patients with burkitt and non - burkitt lymphomas .int j cancer 17 , 4756 .
We investigate the dynamics of cancer initiation in a mathematical model with one driver mutation and several passenger mutations. Our analysis is based on a multi-type branching process: we model individual cells which can either divide or undergo apoptosis. In case of a cell division, the two daughter cells can mutate, which potentially confers a change in fitness to the cell. In contrast to previous models, the change in fitness induced by the driver mutation depends on the genetic context of the cell, in our case on the number of passenger mutations. The passenger mutations themselves have no or only a very small impact on the cell's fitness. While our model is not designed as a specific model for a particular cancer, the underlying idea is motivated by clinical and experimental observations in Burkitt lymphoma. In this tumor, the hallmark mutation leads to deregulation of the _myc_ oncogene, which increases the rate of apoptosis, but also the proliferation rate of cells. This increase in the rate of apoptosis hence needs to be overcome by mutations affecting apoptotic pathways, naturally leading to an epistatic fitness landscape. This model shows a very interesting dynamical behavior which is distinct from the dynamics of cancer initiation in the absence of epistasis. Since the driver mutation is deleterious to a cell with only a few passenger mutations, there is a period of stasis in the number of cells until a clone of cells with enough passenger mutations emerges. Only when the driver mutation occurs in one of those cells does the cell population start to grow rapidly. Keywords: cancer, modeling, somatic evolution, population dynamics
complex dynamical systems in science and engineering often involve multiple time scales , such as slow and fast time scales , as well as uncertainty caused by noisy fluctuations .for example , aerosol and pollutant particles , occur in various natural contexts ( e.g. , in atmosphere and ocean coasts ) and engineering systems ( e.g. spray droplets ) , are described by coupled system of differential equations .some particles move fast while others move slower , and they are usually subject to random influences , due to molecular diffusion , environmental fluctuations , or other small scale mechanisms that are not explicitly modeled .invariant manifolds are geometric structures in state space that help describe dynamical behaviors of dynamical systems .a slow manifold is a special invariant manifold , with an exponential attracting property and with the dimension the same as the number of slow variables .the reduced system on a slow manifold thus characterizes the long time dynamics in a lower dimensional setting , facilitating geometric and numerical investigation .existence for slow manifolds of stochastic dynamical systems with slow - fast time scales has been investigated recently .however , stochastic slow manifolds are difficult to depict or visualize . therefore ,in this paper , we approximate these random geometric invariant structures in the case of large time scale separation .we derive an asymptotic approximation for these stochastic manifolds , and illustrate the random slow manifold reduction by considering the motion of aerosol particles in a random cellular fluid flow .the reduced slow system , being lower dimensional , facilitates our understanding of particle settling .we comment that approximations for individual solution paths ( not a stochastic slow manifold ) for stochastic slow - fast systems have been well investigated .approximations for deterministic slow manifolds have also been considered .this paper is organized as follows .an approximation method for random slow manifolds is considered in [ slow888 ] , and the dynamics of aerosol particles in a random flow fieldis investigated in [ settle888 ] .we first examine the existence of a random slow manifold for a slow - fast stochastic dynamical system , then devise an approximation method for this slow manifold , and thus obtain a low dimensional , reduced system for the evolution of slow dynamics .we consider the following slow - fast system of stochastic differential equations ( sdes ) here and are respectively and matrices .the nonlinear functions and are -smooth and lipschitz continuous with lipschitz constants and , respectively . the parameter is a positive number and the parameter is small ( representing scale separation ) .the stochastic process is a two - sided -valued wiener process . when and are locally lipschitz but the system has a bounded ( i.e. , in mean square norm ) absorbing set , a useful trick is to cut - off the nonlinearities to zero outside the absorbing set , so that the new system has global lipscitz nonlinearities and has the same long time random dynamics as the original system .the existence of a random slow manifold for this system has been considered in but we adopt a method from our earlier work .we recall the definition of a random dynamical system ( rds ) in a probability space .let be a , flow , i.e. , additionally , the measure is supposed to be an invariant measure for , i.e. , for all . 
for a wiener process driving system , we take consisting of all continuous sample paths of on with values in and . on ,the flow is given by the wiener shift a measurable map is said to satisfy the cocycle property if a random dynamical system consists of a driving system and a measurable map with the cocycle property .introduce a banach space as our working space for random slow manifolds .for , define + \rightarrow r^n : \nu\ ; \text { is continuous and } \sup\limits_{t\leq 0 } |e^{\lambda t}\nu(t)|_{r^n } < \infty\big\},\]]and\rightarrow r^m : \nu\ ; \text { is continuous and } \sup\limits_{t\leq 0 } |e^{\lambda t}\nu(t)|_{r^m } < \infty\big\},\ ] ] + with the following norms respectively and let be the product banach space , with the norm for matrices and , we make the following assumptions : : there are constants , and , satisfying and , such that for every and , the following exponential estimates hold : : .+ in order to use the random invariant manifold framework , we transfer an sde system into a random differential equation ( rde ) system .introduce the following linear langevin system it is known that the following process is the stationary solution of the linear system moreover , similarly , is the stationary solution of the following linear sde system with and denoting , which is also a wiener process , and has the same distribution as , with .therefore , by a transformation at the second equal sign and then omitting the prime in , we have and moreover , by defining at the second equal sign below , we get the equations and indicate that and are identically distributed with and , respectively . and by and , and have the same distribution .we then introduce a random transformation where satisfies system .+ then the sde system is transferred into the following rde system , by the variation of constants formula , this rde system is further rewritten as as , we have the following estimation , for satisfying .+ letting , we get the expression of the rde system , we rescale the time by letting , from system and by we get , , \,\,\,x\in \mathbb{r}^n,\\\label{rde - var - y } y'(\tau\varepsilon ) & = & by + g(x(\tau\varepsilon ) , y(\tau\varepsilon)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega)),\,\ , y\in \mathbb{r}^m,\end{aligned}\ ] ] where . + we can rewrite these as the integral form below , \,ds,\\ y(\tau\varepsilon ) & = & \int_{-\infty}^\tau e^{b(\tau - s ) } g(x(s\varepsilon ) , y(s\varepsilon)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\,ds.\end{aligned}\ ] ] we now recall some basic facts about random slow manifolds and dimension - reduced systems , when the scale separation is sufficiently large . a random set is called a random slow manifold ( a special random inertial manifold ) for the system , if it satisfies the following conditions : + is invariant with respect to a random dynamical system , i.e. is globally lipschitz in for all and for any the mapping is a random variable .+ the distance of and tends to with exponential rate , for , as tends to infinite .a random slow manifold , which is lower dimensional , retains the long time dynamics of the original system , when is sufficiently small . 
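The stationary process eta entering the random transformation above can be generated numerically from the linear Langevin equation. The scalar sketch below is a stand-in built on stated assumptions: it takes a drift b/eps with b < 0 and noise of strength sigma/sqrt(eps) (a scaling chosen here only for illustration), integrates by Euler-Maruyama, and compares the empirical stationary variance with the Ornstein-Uhlenbeck value sigma^2/(2|b|), which is independent of eps.

```python
import numpy as np

rng = np.random.default_rng(2)
b, sigma, eps = -1.0, 0.5, 0.01      # b < 0 plays the role of B; values illustrative
dt, nsteps = eps / 50.0, 200000      # time step resolves the fast 1/eps scale

# Euler-Maruyama for  d eta = (b/eps) * eta * dt + (sigma/sqrt(eps)) * dW,
# a scalar stand-in for the linear Langevin equation whose stationary
# solution is used in the random transformation above.
eta = np.empty(nsteps)
eta[0] = 0.0
for n in range(nsteps - 1):
    eta[n + 1] = (eta[n] + (b / eps) * eta[n] * dt
                  + (sigma / np.sqrt(eps)) * np.sqrt(dt) * rng.standard_normal())

burn = nsteps // 10                  # discard the transient
print("empirical stationary variance:", eta[burn:].var())
print("Ornstein-Uhlenbeck value sigma^2/(2|b|):", sigma**2 / (2.0 * abs(b)))
```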
in , a random hadamard graph transformwas used to prove the existence of a random inertial manifolds , here we use lyapunov- perron method to achieve our result as in .assume that and hold and that there exists a such that .then , for sufficiently small , there exists a random slow manifold for the random slow - fast system .this proof is adapted from for our finite dimensional setting . for completeness, we include the essential part here . for a , we use the banach space as defined in the beginning of this section . + denote a nonlinear mapping note that is well - defined from .we will show that for every initial data , have a unique solution in . for , , we have that + \,ds\big|_{\mathbb{r}^n}\\{}&&+\sup\limits_{t\leq 0}e^{\lambda t}\big|\frac{1}{\varepsilon}\int_{-\infty}^te^{b\frac{t - s}{\varepsilon}}\big[g(x(s ) , y(s)+\sigma\eta^\varepsilon(\theta_s\omega))-g(\bar{x}(s ) , \bar{y}(s)+\sigma\eta^\varepsilon(\theta_s\omega))\big]\,ds\big|_{\mathbb{r}^m}\\ & \leq & \big|(x , y)-(\bar{x } , \bar{y})\big|_{c_\lambda } \sup\limits_{t\leq 0}\big\{e^{(\lambda+\alpha ) t}kl_f\int_t^0e^{-(\lambda+\alpha ) s}\,ds\big\}\\{}&&+\big|(x , y)-(\bar{x } , \bar{y})\big|_{c_\lambda } \sup\limits_{t\leq 0}\big\{\frac{1}{\varepsilon}e^{(\lambda-\frac{\beta}{\varepsilon } ) t}kl_g\int_{-\infty}^te^{(\frac{\beta}{\varepsilon}-\lambda)s}\,ds\big\}\\ & \leq & \big(\frac{kl_f}{\alpha+\lambda } + \frac{kl_g}{\beta-\varepsilon\lambda}\big)\big|(x , y)-(\bar{x } , \bar{y})\big|_{c_\lambda } .\end{aligned}\ ] ] the first inequality is by and the lipschitz continuity of and , while the second inequality comes from direct calculation .taking , which satisfies , we conclude that by the assumption , , and as .therefore , for small enough , .the contraction map theorem implies that for every , has a fixed point which is the unique solution of the differential equation system .moreover , the fixed point has the property denoting , we obtain with the help of inequality , we further have thus , is lipschitz continuous . by the fact that if and only if there exists and satisfies and , it follows that if and only if there exists such that , there exists a random slow manifold by the random transformation and noting that and have the same distribution , the dynamics on the slow manifold is now described by the following dimension - reduced system in ( from equation ( [ sde ] ) ) , for sufficiently small : we now approximate the slow manifolds for sufficiently small .expand the solution of system as and the initial conditions as and . 
with the help of and, we have the expansions \,ds\\{}&&+f_y(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\cdot\big[\varepsilon y_1(\tau)+\cdots\big]+\cdots\\&=&f(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\\{}&&+f_x(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\cdot \varepsilon\int_0^\tau\big[a \xi+f(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\big]\,ds\\{}&&+f_y(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\cdot\varepsilon y_1(\tau)+\cdots,\\ g(x(\tau\varepsilon ) , y(\tau\varepsilon)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))&=&g(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\\{}&&+g_x(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\cdot \varepsilon\int_0^\tau\big[a \xi+f(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\big]\,ds\\{}&&+g_y(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\cdot\varepsilon y_1(\tau)+\cdots.\end{aligned}\ ] ] inserting into , expanding and then matching the terms of the same power of , we get and (\tau)\\ \quad\quad\quad\quad+g_x(\xi , y_0(\tau)+\sigma\eta(\theta_\tau\psi_\varepsilon\omega))\big\{a\tau\xi+\int_0^\tau f(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\,ds\big\ } , \\ y_1(0)=\tilde h^{(1)}(\xi , \omega ) .\end{cases}\end{aligned}\ ] ] solving the two equations for and , we obtain and \,ds.\nonumber\\\end{aligned}\ ] ] with the help of and , the expression can be calculated as follows + g_y(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\varepsilon y_1(s)\big\}\,ds+\mathcal{o}(\varepsilon^2)\\ & = & \int_{-\infty}^0 e^{-bs}g(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\,ds \\{}&&+ \varepsilon\int_{-\infty}^0 e^{-bs}\big\{g_x(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))\big[as\xi+\int_0^s\big(f(\xi , y_0(r)+\sigma\eta(\theta_r\psi_\varepsilon\omega))\big)\,dr\big]\\{}&&\quad\quad\quad\quad\quad\quad + g_y(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))y_1(s)\big\}\,ds+\mathcal{o}(\varepsilon^2).\end{aligned}\ ] ] to get the second equation , we used and then used to replace .thus the zero and first order terms in , of in the random slow manifold for , are respectively and \\{}&&\quad\quad\quad\quad\quad\quad + g_y(\xi , y_0(s)+\sigma\eta(\theta_s\psi_\varepsilon\omega))y_1(s)\big\}\,ds.\end{aligned}\ ] ] that is , the slow manifolds of up to the order is represented by .this produces an approximation of the random slow manifold .therefore , we have the following result .[ slow999 ] assume that and hold and assume that there is a such that .then , for sufficiently small , there exists a slow manifold where with and expressed in and , respectively . with the approximated random slow manifold obtain the following dimension - reduced approximate random system in ( from equation ( [ slowdynamics ] ) ) , for sufficiently small : the motion of aerosol particles in a cellular flow field , stommel once observed that , ignoring particle inertial ( ) , some particles follow closed paths and are permanently suspended in the flow .rubin , jones and maxey showed that any small amount inertial ( small ) will cause almost all particles to settle . via a singular perturbation theory ,jones showed the existence of an attracting slow manifold . 
by analyzing the equations of motion on the slow manifold , especially heteroclinic orbits , they established the presence of mechanisms that inhibit trapping and enhance settling of particles .consider a model for the motion of aerosol particles in a cellular flow field , under random environmental influences where and are position and velocity , respectively , of a particle in the horizontal- vertical plane ( positive axis points to the settling / gravitational direction ) , is a velocity scale , and is the settling velocity in still fluid .moreover , are independent scalar wiener processes , is a positive parameter , and is the inertial response time scale of the particle .note that is the so - called cellular flow field velocity components ( horizontal and vertical ) on the domain ( a ` cell ' ) . as in section [ slow888] , this four dimensional sde system can be converted to the following rde system where denoting and , and we examine the motion of the particle . by using and , we get and owing to , \,ds\\{}&= & -a^2 \sin \xi_1 \cos \xi_1 + a v \sin \xi_1 \sin \xi_2 + a \sigma \cos \xi_1 \cos \xi_2 \int_{-\infty}^0 s e^s \,dw_s^1 \\ & & { } -a \sigma \sin \xi_1 \sin \xi_2 \int_{-\infty}^0 s e^s \,dw_s^2,\\ \tilde h_2^{(1)}(\xi_1,\xi_2,\omega ) & = & -a^2 \sin \xi_2 \cos \xi_2 + a v \cos \xi_1 \cos \xi_2 + a \sigma \sin \xi_1 \sin \xi_2 \int_{-\infty}^0 s e^s \,dw_s^1 \\ & & { } -a \sigma\cos \xi_1 \cos \xi_2 \int_{-\infty}^0 s e^s \,dw_s^2.\end{aligned}\ ] ] therefore , from ( [ slowdynamics ] ) , the dynamics on the random slow manifold is described by the following dimension - reduced system : note that is the particle position . for random slow manifold reduction , it is customary to use a notation different from the original one .the positive direction points toward the bottom of the fluid . in this section ,we conduct numerical simulations for this reduced or slow system - . when , , this reduced system becomes the classical system for the motion of particles in the cellular flow .when , indicates no noise , while a non - zero means noise is present . motivated by understanding the settling of particles as in ,we first calculate first exit time of particles , described by the random system - , from the domain and then examine how particles , exit or escape the fluid domain . to this end , we introduce two concepts : first exit time and escape probability .the first exit time is the time when a particle , initially at , first exits the domain : let be a subboundary .the escape probability , for a particle initially at , through a subboundary , is the likelihood that this particle first escapes the domain by passing through .we will take to be one of the four sides of the fluid domain .the escape probability of a particle through the top side means the likelihood that this particle settles directly to the bottom of the fluid ( note that the positive direction points to the bottom of the fluid ) . to compute the first exit time from the domain , we place particles on a lattice of grid points in and on its boundary , and set a large enough threshold time .as soon as a particle reaches boundary of , it is regarded as ` having exited ' from .if a particle leaves before , then the time of leaving is taken as the first exit time , but if it is still in the domain at time , we take as the first exit time .when a particle s first exit time is , we can see it as trapped in the cell . 
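the reduced slow system can be explored numerically with a standard euler-maruyama scheme. the sketch below is hedged: the drift uses the classical cellular velocity field (-A cos x1 sin x2, A sin x1 cos x2) plus a constant settling velocity W in the x2 direction as a stand-in for the leading-order reduced drift; the O(epsilon) inertial corrections and the paper's exact field are not reproduced, and all parameter values are invented for the example.

```python
import numpy as np

def euler_maruyama_paths(x0, A=1.0, W=0.3, sigma=0.1, dt=1e-3, n_steps=20_000, seed=0):
    """Euler-Maruyama integration of an illustrative leading-order reduced model
         dx1 = (-A*cos(x1)*sin(x2))     dt + sigma dW1
         dx2 = ( A*sin(x1)*cos(x2) + W) dt + sigma dW2
       (the cellular field and the way W enters are assumptions, not the paper's system)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = np.empty((n_steps + 1, 2))
    path[0] = x
    for k in range(n_steps):
        drift = np.array([-A * np.cos(x[0]) * np.sin(x[1]),
                           A * np.sin(x[0]) * np.cos(x[1]) + W])
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        path[k + 1] = x
    return path

if __name__ == "__main__":
    p = euler_maruyama_paths([1.0, 1.0])
    print("final position:", p[-1])
```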
in order to calculate the escape probability of a particle under noise through a subboundary, one of the four sides of the domain, we calculate a large number of paths for each particle, count how many of them exit through that subboundary, and take the ratio as the escape probability. we do this for particles placed on a lattice of grid points in the domain and on its boundary. when a particle reaches or is on a side subboundary, it is regarded as `having escaped through' that part of the boundary.

[figures omitted: particle trajectories for zero and non-zero settling velocity, with and without inertia and with and without noise; escape probability maps through each of the four side boundaries (settling direction / physical bottom boundary, physical top boundary, left boundary, right boundary) for several noise intensities; the same escape probability maps split at the deterministic heteroclinic orbit; the stable manifold (blue solid curve) and unstable manifold (blue dashed curve) of the slow system, with its two equilibrium points.]
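a minimal monte carlo sketch of the two quantities just defined, the first exit time and the escape probability through each side, is given below. it reuses the illustrative reduced drift of the previous sketch, caps trapped paths at a threshold time as described above, and attributes each exit to one of the four sides of the square cell; the domain, the parameter values and the number of paths are assumptions chosen only to keep the example small.

```python
import numpy as np

def exit_statistics(x0, drift, sigma=0.1, dt=2e-3, t_max=40.0,
                    n_paths=100, domain=(0.0, np.pi), seed=0):
    """Monte Carlo estimate of the mean first exit time from the square cell [lo, hi]^2
       and of the escape probabilities through its four sides, for a particle started
       at x0; paths still inside at t_max are counted with first exit time t_max."""
    rng = np.random.default_rng(seed)
    lo, hi = domain
    n_steps = int(t_max / dt)
    exit_times = np.full(n_paths, t_max)
    side_counts = {"left": 0, "right": 0, "top": 0, "bottom": 0}
    for p in range(n_paths):
        x = np.array(x0, dtype=float)
        for k in range(1, n_steps + 1):
            x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
            if not (lo <= x[0] <= hi and lo <= x[1] <= hi):
                exit_times[p] = k * dt
                if x[0] < lo:
                    side_counts["left"] += 1
                elif x[0] > hi:
                    side_counts["right"] += 1
                elif x[1] > hi:
                    side_counts["bottom"] += 1   # positive x2 points toward the fluid bottom
                else:
                    side_counts["top"] += 1
                break
    probs = {s: c / n_paths for s, c in side_counts.items()}
    return exit_times.mean(), probs

if __name__ == "__main__":
    A, W = 1.0, 0.3
    drift = lambda x: np.array([-A * np.cos(x[0]) * np.sin(x[1]),
                                 A * np.sin(x[0]) * np.cos(x[1]) + W])
    mean_tau, probs = exit_statistics([1.0, 1.0], drift)
    print("mean first exit time:", round(mean_tau, 2), "escape probabilities:", probs)
```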
when the settling velocity in still fluid is zero, particles are trapped in the cell either in circular motion (with no inertia) or in spiralling motion (with inertia), as shown in figure [ w0 ] (top) and (bottom), respectively. when the settling velocity in still fluid is non-zero, in the case with inertia absent and noise absent, the particles in the area surrounded by the heteroclinic orbit connecting the two equilibrium points are trapped inside it, with one equilibrium point as a center, while the particles in the remaining area settle to the bottom of the fluid. with an arbitrarily small inertial effect, the heteroclinic orbit breaks and this leads to the settling of almost all particles, with the equilibrium point becoming an unstable spiral. figure [ stable - unstable manifolds v01 ] shows the stable manifold (blue solid curve) and the unstable manifold (blue dashed curve) when inertia is present. for non-zero settling velocity in still fluid, when noise is absent, all particles settle to the fluid bottom; see figure [ w03 ] (top, middle). but when noise is present, some particles exit the cell not only by settling: figure [ w03 ] (bottom) shows that, with small noise, some particles indeed exit the cell through a vertical side boundary. in fact, when noise is present, all particles will exit, regardless of whether the settling velocity in still fluid is zero or non-zero. figure [ first exit time ] indicates that, with noise, all particles exit the fluid cell in finite time, almost surely. in the following we only consider the case with noise. figures [ ep - w0 ] and [ ep - w01 ] plot the escape probability through the four side boundaries, for zero and non-zero values of the settling velocity in still fluid. when a particle reaches or is on a side boundary, it is regarded as `having escaped through' that part of the boundary; in other words, particles starting on a side boundary have escape probability one through that boundary (this is visible in the figures). when the settling velocity is zero, the particles escape the cell through each of the four side boundaries with similar or equal likelihood (figure [ ep - w0 ]), as there is no preferred direction for the particles.
with non-zero settling velocity, particles almost surely do not escape through the right side boundary. in fact, the inertial particles either settle to the physical bottom or exit through the left side boundary. figure [ ep - w01 ] displays the escape probability, for non-zero settling velocity, through each of the four side boundaries of the fluid cell. although most particles settle (figure [ ep - w01 ] (a)), some particles escape the fluid cell through the left side boundary (figure [ ep - w01 ] (c)); see figure [ ep - heter - w01 ] for a split view of this phenomenon. to examine this phenomenon more carefully, we draw the stable manifold and the unstable manifold of the deterministic system in figure [ stable - unstable manifolds v01 ]. as shown in figure [ plot3 + mesh v01 ], the inertial particles with a significant likelihood of escaping through the left side boundary are near or on the stable manifold. in other words, some (but not all) inertial particles near or on this stable manifold are resistant to settling in the stochastic case. this resistance is quantified by the escape probability for a particle to leave the fluid cell through the left side boundary. more specifically, the difference between the inertial particle settling times for the deterministic case and a random case is shown in figure [ difference ]. we observe that the inertial particles near or on the stable manifold can have either a longer or a shorter settling time than in the deterministic case. this indicates that the noise can either delay or enhance the settling (although we do not know the reason), and the stable manifold is an agent facilitating this behaviour. however, the overall impact of noise appears to delay the settling in an averaged sense, for both noise intensities considered. let the settling velocity in still fluid be non-zero.
+ (i) in the classical case (no inertia and no noise), the particles enclosed by a heteroclinic orbit are trapped inside it and all the other particles settle to the bottom of this cellular fluid flow.
+ (ii) in the case with only a small inertia influence, the heteroclinic orbit breaks up to form a stable manifold and an unstable manifold, and the previously trapped particles then settle, i.e., all inertial particles settle.
+ (iii) however, when noise is present, although most inertial particles still settle, some particles near or on the deterministic stable manifold escape the fluid cell through the left side boundary with non-negligible likelihood. thus, inertial particle motions span two adjacent fluid cells in the random case, but are confined to single cells in the deterministic case. in fact, noise can either delay settling for some particles or enhance it for others, and the deterministic stable manifold is an agent facilitating this phenomenon. overall, noise appears to delay the settling in an averaged sense.
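the settling-time comparison described above can be mimicked with a small experiment: integrate the same illustrative reduced drift from a lattice of starting points, once without noise and once with noise, record the time to reach the bottom boundary (capped at a threshold), and average the difference. everything below (field, parameters, grid, number of repetitions) is an assumption chosen only to keep the sketch fast; it is not the paper's computation.

```python
import numpy as np

def settling_time(x0, A=1.0, W=0.3, sigma=0.0, dt=5e-3, t_max=40.0, rng=None):
    """Time for a particle started at x0 to reach the bottom boundary x2 = pi of the cell,
       under the illustrative reduced drift used before; capped at t_max if it has not settled."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for k in range(1, int(t_max / dt) + 1):
        drift = np.array([-A * np.cos(x[0]) * np.sin(x[1]),
                           A * np.sin(x[0]) * np.cos(x[1]) + W])
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
        if x[1] >= np.pi:               # positive x2 points toward the fluid bottom
            return k * dt
    return t_max                        # treated as trapped within the time horizon

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    grid = [(a, b) for a in np.linspace(0.4, 2.7, 4) for b in np.linspace(0.4, 2.7, 4)]
    det = np.array([settling_time(p, sigma=0.0) for p in grid])
    noisy = np.array([np.mean([settling_time(p, sigma=0.1, rng=rng) for _ in range(10)])
                      for p in grid])
    print("mean settling-time difference (noisy - deterministic): %.3f" % (noisy - det).mean())
```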
a method is provided for approximating random slow manifolds of a class of slow-fast stochastic dynamical systems. approximate, low-dimensional, reduced slow systems are thus obtained analytically in the case of sufficiently large time scale separation. to illustrate this dimension reduction procedure, the impact of random environmental fluctuations on the settling motion of inertial particles in a cellular flow field is examined. it is found that noise delays settling for some particles but enhances settling for others, with the deterministic stable manifold acting as an agent facilitating this phenomenon. overall, noise appears to delay the settling in an averaged sense. * key words : * random slow manifolds, dimension reduction, stochastic differential equations (sdes), approximation under large scale separation, inertial particles in flows * mathematics subject classifications ( 2010 ) * : 37h10, 37m99, 60h10
the birnbaum saunders ( ) distribution has received considerable attention in the last few years . it was proposed by and is also known as the fatigue life distribution .it describes the total time until the damage caused by the development and growth of a dominant crack reaches a threshold level and causes a failure .the random variable is said to have a distribution with parameters and , denoted by , if its probability density function is given by where , , ( shape parameter ) and ( scale parameter ) .it is positively skewed , the skewness decreasing with . for any constant , it follows that .it is also noteworthy that the reciprocal property holds : , which is in the same family of distributions .there are several recent articles considering the distribution ; see for example , , , , , , , , , , , , among others . introduced a log - linear regression model based on the distribution by showing that if , then has a sinh - normal distribution with shape , location and scale parameters given by , and , respectively , say .the regression model proposed by the authors is given by where is the logarithm of the observed lifetime , contains the observation on covariates ( ) , is a vector of unknown regression parameters , and .diagnostic tools for the regression model can be found in , and . in the regression model hypothesis testing inferenceis usually performed using the likelihood ratio , rao score and wald tests .a new criterion for testing hypothesis , referred to as the _ gradient test _ , has been proposed by .its statistic shares the same first order asymptotic properties with the likelihood ratio , wald and score statistics and is very simple when compared with the other three classic tests .in fact , wrote : `` the suggestion by terrell is attractive as it is simple to compute .it would be of interest to investigate the performance of the [ gradient ] statistic . '' to the best of our knowledge, however , there is no mention in the statistical literature on the use of the gradient test in regressions . in this paperwe compare the four rival tests from two different points of view .first , we invoke asymptotic arguments .we then move to a finite - sample comparison , which is accomplished by means of a simulation study .our principal aim is to help practitioners to choose among the different criteria when performing inference in regressions . on asymptotic grounds, it is known that , to the first order of approximation , the likelihood ratio , wald , score and gradient statistics have the same asymptotic distributional properties either under the null hypothesis or under a sequence of local alternatives , i.e. a sequence of pitman alternatives converging to the null hypothesis at a convergence rate . on the other hand ,up to an error of order the corresponding criteria have the same size properties but their local powers differ in the term. a meaningful comparison among the criteria can be performed by comparing the nonnull asymptotic expansions to order of the distribution functions of these statistics under a sequence of pitman alternatives . in this regard, we can benefit from the work by , and . 
derived the nonnull asymptotic expansions up to order for the densities of the likelihood ratio and wald statistics , while an analogous result for the score statistic was obtained by .recently , the asymptotic expansion up to order for the density of the gradient statistic was derived by .the expansions obtained by these authors are extremely general but it can be very difficult or even impossible to particularize their formulas for specific regression models .as we shall see below , we have been able to apply their results for the regression model .the rest of the paper is organized as follows .section [ tests ] briefly describes the likelihood ratio , wald , score and gradient tests . in section [ inference_bs ] these testsare applied for testing hypotheses on the parameters of the regression model . in section [ main_result ]we obtain and compare the local powers of the tests .monte carlo simulation results on the finite - sample performance of the tests are presented and discussed in section [ mcsimulation ] .section [ application ] contains an application to a real fatigue data set .finally , section [ conclusions ] discusses our main findings and closes the paper with some conclusions .consider a parametric model with corresponding log - likelihood function , where is a -vector of unknown parameters .the dimensions of and are and , respectively .suppose the interest lies in testing the composite null hypothesis against , where is a specified vector .hence , is a vector of nuisance parameters .let and denote the score function and the fisher information matrix for , respectively .the partition for induces the corresponding partitions where is the inverse of .let and denote the maximum likelihood estimators of under and , respectively .the likelihood ratio ( ) , wald ( ) , score ( ) and gradient ( ) statistics for testing versus are given by respectively , where , and .the limiting distribution of , , and is under and , i.e. a noncentral chi - square distribution with degrees of freedom and an appropriate noncentrality parameter , under .the null hypothesis is rejected for a given nominal level , say , if the test statistic exceeds the upper quantile of the distribution . clearly , has a very simple form and does not involve knowledge of the information matrix , neither expected nor observed , and any matrix , unlike and .in what follows , we shall consider the tests which are based on the statistics , , and in the class of regression models for testing a composite null hypothesis .the log - likelihood function for the vector parameter from a random sample obtained from model ( [ eq1 ] ) , except for constants , can be written as where /2) ] and , for .it is assumed that the model matrix has full column rank , i.e. , rank .the score function and the fisher information matrix for are , respectively , given by where , , , , with and , denoting the _ error function _ : ( see , for instance , * ? ? ?* ) . from the block - diagonal form of we havethat and are globally orthogonal .the hypothesis of interest is , which will be tested against the alternative hypothesis , where is partitioned as , with and . 
here, is a fixed column vector of dimension .the partition for induces the corresponding partitions , with and , with the matrix partitioned as .the likelihood ratio , wald , score and gradient statistics for testing can be expressed , respectively , as where , , and .the limiting distribution of all these statistics under is .notice that , unlike the wald and score statistics , the gradient statistic does note involve the error function .now , the problem under consideration is that of testing a composite null hypothesis against , where is a positive specified value for , and acts as a nuisance parameter .the four statistics are expressed as follows : where , with and representing the unrestricted and restricted maximum likelihood estimators of under and , respectively .in this section we shall assume the following local alternative hypothesis , where with for .we follow the notation in .let where it then follows that , , for , and , where and are the elements of the matrices and , respectively , and and are the elements of the matrices and , respectively .additionally , let where is a identity matrix .the nonnull distributions of the statistics , , and under pitman alternatives for testing in the regression model can be expressed as where is the cumulative distribution function of a non - central chi - square variate with degrees of freedom and non - centrality parameter . here , andthe coefficients s ( and ) can be written as where the s are defined as , , , , etc . , and is the ( )th element of inverse of .the coefficients are obtained from , for .after some algebra , it is possible to show that , in the regression model , therefore , for and , and we can write this is a very interesting result , which implies that the likelihood ratio , score , wald and gradient tests for testing the composite null hypothesis have exactly the same local power up to an error of order .we now turn to the problem of testing hypotheses on , the shape parameter .the nonnull asymptotic distributions of the statistics , , and for testing under the local alternative , where is assumed to be , is with .the s for the test of are easy to obtain and are given by , , , , , , , , , , and , where .the coefficients are obtained from , for .we have that for .after some algebra the coefficients reduce to it should be noticed that the above expressions depend on the model only through and the rank of the model matrix ; they do not involve the unknown parameter .we now present an analytical comparison among the local powers of the four tests for testing the null hypothesis .let be the power function , up to order , of the test that uses the statistic , for .we have for .it is well known that where is the probability density function of a non - central chi - square random variable with degrees of freedom and non - centrality parameter . from ( [ diff_power ] ) and ( [ diff_g ] ), we have hence , we arrive at the following inequalities : if , and if .the most important finding obtained so far is that the likelihood ratio , score , wald and gradient tests for testing the null hypothesis share the same null size and local power up to an error of order . to this order of approximationthe null distribution of the four statistics is .therefore , if the sample size is large , type i error probabilities of all the tests do not significantly deviate from the true nominal level , and their powers are approximately equal for alternatives that are close to the null hypothesis . 
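to make the four criteria concrete, the toy example below computes the likelihood ratio, wald, score and gradient statistics in the simplest possible setting, a one-parameter exponential model with no nuisance parameters, where all four have closed forms; it is purely illustrative and is not the birnbaum saunders regression model of this section.

```python
import numpy as np

def four_statistics(x, lam0):
    """Likelihood ratio, Wald, score and gradient statistics for H0: lambda = lam0
       in an i.i.d. Exponential(lambda) sample (a one-parameter toy illustration)."""
    n, s = x.size, x.sum()
    lam_hat = n / s                                   # unrestricted MLE
    loglik = lambda lam: n * np.log(lam) - lam * s    # log-likelihood up to a constant
    score0 = n / lam0 - s                             # score U(lam0)
    info = lambda lam: n / lam**2                     # Fisher information
    lr = 2.0 * (loglik(lam_hat) - loglik(lam0))
    wald = (lam_hat - lam0)**2 * info(lam_hat)
    score = score0**2 / info(lam0)
    grad = score0 * (lam_hat - lam0)                  # gradient statistic: U(lam0)*(MLE - lam0)
    return lr, wald, score, grad

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x = rng.exponential(scale=1.0 / 1.2, size=50)     # true lambda = 1.2
    print([round(v, 3) for v in four_statistics(x, lam0=1.0)])
```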
the natural question now is how these tests perform when the sample size is small or of moderate size , and which one is the most reliable . in the next section , we shall use monte carlo simulations to put some light on this issue .in this section we shall present the results of a monte carlo simulation in which we evaluate the finite sample performance of the likelihood ratio , wald , score and gradient tests .the simulations were based on the model where and , .the covariate values were selected as random draws from the uniform distribution and for fixed those values were kept constant throughout the experiment .the number of monte carlo replications was 15,000 , the nominal levels of the tests were = 10% , 5% and 1% , and all simulations were performed using the ox matrix programming language .ox is freely distributed for academic purposes and available at http://www.doornik.com .all log - likelihood maximizations with respect to and were carried out using the bfgs quasi - newton method with analytic first derivatives through maxbfgs subroutine .this method is generally regarded as the best - performing nonlinear optimization method .the initial values in the iterative bfgs scheme were for and for , where first , the null hypothesis is , which is tested against a two - sided alternative .the sample size is , and .the values of the response were generated using .the null rejection rates of the four tests are presented in table [ tab1 ] .it is evident that the likelihood ratio ( ) and wald ( ) tests are markedly liberal , more so as the number of regressors increases .the score ( ) and gradient ( ) tests are also liberal in most of the cases , but much less size distorted than the likelihood ratio and wald tests in all cases . for instance , when , and , the rejection rates are 10.12% ( ) , 12.77% ( ) , 7.12% ( ) and 7.32% ( ) .it is noticeable that the score test is much less liberal than the likelihood ratio and wald tests and slightly less liberal than the gradient test .the score and gradient tests are slightly conservative in some cases .additionally , the wald test is much more liberal than the other tests .similar results hold for .table [ tab2 ] reports results for and and sample sizes ranging from 15 to 200 .as expected , the null rejection rates of all the tests approach the corresponding nominal levels as the sample size grows .again , the score and gradient tests present the best performances . .null rejection rates ( % ) ; = 0.5 and 1.0 , with . [cols="^,^,^,^,^,^,^,^,^,^,^,^,^ " , ] we now turn to the finite - sample power properties of the four tests . the simulation results above show that the tests have different sizes when one uses their asymptotic distribution in small and moderate - sized samples . in evaluating the power of these tests ,it is important to ensure that they all have the correct size under the null hypothesis . to overcome this difficulty, we used 500,000 monte carlo simulated samples , drawn under the null hypothesis , to estimate the exact critical value of each test for the chosen nominal level .we set , , and .for the power simulations we computed the rejection rates under the alternative hypothesis , for ranging from to .figure [ figpower ] shows that the power curves of the four tests are indistinguishable from each other .as expected , the powers of the tests approach 1 as grows .power simulations carried out for other values of , and showed a similar pattern . 
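the monte carlo experiment can be mimicked on a much smaller scale in a few lines. the sketch below is a rough stand-in for the ox implementation described above: it generates sinh-normal errors through the log-bs representation eps = 2 arcsinh(alpha*z/2), fits the restricted and unrestricted models numerically with bfgs, uses a finite-difference score in place of the analytic one, and estimates the null rejection rate of the gradient test for a single regression coefficient; sample sizes, replication counts and all numeric choices are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def negloglik(params, y, X):
    """Negative log-likelihood (up to a constant) of the sinh-normal (log-BS) regression
       y_i = x_i' beta + eps_i with eps_i ~ SN(alpha, 0, 2); alpha is log-parametrised."""
    beta, alpha = params[:-1], np.exp(params[-1])
    r = (y - X @ beta) / 2.0
    xi2 = (2.0 / alpha) * np.sinh(r)
    return -np.sum(np.log(1.0 / alpha) + np.log(np.cosh(r)) - 0.5 * xi2**2)

def gradient_stat(y, X, j):
    """Gradient statistic for H0: beta_j = 0, with a numerical score at the restricted fit."""
    p = X.shape[1]
    full = minimize(negloglik, np.zeros(p + 1), args=(y, X), method="BFGS")
    Xr = np.delete(X, j, axis=1)
    restr = minimize(negloglik, np.zeros(p), args=(y, Xr), method="BFGS")
    theta_tilde = np.append(np.insert(restr.x[:-1], j, 0.0), restr.x[-1])
    eps = 1e-6
    e = np.zeros(p + 1); e[j] = eps
    # score of the full log-likelihood (not the negative one) at the restricted estimate
    u_j = (negloglik(theta_tilde - e, y, X) - negloglik(theta_tilde + e, y, X)) / (2 * eps)
    return u_j * (full.x[j] - 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n, alpha = 40, 0.5
    X = np.column_stack([np.ones(n), rng.uniform(size=n), rng.uniform(size=n)])
    rejections, n_rep = 0, 200                         # small for illustration; the paper uses 15,000
    for _ in range(n_rep):
        eps_i = 2.0 * np.arcsinh(alpha * rng.normal(size=n) / 2.0)
        y = X[:, :2] @ np.array([1.0, 0.5]) + eps_i    # beta_2 = 0, so the null is true
        if gradient_stat(y, X, j=2) > chi2.ppf(0.95, df=1):
            rejections += 1
    print("empirical size at the 5% level:", rejections / n_rep)
```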
, , and .,width=377,height=283 ] overall , in small to moderate - sized samples the best performing tests are the score and the gradient tests .they are less size distorted than the other two and are as powerful as the others .we also performed monte carlo simulations considering hypothesis testing on . to save space ,the results are not shown .the score and gradient tests exhibited superior behavior than the likelihood ratio and wald tests .for example , when , , and , we obtained the following null rejection rates : 9.99% ( ) , 15.89% ( ) , 5.29% ( ) and 6.99% ( ) .again , the best performing tests are the score and gradient tests .this application focuses on modeling the die lifetime ( ) in the process of metal extrusion .the data were taken from . according to the authors , `` the estimation of tool life ( fatigue life ) in the extrusion operation is important for scheduling tool changing times , for adaptive process control and for tool cost evaluation . ''the authors noted that `` die fatigue cracks are caused by the repeat application of loads which individually would be too small to cause failure . ''the regression model is then appealing in this context since the main motivation for the distribution is the fatigue failure time due to propagation of an initial crack .we consider the following regression model : where , , and the covariates are ( friction coefficient ) , ( angle of the die ) and ( work temperature ) , for .we wish to test the significance of the interaction effects , i.e. , the interest lies in testing . the likelihood ratio , wald , score and gradient test statistics equal ( -value : 0.094 ) , ( -value : 0.045 ) , ( -value : 0.162 ) and ( -value : 0.157 ) , respectively .hence , the null hypothesis is rejected at the 10% nominal level when inference is based on the likelihood ratio or the wald test , but the opposite decision is reached when either the score or the gradient test is used . recall that our simulation results indicated that the likelihood ratio and wald tests are markedly liberal in small samples ( here , ) , which leads us to mistrust the inference delivered by the likelihood ratio and wald tests .therefore , we removed the interaction effects from the model as indicated by the score and gradient tests .the model containing only main effects is for .the null hypothesis is strongly rejected by the four tests at the usual significance levels .all tests also suggest the individual and joint exclusions of the friction coefficient and angle of the die from the model .hence , we end up with the regression model , for .the maximum likelihood estimates of the parameters are ( standard errors in parentheses ) : , and .the regression model is becoming increasingly popular in lifetime analyses and reliability studies . 
in this paper , we dealt with the issue of performing hypothesis testing concerning the parameters of this model .we considered the three classic tests , likelihood ratio , wald and score tests , and a recently proposed test , the gradient test .for the discussion that follows , let us concentrate on tests regarding the regression parameters , which are , in general , of primary interest .the four tests have the same distribution , under either the null hypothesis or a sequence of local alternatives , up to an error of order , as we showed .our monte carlo simulation study added some important information .it revealed that the likelihood ratio and the wald tests can be remarkably oversized if the sample is small .the score and the gradient tests are clearly much less size distorted than the other two tests .our power simulations suggested that all the four tests have similar power properties when estimated correct critical values are used .overall , this is an indication that the score and the gradient tests should be prefered . at this point ,a discussion on small - sample corrections for the classic tests is in order .a bartlett correction for the likelihood ratio statistic and a bartlett - type correction for the score statistic were derived in and , respectively ; see also cordeiro and ferrari ( 1991 ) .the corrected statistics have the following interesting properties : ( i ) the uncorrected and corrected statistics have the same asymptotic distribution under the null hypothesis ; ( ii ) the order of the error of the approximation for the distribution of the test statistics by is smaller for the corrected statistics than for the uncorrected statistics ; ( iii ) the corrections have no effect on the term of the local power of the corresponding tests ; ( iv ) simulation results in and show that these corrections reduce the size distortion of the tests and that the best performing test in small and moderate - sized samples is the test which uses the corrected score statistic .therefore , the bartlett - type - corrected score test is an excellent alternative to the tests under consideration in the present paper .the slight disadvantage of such an alternative is the extra computational burden involved in computing the bartlett - type correction .we computed the corrected versions of likelihood ratio and score statistics for the hypotheses tested in the real data application presented in section [ application ] .recall that the hypothesis of no interaction effects is rejected by the likelihood ratio and wald tests , but not rejected by the score and gradient tests .it is noteworthy that the decision reached by either the corrected likelihood ratio test or the corrected score test is in agreement with that obtained by the later two tests and in disagreement with the likelihood ratio and wald tests , which tend to reject the null hypothesis much more often than indicated by the significance level . finally , our overall recommendations for practitioners when performing testing inference in regressions are as follows .the score test or the gradient test should be prefered as both perform better than the likelihood ratio and wald tests in small and moderate - sized samples .while the gradient test is a little more liberal than the score test , it is easier to calculate .the bartlett - type corrected score test is a further better option although it requires a small extra computational effort .we gratefully acknowledge grants from fapesp and cnpq ( brazil ) .balakrishnan , n. , leiva , v. 
, lópez , j. ( 2007 ) . acceptance sampling plans from truncated life tests from generalized birnbaum saunders distribution . _ communications in statistics simulation and computation _ * 36 * , 643-656 . leiva , v. , sanhueza , a. , angulo , j.m . a length - biased version of the birnbaum saunders distribution with application in water quality . _ stochastic environmental research and risk assessment _ * 23 * , 299-307 . rao , c.r . score test : historical review and recent developments . in _ advances in ranking and selection , multiple comparisons , and reliability _ , n. balakrishnan , n. kannan and h. n. nagaraja , eds . birkhäuser , boston .
the birnbaum saunders distribution has been used quite effectively to model times to failure for materials subject to fatigue and for modeling lifetime data . in this paper we obtain asymptotic expansions , up to order and under a sequence of pitman alternatives , for the nonnull distribution functions of the likelihood ratio , wald , score and gradient test statistics in the birnbaum saunders regression model . the asymptotic distributions of all four statistics are obtained for testing a subset of regression parameters and for testing the shape parameter . monte carlo simulation is presented in order to compare the finite - sample performance of these tests . we also present an empirical application . birnbaum saunders distribution , fatigue life distribution , gradient test , lifetime data , likelihood ratio test , local power , score test , wald test .
in this paper we use the trade credit network of italian firms to test a model of `` many - to - one '' contagion of economic growth or economic crisis . academic research on inter - firm trade credit networksis still in its infancy , as the data on trade - credit transactions is not easily accessible .an exception is the analysis of the trade credit network of japanese firms , which have been rather extensively characterized by , for example , tamura et al . , miura et al . and watanabe et al .. our approach to direct contagion of economic growth rate is built on the assumptions based on the previous results published in various economic literature on balance sheet contagion ( e.g. kiyotaki & moore , boissay , petersen & rajan , economic growth schumpeter ) , and from our own previously developed methods and models following complex systems approach ( solomon & richmond , challet et al . ) . when many firms simultaneously borrow from and lend to each other , and in particular when these firms are speculative and dependent on the credit flow , shocks to the liquidity of some firms may cause the other firms to also get into financial difficulties . as obvious as this argument might sound, it is very difficult to prove the effect of direct contagion in the data , exactly for the reason that the linearity is lost as soon as the firms are simultaneously interacting with each other .the `` many - to - one '' contagion relies on the hypothesis that the change in the annual sales of a supplier follows the change in the * mesoscopically aggregated demand * , i.e. that the yearly growth of sales of a supplier would be proportional to the yearly growth of demand of its customers . assuming that the growth of purchases of all customers of a supplier in a yearly period is known to us ,should nt we be able to predict the growth in the sales of the supplier ?indeed we should , as long as : * the linkages between the customers and the supplier are constant over the longer period ( at least two years , as the growth rate of financial indicators reported in balance or profit and loss statements is available on yearly basis ) * the growth of demand ( purchases ) of a customer is assumed to be uniformly distributed among all its suppliers . assuming that the above conditions are satisfied, we compare the prediction of the growth of sales with the real growth as measured from the profit and loss statements , and so we can test our hypotheses of growth contagion from customers to suppliers .the paper is organized as follows . in section [ sec : theory ] we describe the nature and applications of different interaction mechanisms that can be involved in the self - amplifying auto - catalytic loops in supporting both peer interactions and the bi - directional feedback between the micro and the macro structures of the economy . following that, we describe the possible methodologies that could be used in order to empirically show the existence and operation of an auto - catalytic feedback . in section[ sec : data ] , we give the detailed description of the data that were used for the network model and internal properties of the nodes . in that sectionthe reader can also find the definitions of the variables and the algebraic notations . in section [ sec :results ] , we elaborate on the empirical results and provide their interpretation and implications . 
finally we discuss the results in section [ sec : discussion ] .one of the major issues in economics is to understand how relativelly small and temporary endogeneous changes in technology or wealth distribution may generate macroscopic effects in aggregate productivity , asset prices etc . for this purpose , it is necessary to identify self - amplifying mechanisms , filtering out the main mechanisms that may trigger dynamics leading to a systemic change , from other interactions destined to drown in the noise of local , short lived perturbations .this implies that the vast majority of microeconomic interactions that may affect the macroeconomic / systemic level do so by a kind of auto - catalytic positive feedback loop .this idea was already proposed in various contexts , but was often dismissed in the absence of concrete mechanisms that realize it .below , we list three mechanisms that might effectively amplify microscopic events to macroscopic dynamics in economic systems . in this paperwe deal only with the first mechanism , with the second one we dealt in , and the third mechanism we might tackle in the future . [ [ peer - to - peer - one - to - many - and - many - to - one - interactions - between - firms - in - the - network ] ] ` peer - to - peer ' , ` one to many ' and ` many - to - one ' interactions between firms in the network + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the domino effect , or contagion can be best understood using powerful mathematical and statistical - mechanics tools developed in percolation theory .they allow the rendition of precise predictions that correspond to real world stylized facts : macroscopic transitions caused by minute parameter changes , fractal spatial and temporal propagation patterns , delays in growth or crisis diffusion between economic sectors or geographical regions . for a formal discussion of social and market percolation modelsthe reader is referred to goldenberg et al . . in this paperwe deal with the many - to - one contagion principle ( in the cases when a supplier has a single customer it reduces to peer - to - peer principle ) .the peers are tied financially , and also physically by the goods they pass .we introduce the assumption that the peers are correlated though their growth rate and we empirically test it .we attempt and succeed to find only partial evidence in our data that the many - to - one is responsible for the propagation of the growth rate on the trade network consisting of suppliers and their customers .this mechanism is only partly responsible for the congation of growth rate , and other mechanisms , such as the following ones , should also be considered .[ [ inter - scale - macro - to - micro - reaction - between - firms - and - the - system ] ] inter scale macro - to - micro reaction between firms and the system + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + for a long while , one of the drawbacks of the dynamical models inspired by physics was the absence of interaction between scales .a model that introduces this interaction was introduced by solomon and golo .this is a financial model extending the ideas of minsky , where not only the fate of individual firms ( e.g. 
failure ) influences the system state ( risk aversion leads to rise in corporate interest rates ) , but also the state of the system is feeds back onto its own components ( e.g. a rise in interest rate leads to more failures ) .together with contagion across the network , the model generates a bounty of predictions that agree with the stylized facts . for example : delaying or arresting the propagation of distress by targeted intervention in key individual components or in system properties ( such as the interest rate ) .this model has been confronted with empirical data in and provided a significant contribution in the interpretation of the economic collapse in 2008 .[ [ self - interaction - firms - acting - upon - themselves ] ] self - interaction : firms acting upon themselves + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a model for self - dependent reproduction was proposed by shnerb et al . , generalizing on the ideas of malthus that proliferation in the microscopic level in stochastic systems leads to the spontaneous emergence of a collection of adaptive objects .the model is analytically tractable by statistical - mechanics method as mentioned above .it generates a host of qualitative ( phase transitions ) and quantitative predictions about the macroscopic behavior of the system .these predictions were precisely confirmed by empirical measurements in many cases : crossing exponentials between decaying and emerging economic sectors after a shock , identity of the wealth inequality ` pareto - zipf ' exponent and the market instability exponent , etc .the self - interaction mechanisms are not in the scope of this paper , and the microeconomics models of firms are not considered as well . in the analysis of contagion , causality is the key .the mechanisms we sketched in the preceding subsection call for testing , but the analysis should account for causality , rather than correspondence . the microscopic behavior of agents in a network should be revealed in the structure of the _ links _ between the agents and their _ interaction _ patterns . in a financial network ,the visible communication between agents is through the invoices they issue and the payments they make .the structure of links between the nodes in a network is formally termed the ` topology of the network ' .the metrics ( indicators ) typically used for measuring topology are for example : node degree being the number of connections from / to each node , clustering being the number of distinct groups of nodes , and connected - component sizes .however , the description of the topology is not the subject of our paper .we aim to justify the existence of the network and the application of the network analysis by trying to measure the significance of the interaction across the network nodes . in our work ,we focus on a selection of certain neighborhoods that consist of buyers from a single supplier .thus , the clustering coefficients and component size are used only for validating that the selection process does not destroy the statistical properties of the complete sample .we are taking into account that the properties of firms influence the interaction between them .in financial networks , the strength and the interaction patterns can not be interpreted without understanding the level of financial exposure to risk that the agents ( firms ) are in . 
the financial risk of providing and extendingthe trade credit relation is an interplay between a firm s own liquidity position , its social neighborhood ( its industry and its particular buyers and suppliers ) , and systemic effects such as the interest rate . since the risk exposure of a firm can not be directly measured , the ability to meet obligations subject to the environment and the system we have used a quantity available in our data termed the ` rating score ' .this quantity will be defined in the data section below .the data on individual firms come from a dataset ( further on abbreviated as bs ) of italian limited liability companies end - of - year balance sheet and profit & loss statements , which is a part of a proprietary database .the network is assembled from the sales invoices issued by suppliers to their customers when they sell an item .some of these invoices were presented to a bank in order to acquire trade - credit ( tc ) .the borrower in most cases was the seller in the supplier - customer pair .it is these invoices that the bank recorded and which we have been able to analyse . in this studywe combined the two datasets ( bs ) and ( tc ) in order to select the suppliers that are most appropriate for this analysis .[ [ balance - sheets - database ] ] balance sheets database + + + + + + + + + + + + + + + + + + + + + + + in italy , all limited liability firms are obliged to submit their annual financial report ( balance sheet ) to the local chamber of commerce .items contained in the firm s balance sheet are assets and liabilities of the firm such as : equity , net - sales , accounts receivable , inventory , bank loans , accounts payable , financial costs , etc .other than balance data , these reports contain financial ratios .financial ratios are ratios of quantities within the balance sheet items , such as acid test or the receivables conversion period and their purpose is to help quickly estimate the financial status of the firm .the balance sheets are collected and stored by an external agency .the firms , no matter whether defaulting or non - defaulting at the end of the period , are ranked with a ` rating score ' ranging from 1 to 9 in increasing order of default probability : 1 is attributed to firms that are predicted to be highly solvent , and 9 identifies firms displaying a serious risk of default .notice that the ranking is an ordinal : firms rated as 9 are not implied to have 9 times the probability of defaulting as compared to firms rated 1 .a good description of the rating score is given in a paper by bottazzi et al .for the purpose of our analysis , and in accordance to the practitioners behavior , we divide all companies into three groups , using the rating score : one group with an easy access to bank credit ( rating 1 - 3 ) , one with the access to bank credit ( abc ) at medium risk ( rating 4 - 6 ) and the third group at high risk and little or no access to bank credit ( rating 7 - 9 ) .l | l rating 1 - 3 & abc a + rating 4 - 6 & abc b + rating 7 - 9 & abc c [ [ trade - credit - data ] ] trade credit data + + + + + + + + + + + + + + + + + the trade credit database ( tc ) contains all inter - firm delayed payment transactions during 2007 that were intermediated by a large italian bank .this bank covers about 15% of the entire trade credit in italy , according to the official statistics of the bank of italy . 
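the three-way grouping of the rating score described above translates directly into a small helper; the function below is a trivial illustration of that mapping (cut-offs 1-3, 4-6, 7-9 as in the text), with the labels 'a', 'b', 'c' used as stand-ins for the abc notation.

```python
def abc_class(rating: int) -> str:
    """Map the 1-9 rating score to the access-to-bank-credit (abc) groups used in the text:
       1-3 -> 'a' (easy access), 4-6 -> 'b' (medium risk), 7-9 -> 'c' (high risk, little access)."""
    if not 1 <= rating <= 9:
        raise ValueError("rating score must be between 1 and 9")
    if rating <= 3:
        return "a"
    if rating <= 6:
        return "b"
    return "c"

if __name__ == "__main__":
    print([(r, abc_class(r)) for r in range(1, 10)])
```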
[[ algebraic - notation ] ] algebraic notation + + + + + + + + + + + + + + + + + + we use the symbols and to mark a supplier and a customer , respectively .transactions between a supplier and a customer will be presented by magnitude and direction of the money flow . a general event of goods or services sold to a customercan be graphically described by .the 2007 tc ( trade credit ) records are invoices that account for goods / services supplied by firm to firm that were presented to the bank for discounting in 2007 .these transactions were booked as accounts receivable in firm s assets , and as accounts payable in firm s liabilities .we can write the sum of all payable trade invoices and cash payments from a customer to a supplier accounting for purchases of goods or services in 2007. the total invoices used by firm as collateral on a credit line in 2007 .we may also define the following notation from the bs ( balance sheet ) records of firm : : balance sheet item of total sales ( cash and credit ) of firm in year y : balance sheet item of total purchases ( cash and credit ) of firm in year y [ [ testing - for - completeness - of - firm - level - information ] ] testing for completeness of firm - level information + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + to ensure that we have sufficient coverage in the tc database of each firm s sales , we selected the firms for which the expression is greater than a pre - defined value .we call this the ` matching threshold ' or ` matching ' .we vary the threshold between 0 and 1 in order to create the best sample for our empirical analysis .we choose a representative sample with high level of completeness by setting the matching threshold so it is the maximal value that still renders a single large connected component of size , and that the consequent connectivity of firms spans from 1 to approximately the size of the component ( ) .the matching ratio reaches values greater than than 1 ( more than 100% completeness ) , perhaps due to misalignment of the time windows between the trade credit network and balance sheet data .this will be discussed below .the analysis in this paper is , therefore , restricted to the sample of suppliers that have a matching proportion of 0.8 and up to 1.2 ( i.e. 80% to 120% , or formally ) .the reason to prefer a range with a top - cap is that much larger proportion signals to us a mismatch : that the sample of invoices for the selected firm may be incorrect .possibly due to an error in invoice registration .we did not expect to retrieve a large network from the firms exhibiting high matching proportion .several factors reduce the matching .the two most prominent ones are : * trade credit data do not record changes in cash holdings .they do , however , record an accounts receivable on the supplier s side and an accounts payable on the customer side .when accounts receivables are cashed by the customer , a ` sale ' item will be booked to the supplier .now , outstanding invoices and payments made in 2007 may have been booked as accounts receivables in 2006 .we estimate a delay of 3 months on average for discounts and 9 months for payments ( cf . the misalignment of the time frames ) . 
*the sales ( ) are all sales of the firm including sales performed in other monetary channels .the tc database holds information on invoices only .out of firms that were initially available both as creditors in the tc data ( have incoming links ) and in the bs data ( have sales in ) , only 671 companies fulfill the matching range . loweringthis threshold would grow the sample exponentially but at the cost of unavoidably entering suppliers with lesser proportions of their total debt owed . in table [ tab : netparams ] we list some statistics of this network , assuming that the links are directional . out of the total 671 suppliers , 190 companies are of reasonable size in accordance with bank practices ( eur ) ..some basic parameters of the subset network .multi - edge node pairs give the number of node - pairs that link in both directions ( reciprocal ) .comparing the number of customers to the number of links presents a picture of many tree - like subgraphs connected to each other with minimal number of loops . [ cols=">,^",options="header " , ] in our inter - firm network , the in - degree of a supplier is the number of its customers . in a previous study , miura et al . obtained power laws of the in- and out - degree distributions , with exponents of . in our network , we were able to confirm a similar finding of , i.e a negative exponent .as for the in - degree vs. size , a statistically significant correlation was recovered in our sample with a slope of . in the japanese network of miura et al ., a slope of was obtained .both results appear in figure [ fig : indegree ] .0.5 ) , title="fig : " ] 0.5 ) , title="fig : " ] in the second observation , the correlation between sales in 2007 and in - degree ( panel [ subfig : indegreevssales ] ) pearson s correlation coefficient is .the broken line marks a net - sales of 1 million eur .the crossing between the broken and the red linear fitting line is characteristic of a firm with .below the dashed line we can find the small companies , and it is clear that most of them also appear left of the crossing point , indicating a client - base smaller than . in order to quantify the relevance of the many - to - one approach , we measure the number of suppliers in our selection that do not have a key customer but are permanently related with a large number of customers .a supplier with a single key - customer is a firm that is to a large extent ( 50 % ) dependent on a single customer .such a seller could be more fragile towards his customer s financial environment than a supplier who has no single key customer . however , the nature of the key - customer relations is not known to us. 
it could be the nature of small businesses , though it is obvious from figure [ subfig : indegreevssales ] that the firms that have low in - degree correspond to a very large confidence interval ( meaning that there are firms that have very large sales , in the order of million euro , and yet a single customer ) . this might have a significant ( negative ) impact on the experiments that we are performing . namely , it might indicate that there are some incidental events of sales that do not reflect the regular trading pattern . the vast number of key - customers in our dataset could also be connected with the fact that the data precede the economic crisis , and it is probable that an increased number of liquidations , mergers , etc. happened , which can be reflected in the incidental transactions . figure [ fig : truefalsekeycustimers ] displays two subgroups of the suppliers presented in figure [ fig : indegree ] . on the right panel are the suppliers that have a ` key - customer ' , and on the left appear the suppliers that do not have such a customer . we define a key - customer as one that purchases at least 50% of the supplier s annual sales . out of the 671 firms in the sample there are 414 creditors with a key - customer and 235 without one . having a key - customer is a feature of the supplier , and clearly all suppliers that have one customer ( ) qualify for it . we can appreciate that the range of firms that have a key - customer is dominated by suppliers with a small in - degree . however , we also find that suppliers with a large in - degree ( ) have a key - customer . the reason is that in many situations , the payment distribution to a single supplier is fat - tailed , i.e. the largest customer pays an order of magnitude more than the second largest one . a good example of this is a phone company : it has a few very large customers and the majority are single - time walk - in clients . we can then expect that the suppliers with a key - customer will be subject to transmission of financial signals from their key customer , either directly by peer - to - peer interaction or indirectly by responding quickly to the factors that influence the key - customer .
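the key - customer test can be written down in a few lines ; the sketch below is a hypothetical implementation of the 50% criterion defined above , with the invoice records assumed to be available as ( supplier , customer , amount ) triples .

```python
# hypothetical sketch of the key-customer criterion: a customer is a key customer
# of a supplier if it accounts for at least half of that supplier's invoiced sales.
from collections import defaultdict

def key_customers(invoices, share=0.5):
    """invoices: iterable of (supplier_id, customer_id, amount) triples.
    Returns {supplier_id: customer_id} for suppliers that have a key customer."""
    totals = defaultdict(float)
    per_pair = defaultdict(float)
    for supplier, customer, amount in invoices:
        totals[supplier] += amount
        per_pair[(supplier, customer)] += amount
    found = {}
    for (supplier, customer), amount in per_pair.items():
        if totals[supplier] > 0 and amount / totals[supplier] >= share:
            found[supplier] = customer
    return found
```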
in this small subgroup of the firms , as the ratio of suppliers with a key customer to those without is 2:1 , we should expect to see a contagion effect that spans at the very least the firms with a key customer . we checked the contagion of the growth between the customers and the suppliers on the selected sample of suppliers with the matching 80 - 120% . we compared the actual measured growth in net - sales ( cash and credit ) from 2007 to 2008 of each supplier , , against an estimated aggregated growth of purchases from all its customers . we assume that the change in purchases of a buyer from one year to the next is uniform across all his suppliers , so the estimated purchases of buyer from supplier in 2008 are the payment received in 2007 , , weighted by the trend in purchases of : summing over all of the supplier s customers gives an estimate of the growth in his sales from 2007 to 2008 : if we place ( [ eq : loggrowth ] ) on the y - axis and ( [ eq : logpredictgrowth ] ) on the x - axis we obtain a 2-dimensional scatter in which a dot will travel in the upward direction away from the origin to mark a firm with growing sales , and will travel to the right of the origin to mark positive _ predicted _ growth in sales . this scatter plot is shown in figure [ subfig:3bgrowth ] . assuming that the links in the 2007 network are constant across the three - year frame ( 2006 , 2007 , 2008 ) and that a customer s purchases correspond to a supplier s sales , the resulting pattern is expected to form a straight line through the origin with a slope of . [ figure [ fig:3growth ] : growth of a supplier s sales , recorded in the financial statements , versus our estimation of the growth of the collective / aggregated orders from all his customers . the side / top bar plots in both subfigures are used to estimate the centroid of the clouds of points . figure [ subfig:3agrowth ] has a centroid in the first quadrant , defined by a positive value of the mean growth rate of sales and a positive value of the mean growth rate of estimated demand , while figure [ subfig:3bgrowth ] has a centroid in the third quadrant , defined by a negative value of the mean growth rate of sales and a negative value of the mean growth rate of estimated demand . ] we applied the same aggregation and estimation procedure to the prior period . applying similar reasoning , we write the growth in sales ( 2006 - 2007 ) as . again , assuming that any change in the customer pool of a supplier between 2006 and 2007 is negligible , we write the aggregated growth in orders for that period as . figure [ subfig:3agrowth ] displays this scatter plot . it is important to note the magnitude of the growth rates : for the majority of the suppliers , a small increase in orders to the supplier corresponds with a small increase in the annual net - sales . this is the reason that the bulk of the points are close to the origin of the axes :
the growth rate distribution is extremely narrow and deviations are dominated by the rare events . we should still expect that , by the scaling nature of the growth rates , the rare events will render the same pattern as the frequent ones . in the subplots of figure [ fig:3growth ] we added ( top and right side of each subplot ) a box - and - whisker plot on the sides to mark the univariate growth rate and predicted growth distributions . in these sidebars we can note two features : * ( 1 ) * the estimated growth rate distribution is also narrow , i.e. it shows similarity to the tent - shaped actual growth rate distributions , and * ( 2 ) * the positions of the median growth rate values indicate the centroid of the pattern : in the period 2007/2006 ( panel [ subfig:3agrowth ] ) , the median value of both the estimated and the real growth of all suppliers is positive . the growth in the estimated sales is larger than the measured one . in contrast to that , in the next period , 2008/2007 ( panel [ subfig : gsyp1yvsyi ] ) , the centroid sits in the third quadrant , i.e. the median values of the box plots in [ subfig : gsyp1yvsyi ] show negative growth of both the estimated and the real sales of the suppliers . this is evidence of the transition between an economic boom ( a positive growth in sales and demand ) in the first period and a bust ( negative growth in demand and sales ) in the second period . the results show that a decline of growth in sales and purchases in the second period ( an increase in the first ) occurred concurrently for suppliers and buyers . this simultaneous switch in the typical behavior is the effect of the credit crunch : the purchasing power of the customers has shrunk as most of the suppliers were not able to extend credit to their customers . however , figure [ fig:3growth ] fails to prove a correlation between the estimated and the real sales growth of the suppliers . even by visual inspection , it is evident that the slope of is not present . the conclusion we draw from the plot is that in the selected sample little or no correlation exists between the aggregated changes in purchases by a supplier s customers and the growth rate in that supplier s sales . some companies , however , do not ` follow the crowd ' by going negative ; several reasons apply , among which are regulatory actions and the sectoral behavior , which we will analyze in the sequel . the sub - sample of the network ( ) consists of a heterogeneous set : there is a large variability in the connectivity pattern ( in - degree ) and in the net - sales . but the most relevant diversification factor ( and possibly related to the previous ones ) is that the companies come from different industries .
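before turning to the sectoral breakdown , the many - to - one estimation procedure used above can be summarized in a short sketch ; the log - growth measure , the dictionary layout and the function names are assumptions made for illustration .

```python
# sketch of the "many-to-one" growth estimate: each customer's 2007 payment to a
# supplier is rescaled by that customer's purchase trend (assumed uniform across
# its suppliers) and the rescaled payments are summed to predict the supplier's
# sales; the log-growth measure is an assumption on our part.
import math

def predicted_growth(tc_2007, purchases, supplier, y0=2007, y1=2008):
    """tc_2007: {(supplier, customer): payment received in 2007}
    purchases: {customer: {year: total purchases from the balance sheet}}"""
    base = scaled = 0.0
    for (s, customer), amount in tc_2007.items():
        if s != supplier:
            continue
        base += amount
        trend = purchases[customer][y1] / purchases[customer][y0]
        scaled += amount * trend
    return math.log(scaled / base) if base > 0 else float("nan")

def actual_growth(sales, supplier, y0=2007, y1=2008):
    """Measured log-growth of the supplier's net-sales between two years."""
    return math.log(sales[supplier][y1] / sales[supplier][y0])
```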
about half of the selected suppliers are in the manufacturing sector ( industrial classification numbers 15xx-37xx ) . the other half of the sample are in other industries , primarily in construction , wholesale , and transport . the degree distributions and the composition of the sample satisfying the 80 - 120% matching criterion are given in figure [ fig : indegreehist9panel ] . there is a striking difference in the connectivity between the sub - samples from different industrial sectors . the manufacturing sector ( d ) is the most populated , and the in - degree histogram of the companies within this sample shows that the companies follow the general in - degree distribution of figure [ fig : indegree ] . the second largest sector is wholesale trade and retail ( g ) . we note that the number of companies with a large in - degree in sector g is exaggerated : comparing with the histogram of sector d , the number of firms in g with is as large as that in d , although the total number of firms is 4 times smaller . this is due to the different nature of their businesses , as will be explained further in the text . the third largest sector is k. in our data set , most of the companies in this sector are it ( software ) companies . again , the accounting procedures in it are different from the ones in manufacturing : software companies will often provide services rather than goods . in order to understand the differences in growth rate correlation between the suppliers coming from different industrial sectors and their customers , the scatter plot in figure [ subfig : gsyp1yvsyi ] was split by sectoral subgroups . the subgroups are shown in figure [ fig : indegreehist9panel ] , and the split growth rate scatter plots are given in figure [ fig : firmsinview ] . in this figure , there are significant differences on the sectoral level . most remarkable is the characteristic behavior of the g sector ( the general name is ` wholesale and retail ' , but in our database it is mostly composed of wholesale firms ) . in this sector a significant number of firms has an estimated growth of orders close to one , but the measured growth in sales shows a notable variability away from no - growth . [ figure [ fig : firmsinview ] : growth of supplier sales versus estimated growth of orders of all customers of a supplier , per industrial sector ; the sectors are the same as in figure [ fig : indegreehist9panel ] , and the number of firms in each sector is given in the title of each panel . ] another sector that shows uncommon behavior is sector f , construction . in this sector the growth response is opposite to the situation in sector g : there is a large variability in the expected growth of customer orders , but the recorded growth of supplier sales shows very little deviation from a state of no - growth . the atypical in - degree distribution in the construction industry is discussed in miura et al . in the japanese industrial business network , they ran a flow algorithm and observed a difference between in - degree and the ability to source or receive money . in construction , a large in - degree of a firm may not indicate large amounts of inflowing funds , since at times construction firms may relay funds via small single - customer subcontracting firms that dominate their payment distribution ( firms that exist only to operate in a single project ) . in our network we also observe greater intra - industry trade in construction , more than would be expected by chance . however , as can be seen from figure [ fig : indegreehist9panel ] , sector f ( center panel ) is small in total number of firms .
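a minimal sketch of the sectoral split is given below ; it assumes that the per - supplier growth rates have already been computed and stored in a pandas dataframe with hypothetical columns ` sector ` , ` g_actual ` and ` g_estimated ` .

```python
# illustrative sketch of the per-sector comparison: group suppliers by sector and
# report, for each group, the firm count, the correlation between measured and
# estimated growth, and the spread of both quantities.
import pandas as pd

def sector_summary(df: pd.DataFrame) -> pd.DataFrame:
    def stats(group: pd.DataFrame) -> pd.Series:
        return pd.Series({
            "n_firms": len(group),
            "corr": group["g_actual"].corr(group["g_estimated"]),
            "std_actual": group["g_actual"].std(),
            "std_estimated": group["g_estimated"].std(),
        })
    return df.groupby("sector")[["g_actual", "g_estimated"]].apply(stats)
```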
[ figure [ fig : cagr2panel ] : compound annual growth rate of supplier sales versus the predicted compound annual growth rate of customer orders , for all firms and for firms with net - sales above 1 million eur ; correlation coefficients for each rating / size category appear in the legend in the upper left corner of each plot . ] in order to factor out a possible cause of heterogeneity in behavior we chose suppliers that belong to a single industrial sector . being the largest in the sample , we chose a sub - sector of manufacturing , ` machinery and mechanical equipment ' . this sector is well represented in the italian trade network . we further tried to distill the effect of contagion by observing the geometric mean of growth rates in two consecutive periods , 2007/6 and 2008/7 . this is commonly termed the compound annual growth rate ( cagr ) . the plots of cagr are given in figure [ fig : cagr2panel ] . there we placed the cagr of the supplier on the y - axis versus the predicted cagr from the sum of the purchases of his trade - credit customers on the x - axis . colors of the dots mark the rating class of the companies : a in gray , b in red , and c in blue .
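before interpreting the scatter quadrant by quadrant , the cagr used in figure [ fig : cagr2panel ] can be made explicit ; the sketch below follows the geometric - mean definition given above , and the grouping by rating class mirrors the correlation coefficients reported in the figure legends ( column names are assumptions ) .

```python
# minimal sketch of the two-period compound annual growth rate and of the per-rating
# correlation between supplier cagr and predicted customer-order cagr.
import pandas as pd

def cagr(v_2006: float, v_2008: float) -> float:
    """Geometric mean of the growth factors 2007/2006 and 2008/2007; it depends
    only on the endpoint values because the intermediate year cancels out."""
    if v_2006 <= 0 or v_2008 <= 0:
        return float("nan")
    return (v_2008 / v_2006) ** 0.5 - 1.0

def rating_correlations(df: pd.DataFrame) -> dict:
    """Pearson correlation of supplier vs. predicted cagr within each rating class."""
    return {rating: group["cagr_sales"].corr(group["cagr_orders"])
            for rating, group in df.groupby("rating")}
```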
the scatter plot in figure [ fig : cagr2panel ] may be interpreted in the following way : the companies in the _ first quadrant _ have managed to keep positive growth of both sales and orders in the two - year period . in both panels we observe that points in this quadrant are red . this means that the suppliers with rating class b ( 4 - 6 ) managed to maintain their sales and orders with the same partners over the two - year period and through the financial crisis . although this result is in contrast with the hypotheses proposed in , it did not come as a surprise . the homophily measurement on the same data set that is discussed in kelman et al . gives clear evidence that same rating - class firms have a greater probability to attach to each other , with a small tendency of the customer to create business with a higher rated supplier . in the _ second quadrant _ are located suppliers that kept their growth in sales , opposing the downward trend in orders by their customers . negative estimated growth but positive real growth could happen for many reasons . the most obvious one draws from our assumption that the network is static . this is only an approximation , and although in the manufacturing sector it is mostly a good one , in some cases it is possible that financial changes take place and trading partners change over time , especially for the firms that are credit - constrained . according to delli gatti et al . , the origin of fluctuations is due to the ever - changing configuration of the network of heterogeneous firms , and the entire dynamics is shaped by financial variables . the number of companies in this quadrant is also very small ( only 4 firms with sales larger than 1 million eur out of only 13 firms in the sample ) . the suppliers in the _ third quadrant _ were affected by the crisis the most : both the orders they received and their own sales have decreased . in this range , surprisingly , we find the suppliers from the top rating class ` a ' . having a good credit - rating is related with the current ratio and liquidity of the firm , as previously demonstrated by beaver and later by ohlson and others . also , having a good credit - rating corresponds with easy access to bank credit , i.e. the borrower is able to obtain a greater proportion of his collateral on loan , with low interest . however , a good liquidity position does not correspond with the state of the market in general , therefore these companies were not able to maintain their growth rate in the time frame of the crisis . last , in the _ fourth quadrant _ are the suppliers whose estimated aggregated orders grew while their sales decreased . there could be several reasons for this , though this might also be a sign that the network was not as stable as expected , so the customers have purchased from a new supplier . the most interesting outcome of these measurements is the correlation of the cagr with both rating and firm size : observing the pearson correlation coefficients , given in the keys of the subfigures of figure [ fig : cagr2panel ] , we can see that the correlation between the compound annual growth rates in sales and orders is the highest ( ) in the case of sized companies . in the second place come the medium credit - rated companies , in the case of all firms , with a correlation coefficient of . the companies with rating class c did not exhibit correlations greater than , either in the case of all firms or large firms . additional support for the very good correlation between the expected growth of customers orders and the growth of sales in the companies with top ranking ( and therefore good access to credit ) can be found in the literature . recently , several scholars attempted to model and empirically confirm the effect of the 2007 - 2008 financial crisis on between - firm liquidity provision . searching for a causal effect of credit rationing by the banks , garcia - appendini & montoriol - garriga tested the hypothesis that firms with high liquidity levels before the crisis increased the trade credit extended to other corporations and subsequently experienced better performance compared to ex - ante cash - poor firms . they conclude that trade credit taken by constrained firms increased during this period ; these liquid firms therefore had an interest in maintaining their customers , and this might explain the high correlation which we measured . this work examines the correlation of the estimated growth rates of customers orders with the measured growth rate of suppliers sales in a many - to - one setting , where the customers of each supplier are responsible for at least 80% of the supplier s sales , measured at the beginning of the second time period . by establishing their trade connections during a one - year period , a growth rate prediction is made from a combination of the trend in purchases by these customers and the social structure in the supplier s neighborhood . this prediction is compared with the real growth in sales of the suppliers . results indicate the existence of growth rate contagion only inside the following restricted sub - selections of the manufacturing firms , and compounded over two years : * ( 1 ) * the large firms with a - class credit - rating , and * ( 2 ) * suppliers of any size that have a medium credit - rating ( b - class ) . this gives evidence that direct contagion of growth rates between customers and suppliers is sensitive to the following factors : * sectoral heterogeneity : in general , the industry of a supplier and a customer may or may not be the same . each industrial sector has a behavior that is characteristic to it due to the accounting procedures required in that industry .
* in data coming from the bank , microscopic effects are still secondary compared to macroscopic effects , even during a state of crisis , since the problematic customers may avoid approaching the bank . * missing data : while carefully considering numerical drift and retaining the overall statistics , the cleaned and filtered sample is still three orders of magnitude smaller than the number of firms in the full data set . we are also aware that performing the mesoscopic many - to - one approach on the rather limited ( especially in terms of network dynamics ) data has been based on assumptions that may not be valid , especially in the case of the crisis . it is expected that the customers , in the case when they have to shrink their orders , would be selective in the choice of the suppliers and would not shrink their orders uniformly as our model assumes . however , in order to know how the customers make these decisions we would need to have more data . as a final note , the macroscopic effect that was captured by the measurements in figure [ fig:3growth ] is realistic and its interpretation is supported by the official statistics on industrial production and business confidence in the given period : the statistics show that the italian industrial production peaked in 2007 and then declined , reaching a 10-year low in 2009 . the work presented in this article is partly supported by the project `` a large scale network analysis of firm trade credit '' , project grant ` in01100017 ` , institute for new economic thinking ( inet ) . bree ds , kelman g , ussher l , lamieri m , solomon s ( 2015 ) too dynamic to fail - empirical support for an autocatalytic model of minsky s financial instability hypothesis . to appear in the journal of economic interaction and coordination . tamura k , miura w , takayasu m , takayasu h , kitajima s , et al . ( 2012 ) estimation of flux between interacting nodes on huge inter - firm networks . world scientific , volume 16 , pp . miura w , takayasu h , takayasu m ( 2012 ) the origin of asymmetric behavior of money flow in the business firm network . european physical journal special topics 212 : 65 - 75 . watanabe h , takayasu h , takayasu m ( 2012 ) biased diffusion on japanese inter - firm trading network : estimation of sales from network structure . new j phys 14 . kiyotaki n , moore j . credit cycles .
national bureau of economic research , 1995 .boissay f ( 2006 ) credit chains and the propagation of financial distress .technical report 573 , european central bank .url http://ideas.repec.org/p/ecb/ecbwps/20060573.html .petersen ma , rajan rg ( 1997 ) trade credit : theories and evidence .the review of financial studies 10 : 661691 .schumpeter ja ( 1976 ) capitalism , socialism and democracy .new york : allen & unwin .solomon s , richmond p ( 2002 ) stable power laws in variable economies ; lotka - volterra implies pareto - zipf .the european physical journal b - condensed matter and complex systems 27 : 257261 .challet d , solomon s , yaari g ( 2009 ) the universal shape of economic recession and recovery after a shock .economics : the open - access , open - assessment e - journal 3 .goldenberg j , libai b , solomon s , jan n , stauffer d ( 2000 ) marketing percolation .physica a : statistical mechanics and its applications 284 : 335347 .solomon s , golo n ( 2013 ) minsky financial instability , interscale feedback , percolation and marshall walras disequilibrium .accounting , economics and law 3 : 167260 .minsky hp ( 1982 ) the financial - instability hypothesis : capitalist processes and the behavior of the economy . in : kindleberger c ,laffargue jp , editors , financial crises : theory , history and policy , cambridge university press .shnerb nm , louzoun y , bettelheim e , solomon s ( 2000 ) the importance of being discrete : life always wins on the surface .proceedings of the national academy of sciences 97 : 1032210324 .louzoun y , shnerb nm , solomon s ( 2007 ) microscopic noise , adaptation and survival in hostile environments .the european physical journal b - condensed matter and complex systems 56 : 141148 .klass os , biham o , levy m , malcai o , solomon s ( 2007 ) the forbes 400 , the pareto power - law and efficient markets .the european physical journal b 55 : 143 - 147 .choi y , douady r ( 2012 ) financial crisis dynamics : attempt to define a market instability indi- cator .quantitative finance 12 : 13511365 .huang zf , solomon s ( 2001 ) finite market size as a source of extreme wealth inequality and market instability .physica a : statistical mechanics and its applications 294 : 503513 .krugman p ( 1999 ) balance sheets , the transfer problem , and financial crises .international tax and public finance 6 : 459472 .bottazzi g , grazzi m , secchi a , tamagni f ( 2011 ) financial and economic determinants of firm default .journal of evolutionary economics 21 : 373406 .schwarzkopf y , axtell rl , farmer jd ( 2010 ) the cause of universality in growth fluctuations .arxiv nonlinear sciences e - prints .shenoy j , williams r ( 2011 ) customer - supplier relationships and liquidity management : the joint effects of trade credit and bank lines of credit .kelman g , bree d , manes e , lamieri m , golo n , et al .( 2015 ) dissortative from the outside , assortative from the inside : social structure and behavior in the industrial trade network . in : proceedings of the 48th annual hawaii international conference on system sciences .ieee , computer society press , 2015 , p. 
10 .delli gatti d , gaffeo e , gallegati m , giulioni g , palestrini a ( 2008 ) emergent macroeconomics : an agent - based approach to business fluctuations .beaver wh ( 1966 ) financial ratios as predictors of failure .journal of accounting research ( 4 ) empirical research in accounting : selected studies : 71 - 111 .ohlson ja ( 1980 ) financial ratios and the probabilistic prediction of bankruptcy .journal of accounting research 18 : 109 - 131 .garcia - appendini e , montoriol - garriga j ( 2012 ) firms as liquidity providers : evidence from the 2007 - 2008 financial crisis .available at ssrn 2023583 .
we propose a novel approach and an empirical procedure to test direct contagion of growth rate in a trade credit network of firms . our hypotheses are that the use of trade credit contributes to contagion ( from many customers to a single supplier - `` many to one '' contagion ) and amplification ( through the interaction with macroscopic variables , such as the interest rate ) of growth rate . in this paper we test the contagion hypothesis , measuring empirically the mesoscopic `` many - to - one '' effect . the effect of amplification has been dealt with in our paper . our empirical analysis is based on the delayed payments between trading partners across many different industrial sectors , intermediated by a large italian bank during the year 2007 . the data are used to create a weighted and directed trade credit network . assuming that the linkages are static , we look at the dynamics of the nodes / firms . based on the ratio of the 2007 trade credit to the sales and purchases items on the profit and loss statements , we estimate the trade credit in 2006 and 2008 . applying the `` many to one '' approach we aggregate the predicted growth of trade ( demand ) per supplier , and compare it with the real growth of sales of the supplier . we analyze the correlation of these two growth rates over two yearly periods , 2007/2006 and 2008/2007 , and in this way we test our contagion hypothesis . we could not find strong correlations between the predicted and the actual growth rates . we provide evidence of contagion only in restricted sub - groups of our network , and not in the whole network . we do find a strong macroscopic effect of the crisis , indicated by a coincident negative drift in the growth of sales of nearly all the firms in our sample .
we consider a population of agents , representing for instance people , firms or nations , engaged in bilateral collaborative interactions . each interaction is described by a continuous snowdrift game , one of the fundamental models of game theory . in this game , an agent can invest an amount of time / money / effort into the collaboration with another agent . cooperative investments accrue equal benefits to both partners , but create a cost for the investing agent . assuming that investments from both agents contribute additively to the creation of the benefit , the payoff received by an agent from an interaction with another agent can then be written as . the game thus describes the generic situation in which agents invest their personal resources to create a common good shared with the partner . as an example of the snowdrift game , the reader may think of a scientific collaboration where two researchers invest their personal time in a project , while the benefit of the publication is shared between them . this example makes it clear that the benefit of the collaboration must saturate when an extensive amount of effort is invested , whereas the cost to an agent , measured for instance in terms of personal well - being , clearly grows superlinearly once the personal investment exceeds some hours per day . in the following we do not restrict the cost- and benefit - functions , and , to specific functional forms , except in the numerical investigations . however , we assume that both are differentiable and , moreover , that is sigmoidal and is superlinear ( cf . fig . [ pocfig2 ] ) . these assumptions capture basic features of real - world systems such as inefficiency of small investments , saturation of benefits at high investments , as well as additional costs incurred by overexertion of personal resources , and are widely used in the sociological and economic literature . to account for multiple collaborations per agent , we assume that the benefits received from collaborations add linearly , whereas the costs are a function of the sum of investments made by an agent , such that the total payoff received by an agent is given by , where denotes the _ total investment _ of the agent while denotes the total investment made in the collaboration . this is motivated by considering that benefits from different collaborations , say different publications , are often obtained independently of each other , whereas the costs generated by different collaborations stress the same pool of personal resources of an agent . let us emphasize that we do not restrict the investment of an agent further . while investments can not be negative , no upper limit on the investments is imposed . furthermore , the agents are free to make different investments in collaborations with different partners . thus , to optimize its payoff , an agent can reallocate investments among its potential partners as well as change the total amount of resources invested .
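a minimal sketch of this payoff structure is given below ; the specific sigmoidal benefit and superlinear cost are our own illustrative choices and are not the functions used in the paper s numerical experiments .

```python
# sketch of the collaborative payoff: a shared sigmoidal benefit per collaboration
# and a superlinear cost of an agent's total investment; functional forms and
# parameters are illustrative assumptions.
import numpy as np

def benefit(x, k=5.0, x0=1.0):
    """Sigmoidal benefit created by the total investment x in one collaboration."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def cost(x, alpha=0.3, p=2.0):
    """Superlinear cost of an agent's total cooperative investment x."""
    return alpha * x ** p

def payoff(i, E):
    """Total payoff of agent i; E[i, j] is the investment of agent i into the
    collaboration with agent j (the diagonal is ignored)."""
    E = np.asarray(E, dtype=float)
    shared = E + E.T                     # total investment per collaboration
    others = [j for j in range(len(E)) if j != i]
    return benefit(shared[i, others]).sum() - cost(E[i, others].sum())
```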
for specifying the dynamics of the network , we assume the agents to be selfish , trying to increase their total payoff by gradient optimization . every agent can cooperate with every other agent . thus , the network of potential collaborations is fully connected and the deterministic time - evolution of the model system is given by a system of ordinary differential equations of the form of eq . [ timeevolution ] . the network dynamics , considered in the following , is therefore only the shifting of link weights . note however that already the weight dynamics constitutes a topological change . as will be shown in the following , the agents typically reduce their investment in the majority of potential collaborations to zero , so that a sparse and sometimes disconnected network of non - vanishing collaborations is formed . therefore the terminology of graph theory is useful for characterizing the state that the system approaches . below , we use the term _ link _ to denote only those collaborations that receive a non - vanishing investment . a link is said to be _ bidirectional _ if non - vanishing investments are contributed by both connected agents , while it is said to be _ unidirectional _ if one agent makes a non - vanishing investment without reciprocation by the partner . likewise , we use the term _ neighbours _ to denote those agents that are connected to a focal agent by non - vanishing collaborations and the term _ degree _ to denote the number of non - vanishing collaborations in which a focal agent participates . in the following , the properties of the model are investigated mostly by analytical computations that do not require further specifications . only for the purpose of verification and illustration we resort to numerical integration of the ode system . for these we use the functions . for studying the time - evolution of exemplary model realizations by numerical integration , all variables are assigned random initial values drawn independently from a gaussian distribution with expectation value and standard deviation , constituting a homogeneous state plus small fluctuations . the system of differential equations is then integrated using euler s method with variable step size in every timestep , chosen such that no variable is reduced by more than half of its value in the step . if in a given timestep a variable falls below a threshold and the corresponding time derivative is negative , then it is set to zero for one step to avoid very small time steps . we emphasize that introducing the threshold is done purely to speed up numerical integration and does not affect the results or their interpretation . in particular , we confirmed numerically that the exact value of does not influence the final configuration that is approached . in all numerical results shown below , numerical exploration of the system reveals frustrated , glass - like behavior : starting from a homogeneous configuration as described above , it approaches either one of a large number of different final configurations , which are local maxima of the total payoff . a representative example of an evolved network , and snapshots from the time - evolution of two smaller example networks , are shown in figs . [ pocfig1],[pocfig1a ] , respectively . in the example networks only those links are shown that receive a non - vanishing ( i.e. above - threshold ) investment . most of these non - vanishing links are _ bidirectional _ , receiving investments from both of the agents they connect .
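the integration scheme described above can be sketched as follows ; the gradient dynamics , the step - size cap and the cutoff are stated in words in the text , but the concrete numerical values used here are assumptions for illustration .

```python
# sketch of the explicit euler scheme with a variable step size (no investment may
# lose more than half of its value in one step) and a small cutoff below which a
# shrinking investment is set to zero; parameters are illustrative.
import numpy as np

def euler_step(E, grad, dt_max=0.1, eps=1e-6):
    """One step of dE/dt = grad(E) with non-negative investments E."""
    g = grad(E)
    dt = dt_max
    shrinking = (g < 0) & (E > 0)
    if np.any(shrinking):
        # cap the step so that no positive investment is more than halved
        dt = min(dt_max, float(np.min(0.5 * E[shrinking] / -g[shrinking])))
    E_new = np.maximum(E + dt * g, 0.0)
    E_new[(E_new < eps) & (g < 0)] = 0.0   # avoid vanishingly small time steps
    return E_new
```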
only rarely , _links appear , which are maintained by one agent without reciprocation by the partner . for further investigations it useful to define a _bidirectionally connected component _( bcc ) as a set of agents and the bidirectional links connecting them , such that , starting from one agent in the set , every other agent in the set can be reached by following a sequence of bidirectional links . in the numerical investigationswe observe that all bidirectional links within a bcc receive the same total investment in the final state .however , the investment made in every given link is in general not split equally among the two connected agents .furthermore , all agents within a bcc make the same total cooperative investment in the final state .however , the investments of one agent in different collaborations are in general different .the _ coordination _ of total investments , therefore arises although no agent has sufficient information to compute the total investment made by any other agent .we emphasize that the level of investments , which the agents approach is not set rigidly by external constraints but instead depends on the topology of the network of collaborations that is formed dynamically .this is evident for instance in differences of up to 20 % between the level of investment that is reached in different bccs of the same network . to understand how coordination of investment arises, we now formalize the observations made above .we claim that in our model in the final state the following holds : within a bcc ( i ) every agent makes the same total investment , and ( ii ) either all bidirectional links receive the same total investment or there are exactly two different levels of total investment received by bidirectional links . for reasons described below , the case of two different levels of total investment per linkis only very rarely encountered . in this caseevery agent can have at most one bidirectional link that is maintained at the lower level of investment .we first focus on property ( i ) .this property is a direct consequence of the stationarity of the final state .consider a single link .since both investments , and , enter symmetrically into , the derivative of the benefit with respect to either investment is .thus , if , the stationarity conditions require this stipulates that the slope of the cost of the two interacting agents must match the slope of the shared benefit in the stationary state ( fig .[ pocfig2 ] ) . due to the symmetry of , holds for all .therefore , eq . implies . as we assumed to be superlinear , is injective and it follows that , such that and , are at a point of identical total investment . iterating this argument along a sequence of bidirectional links yields ( i ) .let us remark that the stationarity of vanishing investments may be fixed due to the external constraint that investments have to remain non - negative .the stationarity condition for vanishing and uni - directional links , analogous to eq . , is therefore because of the inequalities that appear in this equation ,the argument given above does not restrict the levels of total investment found in different components .for similar reasons agents that are only connected by unidirectional links can sustain different levels of investment , which is discussed in sec . [ secexploitation ] .we note that , although the network of potential interactions is fully connected , no information is transfered along vanishing links .therefore , the equation of motion , eq . 
[ timeevolution ] , should be considered as a local update rule , in the sense that it only depends on the state of the focal agent and on investments received from a small number of direct neighbours . in order to understand property ( ii ) we consider multiple links connecting to a single agent . in an equilibriumthe investment into each of the links has to be such that the slope of the benefit function of each link is identical .otherwise , the payoff could be increased by shifting investments from one link to the other .since the benefit function is sigmoidal , a given slope can be found in at most two points along the curve : one above and one below the inflection point ( ip ) . by iteration, this implies that if a stationary level of investment is observed in one link , then the investment of all other links of the same bcc is restricted to one of two values , which amounts to the first sentence of ( ii ) . for understanding why the case of two different levels of investments is rarely encounteredthe stability of steady states has to be taken into account . a local stability analysis , based on linearisation and subsequent application of jacobi s signature criterion ,is presented in the appendix .we show that for a pair of agents connected by a bidirectional link , stability requires and every pair of links and connecting to the same agent has to satisfy note that eq .does not stipulate the sign of as it only implies .as eq . applies also to the link , the same holds for .we therefore have to consider three different cases when testing the compatibility of eq . with eq . : a ) : : and , ( both investments above the ip ) b ) : : and , ( both investments below the ip ) c ) : : and ( one investment above and one below the ip ) . in case a ) ,is trivially fulfilled as the left hand side has positive and the right hand side negative sign . in caseb ) , eq . and. are incompatible : estimating the lower bound of the right hand side of using the relation leads to the contradiction this shows that in a stable stationary state , every agent can at most have one link receiving investments below the ip . in case c ) , eq .can in principle be satisfied .however , the equation still imposes a rather strong restriction on a positive requiring high curvature of the benefit function close to saturation .the restriction becomes stronger , when the degree of agent increases .bilateral links with investments below the ip can be excluded entirely , if the benefit function approaches saturation softly , so that the curvature above the inflection point remains lower or equal than the maximum curvature below the inflection point . for such functions ,every pair of solutions to the stationarity condition yields a pair of coefficients violating . 
in this case only configurations in which all links receive investments above the ip can be stable and hence all links produce the same benefit in the stable stationary states .this explains why the case of two different levels of cooperation is generally not observed in numerical investigations if realistic cost and benefit functions are used .for understanding the central role the ip plays for stability consider that in the ip the slope of is maximal .therefore , links close to the ip make attractive targets for investments .if the total investment into one link is below the ip then some disturbance raising ( lowering ) the investment increases ( decreases ) the slope , thus making the link more ( less ) attractive for investments .hence , below the ip , a withdrawal of resources by one of the partners , no matter how slight , will make the collaboration less attractive , causing a withdrawal by the other partner and thereby launching the interaction into a downward spiral .conversely , for links above the ip the gradual withdrawal of resources by one partner increases the attractiveness of the collaboration and is therefore compensated by increased investment from the other partner . in psychologyboth responses to withdrawal from a relationship are well known .the proposed model can therefore provide a rational for their observation that does not require explicit reference to long term memory , planning , or irrational emotional attachment .for our further analysis property ( ii ) is useful as it implies that , although our model is in essence a dynamical system , the bccs found in the steady states of this system can be analyzed with the tools of graph theory for undirected graphs . in the secs .[ secleaders ] , [ secgiantcomp ] we go one step further and treat not only the bcc but the whole network as an undirected graph .we thereby ignore the differences between directed and undirected links in order to study properties such as the degree- and component - size distributions before we continue in sec .[ secexploitation ] with a more detailed investigation of directed links and their topological implications .despite the coordination described above , the payoff extracted by agents in the final state can differ significantly .this is remarkable because the agents follow identical rules and the network of collaborations is initially almost homogeneous with respect to degree , link weights , and neighbourhood . because all bidirectional links in a bcc produce the same benefit , the total benefit an agent receives is proportional to the degree of the agent .by contrast , the cost incurred by an agent does not scale with the degree , but is identical for all agents in the bcc , because agents of high degree invest a proportionally smaller amount into their collaborations .topological positions of high degree thus allow agents to extract significantly higher benefits without requiring more investment .the payoff distribution in the population is governed by the degree distribution describing the relative frequency of agents with degree .figure [ pocfig3 ] shows a representative degree distribution of an evolved networks in the final state . 
while the finite width of the distribution indicates heterogeneity , the distribution is narrower , and therefore fairer , than that of an erdős - rényi random graph , which constitutes a null - model for randomly assembled network topologies . we verified that the variance of the evolved network is below the variance of a random graph for the whole range of admissible mean degree in a network of given size . although the snowdrift game is not a zero - sum game , payoffs can not be generated arbitrarily . in order to sustain the extraction of high payoffs by agents of high degree , investments have to be redistributed across the network . in the definition of our model , we did not include the transport of resources directly . nevertheless , a redistribution of investments arises indirectly from the asymmetry of the agents investments . this is illustrated in fig . [ pocfig4 ] . consider for instance an agent of degree 1 . this agent necessarily focuses his entire investment on a single collaboration . therefore , the partner participating in this collaboration only needs to make a small investment to make the collaboration profitable . he is thus free to invest a large portion of his total investment into links to other agents of possibly higher degree . in this way investments flow toward the regions of high degree where high payoffs are extracted . to explore the topological properties of the networks of collaborations in the final state further , we performed an extensive series of numerical integration runs in which we varied all parameters over a wide range . these revealed that an important determinant of the topology is the mean degree , where denotes the number of links and the number of agents in the network . given two evolved networks with similar , one finds that the networks are also similar in other properties such as the component - size distribution , clustering coefficient , and the fraction of collaborations that are unidirectional . we therefore discuss the topological properties of the evolved networks as a function of , instead of the original model parameters . we first consider the expected size of a network component to which a randomly chosen agent belongs . in contrast to the bccs discussed above , unidirectional collaborations are now taken into account in the computation of component sizes . the value of in the evolved network as a function of is shown in fig . [ pocfig5]a . the figure reveals that large components begin to appear slightly below . because of the difficulties related to integrating differential equations , our numerical investigations are limited to networks of up to 100 agents . while it is therefore debatable whether the observed behaviour qualifies as a phase transition , it can be related to the giant component transition commonly observed in larger networks . in the giant component transition a component is formed that scales linearly with network size . in the absence of higher correlations the transition occurs at , where is the mean excess degree of the network , i.e. , the number of additional links found connected to an agent that is reached by following a random link . in erdős - rényi random graphs , , therefore the giant component transition takes place at .
in the present model the transition in is shifted to higher values of because of the nature of the underlying snowdrift game .the snowdrift game favors cooperation in the sense that for an agent of degree zero it is always advantageous to initiate an interaction .therefore is the lowest possible value that can be observed in evolved networks .further , any evolved network with invariably consists of isolated pairs , which precludes the existence of a giant component .finally , the relatively narrow degree distribution of the evolved networks implies and therefore at the transition . to estimate an upper limit for the connectivity at which the giant component transition occurs , it is useful to consider degree homogeneous networks . in these networksthe degree distribution is a delta function and , so that the transition occurs at . in the networks evolved in the proposed model we can therefore expect a critical value of between one and two .based on numerical results we estimate that the giant component transition in the present model occurs at ( fig .[ pocfig5 ] ) . at this valuea power - law distribution of component sizes , which is a hallmark of the giant - component transition , begins to show already in relative small networks with .while in sec. [ seccoordination ] we have mainly considered bidirectional links , and in sec.[secleaders ] and [ secgiantcomp ] only distinguished between vanishing and non - vanishing links , we will now focus on unidirectional links , which one partner maintains without reciprocation by the other .the presence of such links in collaboration networks was recently discussed in detail by . for the discussion below it is advantageous to consider the mean degree of agents in a connected component , where and are the number of agents and links in the component .note that in large components while the two properties can be significantly different in small components .in contrast to , allows us to infer global topological properties : components with are trees .components with contain exactly one cycle to which trees might be attached . and, components with contain more than one cycle , potentially with trees attached . as in the previous section, the term component refers to maximal subgraphs which are connected by bidirectional and/or unidirectional links . according to this definition a componentmay , beside one or more bccs , contain agents , which only have unidirectional links . in the followingwe denote the set of these agents as the non - bcc part of the component ( nbcc ) . for the sake of simplicity we focus on components which contain only one bcc , but note that the case of multiple bccs can be treated analogously . unlike the bcc ,the nbcc is not a subcomponent but only a set of agents which are not necessarily connected .nevertheless , numerical results show that ( i * ) all nbcc agents make the same total investment and ( ii * ) all unidirectional links maintained by nbcc agents receive the same total investment .while property ( ii * ) can be understood analogously to property ( ii ) of bccs , property ( i * ) can not be ascribed to stationarity or stability conditions but seems to result from optimality restrictions . as a consequence of the properties ( i * ) and ( ii * )the number of outgoing links is identical for all agents in the nbcc .so far we have decomposed a component into the bcc and the nbcc . 
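the classification of components by their mean degree can be made concrete with a short sketch ; the use of networkx and the construction of the link set from the investment matrix are assumptions made for illustration .

```python
# illustrative sketch: in a connected component with n agents and l links,
# l = n - 1 gives a tree, l = n a single cycle (possibly with trees attached) and
# l > n more than one cycle, matching a mean degree 2l/n below, at or above 2.
import networkx as nx
import numpy as np

def classify_components(E, eps=1e-6):
    """E[i, j] is agent i's investment into the collaboration with agent j; a link
    exists if either direction carries a non-vanishing investment."""
    A = np.asarray(E) > eps
    M = (A | A.T).astype(int)
    np.fill_diagonal(M, 0)                 # no self-collaborations
    G = nx.from_numpy_array(M)
    kinds = {}
    for comp in nx.connected_components(G):
        sub = G.subgraph(comp)
        n, l = sub.number_of_nodes(), sub.number_of_edges()
        kinds[frozenset(comp)] = ("tree" if l == n - 1
                                  else "one cycle" if l == n
                                  else "several cycles")
    return kinds
```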
within each subset ,all agents make the same total investment , and all links receive the same total investment , therefore each subset can be characterized by two parameters , for bcc and for the nbcc . to recombine the subsets and infer properties of the whole component, we need to study the relation between these four parameters .the central question guiding our exploration is , why do agents not start to reciprocate the unidirectional investments .the lack of reciprocation implies that the unidirectional links are either less attractive or just as attractive as bidirectional links .we distinguish the two scenarios a ) : : , b ) : : . in casea ) the unidirectional collaborations are as attractive as targets for investments as bidirectional collaborations . in typical networks , where all remaining links receive investments above the ip this implies .furthermore , in case a ) the stationarity condition , eq . , requires that , which stipulates .therefore the whole component consists of agents making an investment and links receiving an investment .conservation of investments within a component implies and hence we know further that , where is the number of outgoing links of an agent in the nbcc . inserting in eq .yields , showing that unidirectional links that are as attractive as bidirectional links can only occur in components in which mean degree , , is an integer multiple of 2 .this matches the numerical data displayed in fig .[ pocfig6]a , which shows that is observed in components with and .it is remarkable that observing in a pair of collaborations is sufficient to determine the mean degree of the whole component .moreover components in which the mean degree is exactly 2 have to consist of a single cycle potentially with trees attached . in the numerical investigations we mostly observe cycles of bidirectional links to which trees of unidirectional links are attached , as shown in fig .[ pocfig7]b . in caseb ) the bidirectional links are more attractive targets for investments than unidirectional links . in typical networks with implies .now the stationarity condition , eq ., demands that , so that unidirectional links receive a higher investment than bidirectional links .by contrast the total investment made by an agent investing in bidirectional links is higher than the one made by agents investing in unidirectional links , i.e. this relationship restricts the connectivity in the bcc to , which implies , because the mean degree of the component can not be smaller than 2 if a subcomponent already has a degree greater than 2 .therefore , we find that unidirectional links that are less attractive than bidirectional links only occur in components in which the mean degree is larger than , but not an integer multiple of 2 ( cf . fig .[ pocfig6]a ) . as such links are only found at beyond the giant component transition they occur typically in large components as shown in fig .[ pocfig1 ] . in numerical investigations , we also observe some unidirectional links in components with ( cf .[ pocfig6]b ) . to explain these we have to consider case b ) but relax the assumption that both , and are above the ip .thus , we obtain case c ) , about which we know that the unidirectional links are less attractive than bidirectional links , , and that the unidirectional link only receives investments from one agent , i.e. 
, . moreover , implies and therefore . therefore , which shows that unidirectional links can only appear in components with if the investment received by unidirectional links is smaller than the investment received by bidirectional links . satisfying and simultaneously requires . the components with , in which such links are found , are trees formed by a core of bidirectional links , to which individual agents are attached by unidirectional links ( fig . [ pocfig7]a ) . chains of unidirectional links , as we have observed in case a ) , can not appear for , as this would mean that some agents would have one incoming and one outgoing link below the ip , which is ruled out by a trivial extension of the reasoning from sec . [ seccoordination ] . in this paper we have proposed a model for the formation of complex collaboration networks between self - interested agents . in this model the evolving network is described by a large system of deterministic differential equations allowing agents to maintain different levels of cooperation with different partners . we showed analytically that bidirectionally connected communities are formed , in which every agent makes the same total investment and every collaboration provides the same benefit . in contrast to models for cooperation on discrete networks , the present model thereby exhibits a high degree of coordination , which can be interpreted as a precursor of a social norm . we emphasized that coordination is generally achieved although single agents possess insufficient information for computing the total investment made by any other agent , and although the level of cooperation that is reached in a community is not fixed rigidly by external constraints . despite the high degree of coordination , we observed the appearance of privileged agents , reminiscent of the leaders emerging in . in the model proposed in the present paper , the privileged agents hold distinguished topological positions of high degree centrality , allowing them to extract much higher payoffs than other agents while making the same cooperative investment . however , we found that in the absence of further mechanisms reinforcing differences the assembled topologies were fairer than random graphs . although our primary aim was to investigate the formation of social networks , some aspects of the behavior of social agents are reminiscent of results reported in psychology . for instance our investigation showed that agents can react to the withdrawal of investment by a partner either by mutual withdrawal of resources or by reinforcing the collaboration with increased investment . our analysis provides a rationale which links the expected response to the withdrawal of resources to an inflection point of an assumed benefit function . furthermore , we investigated under which conditions non - reciprocated collaborations appear .
here, our analysis revealed that such unidirectional collaborations can appear in three distinct scenarios , which can be linked to topological properties of the evolving networks .in particular exploited agents whose investments are not reciprocated invest less than the average amount of resources in their links when occurring in small components , but more than the average amount , when integrated in large components .we believe that the results from the proposed model can be verified in laboratory experiments in which humans interact via a computer network .such experiments may confirm the topological properties of the self - organized networks reported here and may additionally provide insights into the perceived cost and benefit functions that humans attach to social interactions .furthermore , results of the proposed model may be verified by comparison with data on collaboration networks between people , firms or nations .this comparison may necessitate modifications of the model to allow for instance for slightly different cost functions for the players .most of these extensions are straight forward and should not alter the predictions of the model qualitatively .for instance in the case of heterogeneous cost functions , players will make different total investments , but will still approach an operating point in which the slope of their cost function is identical .further , coordination should persist even if the network of potential collaborations is not fully connected .finally , but perhaps most importantly our analytical results do not rely heavily on the assumption that only two agents participate in each collaboration .most of the results can therefore be straight - forwardly extended to the case of multi - agent collaborations .our analytical treatment suggests that the central assumption responsible for the emergence of coordination is that the benefit of a collaboration is shared between the collaborating agents , but is independent of their other collaborations , whereas the cost incurred by an agent s investment depends on the sum of all of an agent s investments .because this assumption seems to hold in a relatively large range of applications we believe that also the emergence of coordination and leaders by the mechanisms described here should be observable in a wide range of systems .the analysis presented in this paper has profited greatly from the dual nature of the model , combining aspects of dynamical systems and complex network theory . in particular our analytical investigations were based on the application of jacobi s signature criterion to the system s jacobian matrix .apart from the symmetry of the jacobian , this ` double - jacobi ' approach does not depend on specific features of model under consideration .the same approach can therefore be used to address significant extensions of the present model .we therefore believe that also beyond the field of social interactions , the double - jacobi approach will prove to be a useful tool for the analytical exploration of the weighted adaptive networks that appear in many applications .to determine the local asymptotic stability of the steady states we study the jacobian matrix defined by .the terms contained in this matrix can be grouped into three different types albeit evaluated at different points . for reasons of symmetry and consequentially , and . ordering the variables according to the mapping the jacobian can be written in the form which is shown here for . 
as each cooperation is determined by a pair of variables , each occurs twice forming quadratic subunits with the corresponding entries and .subsequently , we restrict ourselves to the submatrix of , which only captures variables belonging to ` non - vanishing ' links . as argued before , ` vanishing links ' , i.e. links with , are subject to stationarity condition .if , their stability is due to the boundary condition and is independent of the second derivatives of and .hence , they can be omitted from the subsequent analysis .this means in particular that the spectra of different topological components of the network decouple and can thus be treated independently .all eigenvalues of the real , symmetric matrix are real .according to jacobi s signature criterion the number of negative eigenvalues equals the number of changes of sign in the sequence where is the rank of and , . in a stable systemthe sequence has to alternate in every step .a necessary condition for stability is therefore alternation in the first steps . by means of an even number of column and row interchanges the above stated form of always be transformed such that the first block reads since we assume that is a non - vanishing link , and , hence , and to be in the same component , both agents make the same total investment .it follows from definition that and therewith that .thus , the sequence alternates if equation stipulates that and have the same sign .of the two possible scenarios the second is ruled out by eq . :if , it follows from eq . that , which contradicts .hence , the necessary conditions for stability , eqs . , , require if either agent or agent has another bilateral link , say , it is furthermore possible to transform by an even number of row and line interchanges such that the first block reads in this representation the sequence alternates if condition can then be written as inserting the definitions eqs.- in eqs . and yields the stability conditions cited in the main text as eqs . - . axelrod r and hamilton wd 1981 _ science _ * 211 * 13901396 doebeli m , hauert c and killingback t 2004 _ science _ * 306 * 859862 nowak ma and sigmund k 2004 _ science _ * 303 * 793799 nowak ma 2006 _ science _ * 314 *15601563 axelrod r 1984 _ the evolution of cooperation _( new york : basic books ) nowak ma and may rm 1992 _ nature _ * 92 * 826829 burtsev m and turchin p 2006 _ nature _ * 440 * 10411044 hauert c and doebeli m 2004 _ nature _ * 428 * 643646 eguluz vm , zimmerman mg , cela conde cj and san miguel m 2005 _ am .j. soc . _ * 110*(4 ) 9771008 santos fc and pacheco jm 2005 _ phys .lett . _ * 95 * 09810414 ohtsuki h , hauert c , lieberman e and nowak ma 2006 _ nature _ * 441 * 502505 santos fc , santos md and pacheco jm 2008 _ nature _ * 454 * 213216 macy mw 1991 _ am ._ * 97*(3 ) 808843 gould rv 1993 _ am .rev . _ * 58*(2 ) 182196 willers d 1999 _ network exchange theory _( westport : praeger ) fehr e and fischbacher u 2003 _ nature _ * 425 * 785791 palla g , barabsi al and vicsek t 2007 _ nature _ * 446 * 664667 braha d and bar - yam y 2009 _ adaptive networks _( heidelberg : springer ) 3950 gross t and blasius b 2008 _ jrs interface _ * 5 * 259271 gross t and sayama h ( eds . ) 2009 _ adaptive networks : theory , models , and data _( heidelberg : springer ) ashlock d , smucker md , stanleyea and tesfatsion l 1996 _ biosystems _ * 37 * 99125 bala v and goyal s 2001 _ j. econ . theory _* 17 * 101120 bornholdt s and rohlf t 2000 _ phys .lett . _ * 84 * 6114 pascuski m , bassler ke and corral a 2000 _ phys . 
rev ._ * 84 * 31853188 for a collection of respective publications see http://adaptive-networks.wikidot.com/publications skyrms b and pemantle r 2000 _ proc .usa _ * 97 * 93409346 zimmermann mg , eguluz vm san miguel m and spadaro a 2000 _ adv .complex syst . _ * 3 * 283297 zimmermann mg , eguluz vm and san miguel m 2004 _ phys .e _ * 69 * 065102 zimmermann mg and eguluz vm 2005 _ phys .e _ * 72 * 056118 fu f , wu t and wang l 2008 _ phys .e _ * 79 * 036101 pacheco jm , traulsen a and nowak ma 2006 _ j. theor .biol . _ * 243 * 437443 pacheco jm , traulsen a and nowak ma 2006 _ phys ._ * 97 * 25810314 szolnoki a , perc m and danku z 2008 _ euro . phys ._ * 84 * 50007 van segbroeck s , santos fc , lenaerts t and pacheco jm 2009 _ phys ._ * 102 * 058105 szolnoki a and perc m 2009 _ epl _ * 86 * 30007 poncela j , gmez - gardees j , flora lm , snchez a and moreno y 2008 _ plos one _ * 3 * e2449 poncela j , gmez - gardees j , traulsen a and moreno y 2009 _ new j. phys . _ * 11 * 083031 fu f , wu t and wang l 2009 _ phys . rev. e _ * 79 * 036101 suzuki r , kato m and arita t 2008 _ phys .e _ * 77 * 021911 ebel h and bornholdt s 2002 _ preprint _ arxiv : cond - mat/0211666 zschaler g , traulsen a and gross t 2009 _ preprint _arxiv:0910.0940 biely c , dragosits k and thurner s 2007 _ physica d _ * 228 * 4048 szolnoki a and perc m 2009 _ new j. phys . _ * 11 *093033 koenig md , battiston s , napoletano m and schweitzer f 2008 _ preprint _ cer - eth working paper 08/95 tomassini m , pestelacci e and luthi l 2010 _ biosystems _ * 99 * 5059 oliver p , marwell g and teixeira r 1985 _ am .j. sociol . _* 91 * 522556 heckathorn dd 1996 _ am .rev . _ * 61 * 250277 this can be shown by taking determinants with into account .baxter la 1984 _ j soc . pers .relat . _ * 1 * 2948 newman me 2003 _siam review _ * 45 * 167256 zeidler e , hackbusch w , schwarz hr and hunt b 2004 _ oxford user s guide to mathematics _ ( new york : oxford university press )
we study the self - assembly of a complex network of collaborations among self - interested agents . the agents can maintain different levels of cooperation with different partners . further , they continuously , selectively , and independently adapt the amount of resources allocated to each of their collaborations in order to maximize the obtained payoff . we show analytically that the system approaches a state in which the agents make identical investments , and links produce identical benefits . despite this high degree of social coordination some agents manage to secure privileged topological positions in the network enabling them to extract high payoffs . our analytical investigations provide a rationale for the emergence of unidirectional non - reciprocal collaborations and different responses to the withdrawal of a partner from an interaction that have been reported in the psychological literature . cooperation is the basis for complex organizational structures in biological as well as in social systems . the evolutionary and behavioural origin of cooperation is a subject of keen scientific interest , because the ubiquity of cooperation in nature seems to defy the often high costs incurred by the cooperating agent . evolutionary game theory has identified several mechanism allowing for the evolution and persistence of costly cooperation . in particular the emergence of cooperation is promoted if the interacting agents are distributed in some ( potentially abstract ) space , so that only certain agents can interact at any given time . in the context of social cooperation spatial structure can be appropriately modeled by a complex network , in which nodes represent agents , while the links correspond to collaborations . the topology of this network , i.e. , the specific configuration of nodes and links , has been shown to be of central importance for the level of cooperation that evolves . in social networks the topology is not static , but reacts to the behaviour of the agents . this defines an inherent dynamical interplay : while the agents s behaviour may depend on their topological neighbourhood , this neighbourhood is , at least in part , shaped through the agent s behavioural choices . networks containing such an dynamical interplay between the state of the nodes and the networks topology are called _ adaptive networks _ . while adaptive networks have been studied for some time in the social literature ( e.g. ) , pioneering work only recently triggered a wave of detailed dynamical investigations in physics . recent publications discuss simple cooperative games such as the one - shot prisoner s dilemma , the iterated prisoner s dilemma , and the snowdrift game on adaptive networks . they showed numerically and analytically that a significantly increased level of cooperation can be achieved if individuals are able rewire their links if links are formed and broken or if new agents are added to the network . moreover , it has been shown that the adaptive interplay between the agents strategies and the network topology can lead to the emergence of distinguished agents from an initially homogeneous population . while important progress has been made in the investigation of games on adaptive networks , it is mostly limited to discrete networks , in which the agents can only assume a small number of different states , say , unconditional cooperation with all neighbours and unconditional defection . by contrast , continuous adaptive networks have received considerably less attention . 
most current models therefore neglect the ability of intelligent agents to maintain different levels of cooperation with different self - chosen partners . in this paper we propose a weighted and directed adaptive network model in which agents continuously and selectively reinforce advantageous collaborations . after a brief description of the model , we show in sec . [ seccoordination ] that the network generally approaches a state in which all agents make the same total cooperative investment and every reciprocated investment yields the same benefit . despite the emergence of this high degree of coordination , the evolved networks are far from homogeneous . typically the agents distribute their total investment heterogeneously among their collaborations , and each collaboration receives different investments from the partners . in sec . [ secleaders ] , we show that this heterogeneity enables resource fluxes across the network , which allow agents holding distinguished topological positions to extract high payoffs . thereafter , in sec . [ secgiantcomp ] , we investigate further topological properties of the evolved networks and identify the transition in which large cooperating components are formed . finally , in sec . [ secexploitation ] , we focus on the appearance of unidirectional ( unreciprocated ) investments . specifically , we identify three distinct scenarios in which unidirectional collaborations can arise and discuss their implications for the interaction topology . our conclusions are summarized in sec . [ secdiscussion ] .
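To fix ideas, the following minimal sketch integrates one plausible instance of this kind of continuous investment adaptation: each agent holds a non-negative investment in every potential collaboration, the benefit of a link depends only on the total investment placed in that link, and the cost depends on the sum of all of an agent's investments. The sigmoidal benefit, quadratic cost, parameter values, and explicit Euler integration are illustrative assumptions, not the functional forms used in the paper.

```python
# Hypothetical sketch of continuous investment adaptation: x[i, j] is agent i's
# non-negative investment in the collaboration with j, the benefit of a link depends
# only on the total investment placed in it, and the cost depends on the sum of all
# of an agent's investments. Benefit, cost, parameters and the explicit Euler
# integration are illustrative assumptions only.
import numpy as np

def benefit_slope(z):                 # B'(z) for the assumed benefit B(z) = z^2 / (1 + z^2)
    return 2.0 * z / (1.0 + z**2) ** 2

def cost_slope(s, c=0.4):             # C'(s) for the assumed cost C(s) = 0.5 * c * s^2
    return c * s

def step(x, dt=0.01):
    """Explicit Euler step of dx_ij/dt = B'(x_ij + x_ji) - C'(sum_k x_ik)."""
    total = x.sum(axis=1, keepdims=True)              # each agent's total investment
    x = np.maximum(x + dt * (benefit_slope(x + x.T) - cost_slope(total)), 0.0)
    np.fill_diagonal(x, 0.0)                          # no self-collaboration
    return x

rng = np.random.default_rng(1)
n = 20
x = rng.uniform(0.0, 0.2, size=(n, n))
np.fill_diagonal(x, 0.0)
for _ in range(20000):
    x = step(x)
# agents that stay connected typically end up with very similar total investments,
# while the individual links they maintain remain heterogeneous
print(np.round(x.sum(axis=1), 3))
```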
curvature - related level set models have been proposed in many problems of image and surface processing , due to their well understood analytical features and nice smoothing properties . in this paper, we consider one of such models , related to the problem of reconstructing a surface from a discrete set of points on it . for ,let be a discrete set of points in , which are to be understood as points on a surface ( or a curve if ) , from which the surface itself has to be reconstructed .the set will be also termed as `` data set '' in what follows , and we assume that its points might be affected by noise .thus , the reconstructed surface is not expected to pass through each of the data set points , but rather ( depending on the criterion used to perform the reconstruction ) to provide some trade - off between an exact interpolation and a smooth behaviour . a level set model for this problem has been proposed by zhao et al . in , and leads to the following evolutive problem where is the euclidean distance from the set , and denotes the gradient operator . as customary in level set methods , the reconstructed surface at a given time is represented as the zero - level set of the solution , that is , in fact , is related to an energy functional in which the norm of the distance from is integrated on the whole surface ( see ) . more precisely , given a surface , we define the energy as the surface integral and look for a minimum of this functional ( this has the clear meaning of a compromise between the total surface of and its distance from the data set ) .once we define the evolution of an initial guess for the surface along the gradient flow of , and express the surface at time as the zero - level set of a function as in , we obtain .details about the derivation of this model may be found in , whereas well - posedness of can be proved in the framework of viscosity solutions ( see ) , which requires minimal regularity assumptions on the solution . here , we are interested in the stationary version of , that is , which plays the role of an euler lagrange equation for the energy and should be satisfied at local minima of the energy . in particular, a solution of will be obtained in what follows as a regime solution of for .we quote that similar techniques , retaining the regularizing effect of curvature - like terms , but possibly based on different evolution operators , are proposed for example in . of course , level set methods are not the only strategy used to solve the problem of surface reconstruction for example , segmentation techniques have been successfully proposed for this problem in recent years ( see ) . on the other hand ,the largest amount of literature on the topic is probably devoted to least squares techniques . while we give up a complete review of this line of research , we mention that a relatively recent and successful technique in this framework makes use of radial basis functions ( rbf ) space reconstructions , whose application to surface reconstruction stems from pioneering works published in the late 90s ( see also for a general review ) .the aim of the present work is to investigate the application of semi - lagrangian ( sl ) numerical techniques to , focusing in particular on their implementation with rbf space reconstructions .the application of rbf techniques to sl schemes has gained a certain popularity , although it has been restricted so far to the case of hyperbolic problems ( see , e.g. , and the literature therein ) . 
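Before turning to the schemes themselves, note that the only data-dependent ingredient of the model above is the distance function from the point cloud. The sketch below (an assumed synthetic setup, not code from the paper) evaluates this distance field on a Cartesian grid with a KD-tree query; the circle-shaped data set and noise level are arbitrary illustrative choices.

```python
# Assumed synthetic setup (not the paper's code): evaluate the Euclidean distance
# field d(x) from a scattered, noisy data set on a Cartesian grid with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.column_stack([0.5 + 0.3 * np.cos(theta), 0.5 + 0.3 * np.sin(theta)])
points += 0.005 * rng.standard_normal(points.shape)   # noisy samples of a circle

n = 101
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
nodes = np.column_stack([X.ravel(), Y.ravel()])
d = cKDTree(points).query(nodes)[0].reshape(n, n)     # distance of each node from the data set

print(d.min(), d.max())   # d is smallest near the samples and grows away from them
```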
on the other hand ,semi - lagrangian schemes for curvature - related equations have been first proposed in and has gone through a number of improvements and applications ( see , in particular , for an in - depth convergence analysis and for two applications to image processing , along with for a general review on sl schemes ) . in this framework , rbf would be expected to provide a more flexible tool to construct a sparse space reconstruction .in fact , since their construction is not based on a space grid , radial basis functions allow in principle for a sparse implementation , as well as for local refinements , although we are not aware of general strategies which could effectively handle rbfs in very disordered geometries .we propose therefore a _ structured , but local _ rbf space reconstruction in which one could better focus on the region close to the data set , instead of working on a full ( and computationally more expensive ) grid .this will be the final goal of the paper .we mention that , despite being constructed by different numerical tools , our scheme implements a `` localization '' of the numerical effort , much in the same spirit of the multigrid / multilevel techniques shown in .we finally remark that , while a model like might not be rated as the cutting - edge technique for surface reconstruction , yet the coupling of sl schemes with localized rbf space reconstructions shows a good potential in terms of accuracy and computational cost , and seems to be valuably applicable to a wider class of problems ( in particular , level set models as the one under consideration ) .the outline of the paper is as follows .section [ schema ] will review the basic principles of construction of a sl scheme for , as well as the underlying ideas of rbf interpolation .section [ test ] will show numerical tests of increasing difficulty in two and three space dimensions , with both full and reduced grids .we sketch in this section the construction of a sl approximation to .we start by sketching the basic ideas on the two dimensional case , then give the three - dimensional version of the scheme , and last describe the main improvements and modifications for the case of rbf space interpolations .the main feature of the sl scheme under consideration is to be explicit , yet not constrained by the classic `` parabolic cfl '' condition , typical in the explicit treatment of diffusion terms .the degenerate diffusion performed by the curvature operator is treated by means of a convex combination of ( interpolated ) values of the numerical solution at the previous step , as we will soon show .before introducing the scheme , we rewrite the mean curvature operator in such a way that the derivation of the method will be more natural .let us recall that where is the identity matrix , and is a matrix which projects the diffusion on the tangent plane of each level surface .the projection is a matrix of rank with eigenvectors corresponding to the eigenvalue , and can be written as for a matrix having these eigenvectors as columns . in the case , there is only one eigenvector orthogonal to the gradient , namely in this case , the operator can be rewritten as , the projection matrix is a matrix of rank 2 spanning the two - dimensional space orthogonal to the gradient of .the two orthonormal eigenvectors of are : and , once set , the mean curvature operator can be rewritten as we will derive the scheme from this form , which corresponds to the probabilistic interpretation described in . 
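Anticipating the scheme that is written out in detail next, the probabilistic reading can already be made concrete: the curvature term acts as a one-dimensional diffusion along the direction orthogonal to the gradient of the solution, which an explicit step can realize by averaging two interpolated values displaced around a drifted foot point. The sketch below is one possible rendering of this idea on a structured grid; the interpolation routine, the crude fallback when the gradient is nearly zero, and all step sizes are simplifying assumptions rather than the paper's exact choices.

```python
# One possible rendering of the semi-Lagrangian idea on a structured grid
# (a sketch under simplifying assumptions; the precise scheme is stated next).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def sl_step(u, d, grad_d, xs, ys, dt):
    """New value at each node: average of two interpolated values displaced by
    +/- sqrt(2 dt d) along the direction orthogonal to grad(u), after a drift
    of dt along grad(d)."""
    interp = RegularGridInterpolator((xs, ys), u, bounds_error=False, fill_value=None)
    gx, gy = np.gradient(u, xs, ys)
    norm = np.hypot(gx, gy)
    sx = np.where(norm > 1e-8, -gy / np.maximum(norm, 1e-8), 1.0)   # unit vector
    sy = np.where(norm > 1e-8,  gx / np.maximum(norm, 1e-8), 0.0)   # orthogonal to grad(u)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    bx, by = X + dt * grad_d[0], Y + dt * grad_d[1]
    r = np.sqrt(2.0 * dt * np.maximum(d, 0.0))
    up = interp(np.stack([bx + r * sx, by + r * sy], axis=-1))
    dn = interp(np.stack([bx - r * sx, by - r * sy], axis=-1))
    return 0.5 * (up + dn)

xs = ys = np.linspace(0.0, 1.0, 41)
X, Y = np.meshgrid(xs, ys, indexing="ij")
d = np.abs(np.hypot(X - 0.5, Y - 0.5) - 0.3)          # distance from a circle of radius 0.3
grad_d = np.gradient(d, xs, ys)
u = np.hypot(X - 0.5, Y - 0.5) - 0.45                 # initial level set function
for _ in range(50):
    u = sl_step(u, d, grad_d, xs, ys, dt=1e-3)
print(float(u.min()), float(u.max()))
```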
in the two - dimensional case , the scheme has the structure \left(x_j+\d t\ > d d(x_j)+\sqrt{2 \d t\ > d(x_j ) } \>\sigma^n_j\right)+ \\ \displaystyle \hspace{1.3 cm } + \frac{1}{2}i[u^n]\left(x_j+\d t\ >d d(x_j)-\sqrt{2 \d t\ > d(x_j)}\>\sigma^n_j\right ) & x_j \in { { \mathcal{g } } } , n\ge 0 \\ u^0_j = u^0(x_j ) & x_j \in { { \mathcal{g}}}\end{cases}\ ] ] where (x) ] and is therefore explicit and stable .this technique of treating singularities can be shown to be compatible with the definition of viscosity solution for ( see ) .following , we can also write a three - dimensional version of the scheme ( which will be used in the section on numerical tests to recover surfaces ) . the new form of will then be written as where and . using this matrix to perform an average of 4 points on the tangent plane, we write the scheme as \left(x_j+\d t\ > d d(x_j)+\sqrt{2\d t\ > d(x_j ) } \>\sigma^n_j\delta_i\right)\ ] ] in which the vectors are defined as for all combinations of the signs . via minor algebraic manipulations, can be rewritten as (x_j+\d t\ > d d(x_j ) ) - u^n_j\big)+\\ & & + \frac{d(x_j)}{|h^1_j|^2}\big(i[u^n](x_j+\d t\ >d d(x_j)+h^1_j ) - 2 i[u^n](x_j+\d t\ > d d(x_j ) ) + i[u^n](x_j+\d t\ > d d(x_j)-h^1_j)\big ) + \nonumber \\ & & + \frac{d(x_j)}{|h^2_j|^2 } \big(i[u^n](x_j+\d t\ >d d(x_j)+h^2_j ) - 2 i[u^n](x_j+\d t\ > d d(x_j ) ) + i[u^n](x_j+\d t\ > d d(x_j)-h^2_j))\big ) .\nonumber \end{aligned}\ ] ] where in the second and third line of , it is possible to recognize the second finite differences along the directions and , which has the effect of generating a diffusion along the tangent space of the level sets , in agreement with the curvature operator . in the first line of, an upwind approximation of the transport term appears . in ( approximately ) singular conditions , i.e. , when , the diffusion term is switched to a 7-point laplacian by analogy with .while the sl scheme has proved to be robust and relatively accurate in a variety of applications , we study in this work an adaptation to this specific case . in, the interest is in following the zero - level set of the solution , which is in turn supposed to stay in the neighbourhood of the data set .the computational effort has therefore to be concentrated in this latter region computing an accurate solution away from the data set is useless .keeping this idea in mind , we implement with a space reconstruction in the form of a radial basis function ( rbf ) interpolation , which lends itself to a sparse implementation ( see ) .in particular , we have used here the matlab rbf interpolation toolbox described in .the general structure of the space reconstruction under consideration is (x ) = c_0(u ) + c(u)\cdot x + \sum_i \lambda_i(u ) \phi_\rho(|x - x_i|)\ ] ] where the scalar , the vector and the coefficients ( all of which depend on ) are determined by imposing interpolation conditions , and a suitable closure of the system ( see ) .note that in this case , since ] , as shown in the left plot of fig .[ test1bgrid ] .we apply on the grid with time step and show ( in the right plot of fig .[ test1bgrid ] ) the trend of the normalized update between two successive iterations , the algorithm being stopped after 150 iterations .the final solution obtained by multiquadric rbfs ( with a scale factor , see ) is shown in the left plot of fig .[ test1bsol ] . 
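Before continuing with the remaining panels of this test, the interpolant just described can be made concrete: its coefficients solve a small dense linear system augmented with the usual polynomial closure conditions. The following sketch assembles and solves that system for a multiquadric basis; the basis function, scale factor, and test data are illustrative assumptions and do not reproduce the toolbox used in the paper.

```python
# Sketch of the interpolant I[u](x) = c0 + c . x + sum_i lambda_i phi(|x - x_i|):
# the coefficients solve a dense system with the closure conditions
# sum_i lambda_i = 0 and sum_i lambda_i x_i = 0. Basis, scale factor and test data
# are illustrative choices, not the settings of the toolbox used in the paper.
import numpy as np

def phi(r, rho=0.1):                                   # multiquadric radial function
    return np.sqrt(r**2 + rho**2)

def rbf_fit(centers, values, rho=0.1):
    n = centers.shape[0]
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), centers])          # polynomial part [1, x, y]
    A = np.block([[phi(r, rho), P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                            # lambda, (c0, cx, cy)

def rbf_eval(x, centers, lam, poly, rho=0.1):
    r = np.linalg.norm(x[None, :] - centers, axis=-1)
    return poly[0] + poly[1:] @ x + lam @ phi(r, rho)

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(60, 2))
values = np.sin(2 * np.pi * centers[:, 0]) * centers[:, 1]
lam, poly = rbf_fit(centers, values)
x = np.array([0.3, 0.7])
print(rbf_eval(x, centers, lam, poly), np.sin(2 * np.pi * 0.3) * 0.7)   # interpolant vs. truth
```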
the final solution obtained by linear rbfs and by linear rbfs applied on a finer gridare shown respectively in the center and in the right plot of fig .[ test1bsol ] .the reconstructed curve is very close to the theoretical forecast of a polygonal line , except for a slight smoothing of the non - convex , upper section of the shape ( see fig .[ test1bsol ] ) .regardless of the number of points in the data set , this gives the natural indication that the computational grid is still too coarse .note also that convergence of iterates tends to become quite slow , although it seems not to depend on the type of rbf used .in fact , we have included a single plot for both linear and multiquadric rbfs , whose convergence histories show an analogous behaviour .the same test is performed again on a reduced grid made of 106 point distributed around the data set according to with , as shown in the left plot of fig .[ test1dgrid ] , which also shows the anchor points ( marked as the outer black crosses ) together with the initial condition .the smaller area covered by the grid allows to decrease the total number of nodes to a fraction of about 12% of the original grid .the final solution obtained by multiquadric rbfs with and by linear rbfs are respectively shown in the left and right plots of fig .[ test1dsol ] .the result is essentially equivalent to what is obtained with the full grid ( fig .[ test1dsol ] ) .note that , comparing the right plots of fig .[ test1bgrid ] and [ test1dgrid ] , convergence of the iterative solver seems also to be comparable in terms of iteration number , but with a lower cost for a single iteration . as in the case of a full grid , convergence of the iterates for linear and multiquadric rbfs is very similar .we consider a data set made of 748 points chosen on a 3d heart shape in the set ^ 3 ] .the recovered surfaces obtained by applying to three different levels of noisy datasets ( ) , still on the same space grid , with time step and 80 iterations , are shown in fig .[ test2bnoise ] . as a third test, we consider a data set made of 4020 points uniformly chosen on a shape made by two intersecting cubes in ^ 3 ] .we consider a data set made of 2602 points ( about of the original dataset ) and a grid containing the dataset and additional nodes chosen on a narrow band of the data set , computed with from a full grid of nodes .we apply on the grid with time step .the algorithm is stopped after 150 iterations , showing the final solution and the convergence history in the two plots of fig .[ test7sol ] , while the data set and the grid are shown in fig .[ test7grid ] .carr , r.k .beatson , j.b .cherrie , t.j .mitchell , w.r .fright , b.c . mccallum and t.r .evans,_reconstruction and representation of 3d objects with radial basis functions _ , proceedings of acm siggraph 2001 , 6776 , 2001 e. franchini , s. morigi , f. sgallari , _ implicit shape reconstruction of unorganized points using pde - based deformable 3d manifolds _ , numerical mathematics : theory , methods and applications , * 3 * ( 2010 ) , 405430 j. ye , i. yanowsky , b. dong , r. gandlin , a. brandt , s. osher , _ multigrid narrow band surface reconstruction via level set functions _ ,proceedings of the 8th international symposium on visual computing isvc 2012 ( rethymnon , crete ) , 6170 , springer , 2012 h. zhao , s. osher , b. merriman and m. 
kang , _ implicit and non - parametric shape reconstruction from unorganized points using variational level set method _ , computer vision and image understanding , * 80 * ( 2000 ) , 295 - 319
We propose a semi-Lagrangian scheme coupled with radial basis function interpolation for approximating a curvature-related level set model proposed by Zhao et al. to reconstruct unknown surfaces from sparse data sets. The main advantages of the proposed scheme are the possibility of solving the level set model on unstructured grids, as well as of concentrating the reconstruction points in the neighbourhood of the data set, with a consequent reduction of the computational effort. Moreover, the scheme is explicit. Numerical tests show the accuracy and robustness of our approach in reconstructing curves and surfaces from relatively sparse data sets.
in this work , we study reaction - diffusion pde systems as well as their discrete analogues ( `` compartmental - systems '' ) . here , is the laplacian operator on a suitable spatial domain , and no flux ( neumann ) boundary conditions are assumed . in biology , a pde system of this form describes individuals ( particles , chemical species , etc . ) of different types , with respective abundances at time and location , that can react instantaneously , guided by the interaction rules encoded into the vector field , and can diffuse due to random motion .reaction - diffusion pde s play a key role in modeling intracellular dynamics and protein localization in cell processes such as cell division and eukaryotic chemotaxis ( e.g. , ) as well as in the modeling of differentiation in multi - cellular organisms , through the diffusion of morphogens which control heterogeneity in gene expression in different cells ( e.g. ) . from a bioengineering perspective, reaction - diffusion models can be used to model artificial mechanisms for achieving cellular heterogeneity in tissue homeostasis ( e.g. , ) .the `` symmetry breaking '' phenomenon of diffusion - induced , or turing , instability refers to the case where a dynamic equilibrium of the non - diffusing ode system is stable , but , at least for some diagonal positive matrices , the corresponding uniform state is unstable for the pde system .this phenomenon has been studied at least since turing s seminal work on pattern formation in morphogenesis , where he argued that chemicals might react and diffuse so as result in heterogeneous spatial patterns .subsequent work by gierer and meingardt produced a molecularly plausible minimal model , using two substances that combine local autocatalysis and long - ranging inhibition .since that early work , a variety of processes in physics , chemistry , biology , and many other areas have been studied from the point of view of diffusive instabilities , and the mathematics of the process has been extensively studied .most past work has focused on local stability analysis , through the analysis of the instability of nonuniform spatial modes of the linearized pde .nonlinear , global , results are usually proved under strong constraints on diffusion constants as they compare to the growth of the reaction part . in this note, we are interested in conditions on the reaction part that guarantee that no diffusion instability will occur , no matter what is the size of the diffusion matrix . we show that if the reaction system is `` contractive '' in the sense that trajectories globally and exponentially converge to each other with respect to a diagonally weighted norm , then the same property is inherited by the pde . 
in particular ,if there is an equilibrium , it will follow that this equilibrium is globally exponentially stable for the pde system .a similar result is also established for a discrete analog , in which a set of ode systems are diffusively interconnected .we were motivated by the desire to understand the important biological systems described in for which , as we will show , contractivity holds for diagonally weighted norms , but not with respect to diagonally weighted norms , for any .closely related work in the literature has dealt with the synchronization problem , in which one is interested in the convergence of trajectories to their space averages in weighted norms , for appropriate diffusion coefficients and laplacian eigenvalues , specifically , which used passivity ideas from control theory for systems with special structures such as cyclic systems , which extended this approach to more general passive structures , and which obtained a generalization involving a contraction - like diagonal stability condition .our work uses very different techniques , from nonlinear functional analysis for normed spaces , than the quadratic lyapunov function approaches , appropriate for hilbert spaces , followed in these references .we start by reviewing several useful concepts from nonlinear functional analysis , and proving certain technical properties for them . let be a normed space . for ,the right and left semi inner products are defined by [ existence - of - norm ] as every norm possesses left and right gateaux - differentials , the limits in exist and are finite . for more detailssee .the right and left semi inner products , induce the norm in the usual way : .conversely if the norm arises from an inner product , as when is a hilbert space , .moreover the right and left semi inner products satisfy the cauchy - schwarz inequalities : the following elementary properties of semi inner products are consequences of the properties of norms .see for the proof . for and , 1 . ; 2 . ; 3 . .in general , the semi inner product is not symmetric : let be a normed space and be a function , where .the strong least upper bound logarithmic lipschitz constants of induced by the norm , on , are defined by =\displaystyle\sup_{u\neq v\in y}\frac{(u - v , f(u)-f(v))_{\pm}}{\|u - v\|_x^2},\ ] ] or equivalently =\displaystyle\sup_{u\neq v\in y}\lim_{h\to0^{\pm}}\frac{1}{h}\left(\frac{\|u - v+h(f(u)-f(v))\|_x}{\|u - v\|_x}-1\right).\ ] ] if , we write instead of .[ subadd ] let be a normed space .for any , and any : 1 .\leq m_{y , x}^+[f]+m_{y , x}^+[g] ] for . 1 . by the definition of , and the triangle inequality for norms , we have &=&\displaystyle\sup_{u\neq v\in y}\lim_{h\to0^+}\frac{1}{h}\left(\frac{\|u - v+h((f+g)(u)-(f+g)(v))\|_x}{\|u - v\|_x}-1\right)\\ & = & \displaystyle\sup_{u\neq v\in y}\lim_{h\to0^+}\frac{1}{2h}\left(\frac{\|2(u - v)+2h((f+g)(u)-(f+g)(v))\|_x}{\|u - v\|_x}-2\right)\\ & \leq&\displaystyle\sup_{u\neqv\in y}\lim_{h\to0^+}\frac{1}{2h}\displaystyle\left(\frac{\|u - v+2h(f(u)-f(v))\|_x}{\|u - v\|_x}-1\right)+\\ & & \displaystyle\sup_{u\neq v\in y}\lim_{h\to0^+}\frac{1}{2h}\displaystyle\left(\frac{\|u - v+2h(g(u)-g(v))\|_x}{\|u - v\|_x}-1\right)\\ & = & m_{y , x}^+[f]+m_{y , x}^+[g ] \end{array}\ ] ] 2 . for ,the equality is trivial , because both sides are equal to zero . 
for : &=&\displaystyle\sup_{u\neq v\in y}\lim_{h\to0^{\pm}}\frac{1}{h}\left(\frac{\|u - v+h(\alpha f(u)-\alpha f(v))\|_x}{\|u - v\|_x}-1\right)\\ & = & \displaystyle\sup_{u\neq v\in y}\lim_{h\to0^{\pm}}\frac{\alpha}{\alpha h}\left(\frac{\|u - v+(\alpha h)(f(u)-f(v))\|_x}{\|u - v\|_x}-1\right)\\ & = & \alpha m_{y , x}^{\pm}[f ] .\end{array}\ ] ] let be a normed space and be a function , where .the least upper bound lipschitz constant of induced by the norm , on , is defined by =\displaystyle\sup_{u\neq v\in y}\frac{\|f(u)-f(v)\|_x}{\|u - v\|_x}.\ ] ] note that <\infty ] . for any fixed , using this inequality , we have : since this inequality holds for any , taking we have : from which the conclusion follows using and . the least upper bound ( lub ) logarithmic lipschitz constant generalizes the usual logarithmic norm ; for every matrix we have = \mu_x[a] ] .[ cols="^,^",options="header " , ] [ tab - mu ] for ease of reference , we summarize the main notations and definitions in table [ tab - def ] .[ tab - def ]suppose , a bounded domain in with smooth boundary and outward normal , and a subset have been fixed .we denote where is the set of twice continuously differentiable functions .in addition , we denote , where is the set of all continuous functions . note that for each , for , and for , and both are finite because is a continuous function on and is a compact subset of . for any , and any nonsingular , diagonal matrix , we introduce a -weighted norm on as follows : since without loss of generality we will assume for each . with a slight abuse of notation , we use the same symbol for a norm in : [ u = s ] for any , , where note that and .let , . for (the proof is analogous when ) , by the definitions of and note that this equality between weighted norms of functions and of vectors depends on our having taken the matrix to be diagonal .this is the key place where the assumption that is diagonal is being used .in this section , we study the reaction - diffusion pde : subject to the neumann boundary condition : [ as - pde ] in we assume : * is a ( globally ) lipschitz and twice continuously differentiable vector field with components : for some functions , where is a convex subset of .* , with , is called the diffusion matrix .* is a bounded domain in with smooth boundary and outward normal . by a solution of the pde on an interval , where , we mean a function , with , such that : 1 . for each , is continuously differentiable ; 2 . for each , is in ; and 3 . for each , and each , satisfies the above pde .theorems on existence and uniqueness for pde s such as can be found in standard references , e.g. .one must impose appropriate conditions on the vector field , on the boundary of , to insure invariance of .convexity of insures that the laplacian also preserves .since we are interested here in estimates relating pairs of solutions , we will not deal with existence and well - posedness .our results will refer to solutions already assumed to exist .pick any and suppose that is a solution of defined on .define by . 
also define the function as follows : for any , let denote an diagonal matrix of operators on with the operators on the diagonal .suppose that solves the pde , on an interval , for some ] , where , as defined before , =\displaystyle\lim_{h\to0^+}\sup_{x\neq y\in v}\frac{1}{h}\left(\frac{{{\left\vert x - y+h(f(x)-f(y ) ) \right\vert}_{p , q}}}{{{\left\vert x - y \right\vert}_{p , q}}}-1\right).\ ] ] now we state the main result of this section .[ main - result0 ] consider the pde and suppose assumption [ as - pde ] holds .let ] . to prove the lemma, we consider the following three cases : * case 1 . * . by the definition of ] as follows : observe that is continuously differentiable : note that in general is differentiable for and its derivative is . now by green s identity , the neumann boundary condition , and by the assumption that , it follows integrating by parts that : since and is continuous and , for small enough and therefore inequality holds . notethat by the definition of , any satisfies the neumann boundary condition .* case 2 . * .let since is a continuous function at , and since in case , we showed that for any , we conclude that . * case 3 . * . before proving this case we need the following lemma , which is an easy exercise in real analysis .( for completeness , we include a proof in an appendix . )[ p - limit ] let be a lebesgue measurable set with finite measure and let be a bounded , continuous function on . then is an increasing function of and its limit as is .for a fixed , pick with . by the definition of the norm , implies that for some , .let by lemma [ p - limit ] , is an increasing function of , hence for any , .now fix , , and .define as follows : in both cases and ( the proof is similar to the proof of in case , since both and ) .therefore , for some small , which implies that : now by lemma [ p - limit ] , since , and as , we can conclude that in other words , for a fixed , there exists such that for any , let .then for any , which implies [ mm ] for any function , any , and any positive diagonal matrix , \leq { m_{p , q}}[f],\ ] ] where is the lub logarithmic lipschitz constant induced by the norm defined on : . by the definition of ] . for any , by subadditivity of semi inner product , lemma [ key4 ] , and lemma [ mm ] , \leq m^+_{\mathbf{y } , \mathbf{x}}[\tilde{f}]\leq c.\ ] ] now using corollary [ key1 ] , for all [ thm - for - mu ] consider the reaction - diffusion system and suppose assumption [ as - pde ] holds .in addition suppose for some , , and a positive diagonal matrix , for all , where is the logarithmic norm induced by .then , for any two solutions of , we have [ tv ] in general the result of theorem [ thm - for - mu ] holds also for time varying systems : when we assume , where is the jacobian , for all and we omit the details of this easy generalization . to prove theorem [ thm - for - mu ], we use the following proposition , from .[ key3 ] let be a normed space and is a connected subset of .then for any ( globally ) lipschitz and continuously differentiable function , .\ ] ] moreover if is convex , then .\ ] ] the proof is immediate from theorem and proposition .[ contracting ] consider the reaction - diffusion system and suppose assumption [ as - pde ] holds . 
in addition suppose for some , and a positive diagonal matrix , for all .then is contracting in , meaning that solutions converge ( exponentially ) to each other , as .we provide an example of a biochemical model which can be shown to be contractive by applying corollary [ contracting ] when using a weighted norm , but which is not contractive using any weighted norm , so that previous results can not be applied .even more interestingly , this system is not contractive in any norm , .the example is of great interest in molecular systems biology , and contractivity in a weighted norm was shown for ode systems in , but the pde case was open .the variant with more enzymes discussed in can also be extended to the pde case in an analogous fashion .* example *. a typical biochemical reaction is one in which an enzyme ( whose concentration is quantified by the non - negative variable ) binds to a substrate ( whose concentration is quantified by ) , to produce a complex ( whose concentration is quantified by ) , and the enzyme is subject to degradation and dilution ( at rate , where ) and production according to an external signal , which we assume constant ( a similar result would apply if is time dependent , see remark [ tv ] ) . an entirely analogous system can be used to model a transcription factor binding to a promoter , as well as many other biological process of interest .the complete system of chemical reactions is given by : {k_2 } } y.\ ] ] we let the domain represent the part of the cytoplasm where these chemicals are free to diffuse .taking equal diffusion constants for and ( which is reasonable since typically and have approximately the same size ) , a natural model is given by a reaction diffusion system if we assume that initially and are uniformly distributed , it follows that , so is a constant .thus we can study the following reduced system : note that ] and .we first consider the case .we ll show that there exists such that for any small , .this will imply computing explicitly , we have : where we take a point of the form , for a which will be determined later . to show we ll equivalentlyshow that for any small enough : note that the of the left hand side of the above inequality is where therefore it suffices to show that for some value ( because implies that there exists such that for , holds ) .since , by assumption , is differentiable and hence , since choosing small enough such that and choosing , or equivalently , large enough , we can make . for , using table [ tab - mu ] , . for large enough , ( and ) and hence .in this section , we derive a result analogous to that for pde s for a network of identical ode models which are diffusively interconnected .we study systems of ode s as follows : [ as - ode ] in , we assume : * for a fixed convex subset of , say , is a function of the form : where , with for each , and is a ( globally ) lipschitz function . * for any define as follows : where is a positive diagonal matrix and .+ with a slight abuse of notation , we use the same symbol for a norm in : * is a continuously differentiable function . * with , which we call the diffusion matrix .* is a symmetric matrix and , where .we think of as the laplacian of a graph that describes the interconnections among component subsystems .[ ode ] consider the system and suppose assumption [ as - ode ] holds. 
let $ ] , where is the lub logarithmic lipschitz constant induced by the norm on defined by .then for any two solutions of , we have this theorem is proved by following the same steps as in the pde case and using lipschitz norms and properties of discrete laplacians on finite graphs . for odes, we can make some of the steps more explicit , and for purposes of exposition , we do so next .we start with several technical lemmas .the following elementary property of logarithmic norms is well - known . to see more properties of logarithmic normsee .[ mu - prop ] let be the largest real part of an eigenvalue of . then , . [ negm+p ] for any , , where is the strong least upper bound logarithmic lipschitz constant induced by the norm .let .note that since , by the definition of kronecker product , .in addition because is symmetric and is diagonal , is also symmetric and therefore . also the off diagonal entries of , like , are positive because is a laplacian matrix . by corollary [ mu = m+ ] , it suffices to show that for any .we first show that for . for , similarly for , now suppose . by lemma [ mu - prop ] , , where is an eigenvalue of . because , is an eigenvalue of ; therefore . to show that , by remark [ m+dini - linear ] , it suffices to show that where is the solution of . by the definition of dini derivative, it suffices to show that is a non - increasing function of .let , where with .here we abuse the notation and assume that .we ll show that .first we ll prove the following inequality : [ ab ] for any real and and : for , the inequality is trivial .suppose , and w.l.o.g and let .then it suffices to prove that for , let .we want to show that for . since and for , indeed . as we explained above, is symmetric and . using this information and the above inequality : since .note that is differentiable for .[ mux = mup ] let and denote the logarithmic norms induced by and respectively .then recall the following properties of kronecker product : * ; * if and are invertible , then .hence : \\ & = & \displaystyle\mu_p(-l\otimes qdq^{-1})\\ & = & \displaystyle\mu_p(-l\otimes d ) .\end{array}\ ] ] the last equality holds because both and are diagonal , and so are commutative . therefore [ negm+x ] let denote the strong least upper bound logarithmic lipschitz constant induced by the norm on . then , =0.\ ] ] by proposition [ negm+p ] , corollary [ mu = m+ ] and lemma [ mux = mup ] , \;=\;\displaystyle\mu_{p , q}(-l\otimes d ) \;=\;\displaystyle\mu_p(-l\otimes d ) \;=\;\displaystyle m_{p}^{+}[-l\otimes d ] \;=\;0.\ ] ] [ mm2 ]let denote the strong lub logarithmic lipschitz constant induced by the norm on and is the lub logarithmic lipschitz constant induced by the norm on .then , \leq { m_{p , q}}[f].\ ] ] the proof is exactly the same as the proof of proposition [ mm ] . by subadditivity of , proposition [ subadd ] , proposition [ negm+x ] , and lemma [ mm2 ] : \;\leq\;\displaystyle m^+_{p , q}[\tilde{f}]+m^+_{p , q}[-l\otimes d ] \;\leq\;m_{p , q}[f ] \;=\;c .\end{array}\ ] ] now using corollary [ key1 ] , [ force ] assume is a linear operator . then the proof is immediate by subadditivity of logarithmic norm , proposition [ mm2 ] , and corollary [ mu = m+ ] .note that ( [ force ] ) does nt need to hold if .consider the following system : where , and . 
in this example and .we ll show that for , while .by table , consider the reaction - diffusion ode and suppose assumption [ as - ode ] holds .in addition assume that is continuously differentiable and for all .then for any two solutions of we have the proof is immediate by theorem and proposition .in this section we will show that for any , and , and nodes which are interconnected according to a connected , undirected graph , if , where is a convex subset of and is the smallest positive eigenvalue of , then every solution of ( [ discrete ] ) : has the following property : where and is a row vector defined by to show this , we first state the following lemma from : [ deimling ] for any , for nodes , there is just one possible graph : the complete graph with graph laplacian matrix which results in the following ode system : note that the smallest and only positive eigenvalue of is . for a fixed solution of this ode , let , , and .then by the definition of , and lemma [ deimling ] we have , \\ & = & pw^p(t)m^+_{p , q}[f-2\tilde{d}]\\ & \leq&pw^p(t)\displaystyle\sup_{(x , y)\in v}\mu_{p , q}(j_f(x , y)-2d ) . \end{array}\ ] ] the last inequality results by proposition [ key3 ] , \geq m^+_{p , q}[f-2\tilde{d}].\ ] ] therefore , i.e. , in this case , where for , there are two possible graphs : first , the complete graph with graph laplacian matrix which leads to the following ode system : note that the smallest positive eigenvalue of is . for a fixed solution of this ode , define and as follows : and similar to case , we have \\ & \leq&pw^p(t)\displaystyle\sup_{(x , y , z)\in v}\mu_{p , q}\left(j_f(x , y , z)-3d\right ) , \end{array}\ ] ] which leads to where taking the roots , the second graph has the graph laplacian matrix , with the following ode system : note that the smallest positive eigenvalue of is .let again for a fixed solution , and . then by lemma [ ab ] , hence \\ & \leq&pw^p(t)\displaystyle\sup_{(x , y , z)\in v}\mu_{p , q}(j_f(x , y , z)-d ) .\end{array}.\ ] ] similar to the previous case : where in this case [ joo] let , be arbitrary sets and let be an arbitrary function . for any and , denote and the set of all real numbers such that for any , and let . then if and only if for every , . in this case . 1 . . hence by the definition , .2 . . for an arbitrary ll show that and then we can conclude that . since , there exists such that ( otherwise and so ) .this means that for all , which implies and hence .3 . . for an arbitrary ll show that and then we can conclude that .since , there exists such that ( otherwise ) .since we assumed that for every , , and since , , we have . by corollary [ n.e.intersection ] , because , then , i.e. there exists such that for all , , which implies and hence .now we suppose and fix .we ll show that .since , there exists such that .this means that there exists such that for all , , i.e. . since , there exists such that for all , , i.e. . for a fixed arbitrary norm on and a fixed arbitrary matrix , define by where . for any and , let and let be the set of all real numbers such that whenever ._ proof of claim ._ to apply proposition [ joo ] , we ll show that for , , where and is defined as above . by lemma [ f - dec ], is decreasing in which implies when . also by the definition of , implies that for any . on the other hand ,each is a closed subset of , so they are all compact .hence their intersection is non - empty ._ proof of claim ._ by lemma [ f - dec ] , since is non - increasing as , . 
by corollary [ n - dec ] , since is non - increasing as , by claim , the right hand side of the equalities in claim are equal , and therefore so are their left hand sides : which implies . fix .then there exists such that .indeed . using holder s inequality , is an increasing function of .now we ll show that as , .note that for any , and therefore . to prove the converse inequality , for any , we define which by the definition of has positive measure , i.e. but because , as .therefore for any arbitrary , which implies m. miller , m. hafner , e.d .sontag , n. davidsohn , s. subramanian , p. e. m. purnick , d. lauffenburger , and r. weiss .modular design of artificial tissue homeostasis : robust control through synthetic cellular heterogeneity ., 8:e1002579 , 2012 .
This paper proves that contractive ordinary differential equation systems remain contractive when diffusion is added. Thus, diffusive instabilities, in the sense of the Turing phenomenon, cannot arise for such systems. An important biochemical system is shown to satisfy the required conditions.
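As a computational companion to the conditions used above, the following sketch evaluates the standard logarithmic norms (matrix measures) mu_1, mu_2, mu_inf and their diagonally weighted counterparts mu_{p,Q}(A) = mu_p(Q A Q^{-1}) for a generic matrix; the example matrix and weights are arbitrary and only illustrate that a suitable diagonal weighting can turn a positive measure negative.

```python
# Generic sketch: logarithmic norms mu_1, mu_2, mu_inf and their diagonally weighted
# versions mu_{p,Q}(A) = mu_p(Q A Q^{-1}); matrix and weights are arbitrary examples.
import numpy as np

def mu(A, p):
    """Standard logarithmic norm of a square matrix for p in {1, 2, inf}."""
    if p == 1:
        return max(A[j, j] + np.sum(np.abs(A[:, j])) - np.abs(A[j, j]) for j in range(len(A)))
    if p == 2:
        return float(np.max(np.linalg.eigvalsh(0.5 * (A + A.T))))
    if p == np.inf:
        return max(A[i, i] + np.sum(np.abs(A[i, :])) - np.abs(A[i, i]) for i in range(len(A)))
    raise ValueError("p must be 1, 2 or inf")

def mu_weighted(A, p, q):
    Q = np.diag(q)
    return mu(Q @ A @ np.linalg.inv(Q), p)

A = np.array([[-2.0, 1.5], [0.3, -1.0]])
for p in (1, 2, np.inf):
    print(p, mu(A, p), mu_weighted(A, p, np.array([1.0, 4.0])))
# a (weighted) measure of the Jacobian that is negative uniformly over the state
# space is the kind of certificate that rules out diffusion-driven instability
```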
this work is concerned with the numerical solution of the steady - state equation of radiative transfer ( ert ) with isotropic physical coefficients and scattering kernel : where ( ) is a bounded domain with smooth boundary , is the unit sphere in , and ( being the unit outer normal vector at ) is the incoming part of the phase space boundary . for the only reason of simplifying the presentation , we have assumed that there is no incoming source on the boundary .moreover , we have assumed that the internal source is only a function of the spatial variable .in fact , this is not needed either for our algorithm to work ; see more discussions in section [ sec : concl ] .the equation of radiative transfer is a popular model for describing the propagation of particles in complex media .it appears in many fields of science and technology , ranging from classical fields such as nuclear engineering , astrophysics , and remote sensing , to modern applications such as biomedical optics , radiation therapy and treatment planning , and imaging in random media .the coefficients and have different physical meanings in different applications . in general , the coefficient measures the strength of the scattering of the underlying medium at , while measures the strength of the physical absorption of the medium .the coefficient measures the total absorption at due to both the physical absorption and absorption caused by scattering , that is the loss of particles from the current traveling direction into other directions due to scattering .numerical methods for solving the equation of radiative transfer has been extensively studied , see for instance and references therein for an overview . besides monte carlo type of methods that are based on stochastic representation of the ert , many different deterministic discretization schemes have been proposed and numerous iterative schemes , as well as preconditioning strategies , have been developed to solve the discretized systems ; see for instance and references therein .there are many challenging issues in the numerical solutions of the equation of radiative transfer .one of such challenges is the high - dimensionality involved .the ert is posed in phase space , meaning that the main unknown in the equation , in steady state , depends on both the spatial variable and the angular variable . in the spatial three - dimensional case, the unknown depends on five variables , three in the spatial domain and two in the angular domain .this poses significant challenges in terms of both solution speed and storage . in this work ,we propose a new method to solve the ert in isotropic media , that is , media whose physical coefficients and the scattering kernel do not depend on the angular variable , i.e. , the media absorb and scatter particles in the same manner for all directions .our method is based on the observation that when the underlying medium is isotropic , the angularly averaged ert solution , , satisfies a fredholm integral equation of the second type .this integral equation can be solved , using a fast multiple method , for .once this is done , we can plug into the ert to solve for itself .the rest of this paper is organized as follows . 
in section [ sec : integral ] , we re - formulate the ert into a fredholm integral equation of the second type for the unknown .we then propose in section [ sec : fmm ] a numerical procedure for solving the ert based on this integral formulation and implement an interpolation - based fast multipole method to solve the integral equation .important issues on the implementation of our method are discussed in section [ sec : impl ] . in section [ sec : num ] we present some numerical tests for the algorithm that we developed .concluding remarks are then offered in section [ sec : concl ] .our algorithm is based on the integral formulation of the ert .this is a well - developed subject .we refer to for more details .to present the formulation , let us first introduce a function defined as we can then rewrite the equation of radiative transfer , using the method of characteristics , into the following integral form : here is the distance it takes for a particle to go from to reach the domain boundary in the direction : the integral formulation in is classical and has been used to derive many theoretical results and numerical methods on the ert .the most crucial step of our algorithm is to integrate the integral formulation again over to obtain an integral equation for the local density : the result is a fredholm integral equation of the second type .it reads where the linear integral operator is defined as to simplify the expression for , let , and define the function which is nothing but the total absorption along the line segment between and .we can then express the integral operator as where the integral kernel is defined as with the surface area of the unit sphere . when and when . in the case where and are independent of the spatial variable, the integral kernel simplifies to the algorithm we propose here is based on the integral formulation of the ert for the variable that we derived in .we need the following result on the operator .the proof is standard .let and be bounded such that .then the linear operator , defined in , is compact . for any , we define since and are bounded , we conclude that is bounded , by boundedness of , and therefore .therefore , the operator defined as is a hilbert - schmidt integral operator and hence a compact operator .let be a sufficiently large ball that contains , that is , .for any , we have that where the last step comes from the young s convolution theorem .this implies that , when , we have therefore , as . since is compact for each ,we conclude , by for instance ( * ? ? ?* chapter 3 , theorem 5 ) , that is compact . from ( [ eq : ert intgrl u ] ), we can obtain that where .the operator is a fredholm operator , and by fredholm alternative theorem and the fact that the ert admits only the zero solution when , see for instance , we conclude that there is a unique solution to .let us finish this section with the following important observation .the kernel for the volume integral equation that we derived here takes the same form in the cases of homogeneous ( i.e. and do not depend on spatial variable ) and inhomogeneous ( i.e. 
and depend on spatial variable ) media .this means that the algorithm that we present in the next sections work for both homogeneous and inhomogeneous media , even though in the case of homogeneous media some simplifications can be made to reduce the computational costs of the algorithm .this is quite different for integral formulations of many other problems , such as the helmholtz or the laplace equation where only homogeneous problems can be done with explicit kernels ( that are mostly the corresponding green functions ) .our strategy of solving the ert is to first solve for and then solve for from .the main solution procedure is as follows .* algorithm i : general solution procedure * 1 .evaluate the source function analytically , or by : 1 . solving the following scattering - free transport equation for : 2 .evaluating .2 . use a krylov subspace method , such as the gmres algorithm or the minres algorithm , to solve the integral equation for .3 . recover the ert solution by 1 .evaluating the source ; 2 . solving the following scattering - free transport equation for : the solution of the scattering - free transport equations in the first and last steps can be done efficiently with a fast sweeping method such as that in or even analytically in special cases .therefore , our focus here will be on the solution of the integral equation in the second step .let us remark that one feature of the above method for solving the ert is that it _ does not _ require an explicit discretization over the angular variable .it is clear that the main computational cost of the algorithm is on the solution of the integral equation which involves only the spatial variable .therefore , besides the solution of the scattering - free transport equation , the computational complexity of the algorithm _ does not _ scale with the size of the angular discretization . in many applications , the main quantities of interestsis the local density , not . in these cases ,the step of algorithm i is not necessary .the computational complexity of the algorithm therefore is completely independent of the angular discretization .for the same reason , the storage requirement of the algorithm also depends only on the spatial discretization .there are many existing methods for the discretization of the integral equation with weakly singular kernel ; see for instance and references therein . herewe assume that we have a spatial discretization , consisting of nodes , of the integral equation that gives us the following approximation to the integral equation with the weight for the -th point . since is singular at , we set in the above summation and use the weight to control the self - contribution of to the summation .to solve the integral equation with a gmres or minres algorithm , we need to be able to evaluate matrix - vector product of the form for different vectors .therefore , the main computational cost will be determined by the computational cost of the evaluation of , that is the summation .direct evaluation of such a summation takes operations in general . in this work, we use the fast multipole method ( fmm ) , originally developed by greengard and rokhlin , to accelerate the evaluation of this matrix - vector product . 
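The core of the second step of Algorithm I is a Krylov solve in which only the action of the discretized operator is needed. The sketch below sets up a toy one-dimensional Fredholm equation of the second kind and solves it with GMRES through a matrix-free operator; the kernel, quadrature rule, and source are illustrative stand-ins, not the ERT kernel derived above.

```python
# Toy stand-in for the second step of Algorithm I: solve the discretized Fredholm
# equation of the second kind (I - K) U = eta with GMRES, supplying only the action
# of the operator. Kernel, quadrature and source are illustrative, not the ERT kernel.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 400
x = (np.arange(n) + 0.5) / n                    # midpoint nodes on (0, 1)
w = np.full(n, 1.0 / n)                         # quadrature weights

def kernel(xi, xj, sigma=2.0):
    return 0.5 * np.exp(-sigma * np.abs(xi - xj))

K = kernel(x[:, None], x[None, :]) * w[None, :] # dense only for this small toy problem
np.fill_diagonal(K, 0.0)                        # drop the (weakly singular) self term

A = LinearOperator((n, n), matvec=lambda v: v - K @ v)
eta = np.ones(n)                                # stand-in for the source term
U, info = gmres(A, eta)
print(info, np.linalg.norm(U - K @ U - eta))    # info == 0 signals convergence
```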
for the simplicity of implementation , we use the interpolation - based fmm proposed by fong and darve in . other efficient implementations of the fmm , see for instance and the references therein , may also be applied to our problem here ; this is left for future work . the fmm method of , based on chebyshev interpolation , works as follows . let be the first - kind chebyshev polynomial of degree defined on [ -1,1 ] . the kernel can then be approximated by the following interpolation formula where , being the set of chebyshev interpolation nodes , which are simply taken as the -dimensional tensor product of the chebyshev nodes of . the same approximation can be constructed when the kernel is defined on any regular domain by a linear transformation . if we now plug the approximation into the summation , we have , after a slight rearrangement , the following formula this formula allows us to evaluate efficiently in three steps by simply following the order of the summations : ( i ) evaluate , ; ( ii ) evaluate , ; and then ( iii ) evaluate , . if the computational costs of evaluating the interpolation polynomial and the kernel do not scale with and , then the costs of the three steps scale as , and respectively . therefore , the total cost scales as when is sufficiently small . in our implementation of the fong - darve fmm algorithm , we follow the standard multilevel approach with tree structures . the only special feature of our implementation is related to the evaluation of the kernel for the pair , which we describe in the next section .

we now briefly comment on some important issues in the implementation of the algorithm described in the previous section .

validity of the low - rank approximation . due to the boundedness of the exponential factor at and the fact that decays as a function of , our kernel in should admit the same , if not better , low - rank approximation as the kernel , which has been well studied in the fast multipole method community . this justifies the chebyshev interpolation in .

the computational cost . the most computationally expensive step of the fmm algorithm is step ( ii ) of evaluating , where we have to evaluate the integral kernel for different pairs . each evaluation of the kernel requires the evaluation of a line integral of the total absorption coefficient along the line that connects and . if the integral can be computed analytically , for instance when is constant , in which case the integral is simply , this evaluation is relatively cheap . otherwise , these evaluations have to be done numerically with selected quadrature rules . in many practical applications , the total absorption coefficient consists of a constant background with localized perturbations . in this case , we can think of as a function with periodic boundary conditions . we can therefore accelerate the evaluation of the line integrals with the fast fourier transform ( fft ) . assume that is sufficiently smooth to allow for the -term fourier representation : where is assumed to ensure that is real - valued . it is then straightforward to verify that the line integral of from to is given by where , , and . for a given set of chebyshev interpolation points , we have a fixed number of pairs of for which we need to evaluate .
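as a concrete illustration of the interpolation formula and the three - step evaluation , the sketch below implements a single - level , one - dimensional version of the chebyshev low - rank approximation for a pair of well - separated intervals . the actual method uses tensor - product nodes in two or three dimensions and the standard multilevel tree , so this is a simplified model rather than the implementation used here ; the function names , the one - dimensional stand - in kernel and the choice of p are illustrative .

```python
import numpy as np

def cheb_nodes(p, a, b):
    # p first-kind Chebyshev nodes mapped from [-1, 1] to [a, b]
    t = np.cos((2 * np.arange(p) + 1) * np.pi / (2 * p))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def interp_matrix(x, nodes, a, b):
    # S[i, m] = 1/p + (2/p) * sum_{k=1}^{p-1} T_k(x_i) T_k(node_m),
    # with points and nodes mapped back to [-1, 1]
    p = len(nodes)
    xs = np.clip((2 * x - (a + b)) / (b - a), -1.0, 1.0)
    ns = np.clip((2 * nodes - (a + b)) / (b - a), -1.0, 1.0)
    S = np.full((len(x), p), 1.0 / p)
    for k in range(1, p):
        S += (2.0 / p) * np.outer(np.cos(k * np.arccos(xs)), np.cos(k * np.arccos(ns)))
    return S

def lowrank_matvec(kernel, targets, sources, u, w, p, target_box, source_box):
    # three-step evaluation of sum_j w_j k(x_i, y_j) u_j with a rank-p kernel approximation:
    # (i) project the weighted sources onto the source-box Chebyshev nodes,
    # (ii) apply the small p x p node-to-node kernel matrix,
    # (iii) interpolate the result back to the target points.
    ax, bx = target_box
    ay, by = source_box
    xn, yn = cheb_nodes(p, ax, bx), cheb_nodes(p, ay, by)
    Sx = interp_matrix(targets, xn, ax, bx)       # targets -> target-box nodes
    Sy = interp_matrix(sources, yn, ay, by)       # sources -> source-box nodes
    moments = Sy.T @ (w * u)                      # step (i):  O(N p)
    Kmn = kernel(xn[:, None], yn[None, :])        # step (ii): O(p^2) kernel evaluations
    return Sx @ (Kmn @ moments)                   # step (iii): O(N p)

# example: well-separated source and target intervals, 1d stand-in for the transport kernel
kern = lambda s, t: np.exp(-10.0 * np.abs(s - t)) / np.abs(s - t)
rng = np.random.default_rng(0)
src = 0.05 + 0.3 * rng.random(500)
tgt = 0.65 + 0.3 * rng.random(400)
u, w = rng.random(500), np.full(500, 1.0 / 500)
approx = lowrank_matvec(kern, tgt, src, u, w, 8, (0.6, 1.0), (0.0, 0.4))
exact = (kern(tgt[:, None], src[None, :]) * (w * u)[None, :]).sum(axis=1)
```

the small node - to - node matrix in step ( ii ) is where the line integrals of the total absorption coefficient enter in our setting , which is why its evaluation dominates the setup cost discussed above .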
in our implementation , we cache all these kernel evaluations . these kernel evaluations are then reused without any extra calculation during the gmres iterations ; see for instance the numerical results in tab . [ tab : con coeff ] and tab . [ tab : var coeff ] of section [ sec : num ] .

fmm approximation accuracy . the accuracy of the solution to the ert with our numerical procedure depends mainly on two factors : the resolution of the spatial discretization , and the accuracy of the fast multipole approximation of the summation , the latter relying on the order of the chebyshev polynomial used . increasing the order of the polynomial will increase the accuracy of the approximation in general . however , it will also increase the computational cost of the algorithm , due to the increased cost of evaluating for instance . we therefore have to balance accuracy and cost . compared to existing kernels that have been studied in the fmm community , our kernel decays faster when the total absorption is large . we therefore have to use more chebyshev interpolation points in general to ensure the accuracy of the approximation . let be the numerical solution with a direct evaluation of the summation in and the fmm - accelerated numerical solution . when the two solutions are computed on the same mesh , a finer mesh will produce a larger error when the same number of interpolation points is used . this is because a finer mesh resolves structures that are harder to capture with the same interpolation polynomial . moreover , the accuracy of approximating by depends on the total absorption coefficient , since the larger is , the faster the exponential decay in the integral kernel . therefore , for the same order of interpolation , the larger is , the worse the approximation . we observe these phenomena in our numerical experiments ; see , for instance , the simulations in section [ sec : num ] .

we now present some numerical simulations to demonstrate the performance of our algorithm . we focus on the comparison between solving with a regular gmres solver and with our algorithm , i.e. , a gmres solver with fmm acceleration of the evaluation of . our main purpose is to demonstrate that the computational complexity of the fmm - accelerated gmres algorithm indeed scales linearly with respect to the size of the spatial discretization , while maintaining the desired accuracy . this means that the main cost of our algorithm for solving the ert is independent of the angular discretization . in all the simulations , we nondimensionalize the transport equation . all the simulations are done in the fixed square domain with physical absorption coefficient . we vary the scattering coefficient to test the performance of the algorithm in different regimes . the larger the scattering coefficient is , the more diffusively the solution of the ert behaves , since the size of the domain and the physical absorption coefficient are fixed . however , as we will see , the performance of our algorithm does not change dramatically from the low - scattering transport regime to the highly scattering diffusive regime . we introduce four time measures : ( i ) denotes the time cost of the direct summation for ; ( ii ) denotes the time cost of the fmm evaluation of ; ( iii ) denotes the time cost of the gmres algorithm with direct summation for solving ; and ( iv ) denotes the time cost of the gmres algorithm with fmm acceleration for solving .
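the kernel evaluations referred to above , and cached in our implementation , involve line integrals of the total absorption coefficient . when that coefficient is treated as a periodic function on the unit square , the fourier representation described in the previous section gives these line integrals in closed form . the sketch below reconstructs that standard identity under the stated assumptions ; since the exact symbols are elided above , it should be read as an illustration rather than as the paper's implementation , and the coefficient values are examples only .

```python
import numpy as np

def fourier_coeffs(sigma_samples):
    # Fourier coefficients of a periodic sigma_t sampled on an m x m grid of the unit square
    m = sigma_samples.shape[0]
    c = np.fft.fft2(sigma_samples) / (m * m)
    freq = np.fft.fftfreq(m, d=1.0 / m)          # integer frequencies
    kx, ky = np.meshgrid(freq, freq, indexing="ij")
    return c.ravel(), kx.ravel(), ky.ravel()

def line_integral(c, kx, ky, x, y, tol=1e-12):
    # integral of sigma_t along the straight segment from x to y, assuming
    # sigma_t(z) = sum_k c_k exp(2*pi*i*(kx_k*z0 + ky_k*z1)) on the periodic unit square
    d0, d1 = y[0] - x[0], y[1] - x[1]
    length = np.hypot(d0, d1)                    # |y - x|
    phase = np.exp(2j * np.pi * (kx * x[0] + ky * x[1]))
    kd = kx * d0 + ky * d1                       # k . (y - x)
    kd_safe = np.where(np.abs(kd) > tol, kd, 1.0)
    # integral over t in [0,1] of exp(2*pi*i*kd*t): (exp(2*pi*i*kd) - 1)/(2*pi*i*kd), or 1 if kd = 0
    fac = np.where(np.abs(kd) > tol,
                   (np.exp(2j * np.pi * kd) - 1.0) / (2j * np.pi * kd_safe),
                   1.0)
    return float(np.real(length * np.sum(c * phase * fac)))

# example: constant background with a smooth localized perturbation (values illustrative)
m = 64
g = np.arange(m) / m
Z0, Z1 = np.meshgrid(g, g, indexing="ij")
sigma_t = 10.2 + 2.0 * np.exp(-80.0 * ((Z0 - 0.5) ** 2 + (Z1 - 0.5) ** 2))
c, kx, ky = fourier_coeffs(sigma_t)
tau = line_integral(c, kx, ky, np.array([0.1, 0.2]), np.array([0.8, 0.9]))
```

once computed , these values are precisely what gets cached during the setup phase , which is why the gmres iteration times reported below are essentially the same for constant and variable coefficients .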
note that in our computations , we have cached all the line integrals needed when setting up the algorithm .therefore , the ( resp . ) does not include ( resp . ) .all the computational times shown below are based on a dell optiplex 745 pentium d 3.4ghz desktop with 16 gb ram . to measure the accuracy of the fmm - accelerated calculation , with respect to the solution of the regular discretization, we use the relative error where and are respectively the solutions with the direct gmres algorithm and the fmm - accelerated gmres algorithm .l*6cl & & & & & & relative error ' '' '' ' '' '' + ' '' '' 1,024 & 4 & 1.20e 01 & 3.51e 02 & 1.18e 00 & 3.71e 00 & 2.17e 04 + 4,096 & 4 & 6.65e 01 & 1.25e 01 & 4.02e 01 & 6.23e 01 & 3.39e 04 + 16,384 & 4 & 3.25e 00 & 4.98e 01 & 1.09e 03 & 9.93e 02 & 3.90e 04 + 65,536 & 4 & 1.50e 01 & 2.04e 00 & & & + 262,144 & 4 & 6.29e 01 & 1.00e 01 & & & + + 1,024 & 6 & 2.38e 01 & 4.95e 02 & 1.18e 00 & 3.71e 00 & 2.06e 06 + 4,096 & 6 & 1.38e 00 & 1.61e 01 & 4.02e 01 & 6.23e 01 & 3.02e 06 + 16,384 & 6 & 8.23e 00 & 8.83e 01 & 1.09e 03 & 9.93e 02 & 3.18e 06 + 65,536 & 6 & 3.59e 01 & 2.89e 00 & & & + 262,144 & 6 & 1.59e 02 & 1.23e 01 & & & + + 1024 & 9 & 5.43e 01 & 1.32e 01 & 1.18e 00 & 3.71e 00 & 9.90e 16 + 4096 & 9 & 3.64e 00 & 5.29e 01 & 4.02e 01 & 6.23e 01 & 4.24e 09 + 16,384 & 9 & 2.17e 01 & 2.80e 00 & 1.09e 03 & 9.93e 02 & 4.47e 09 + 65,536 & 9 & 1.10e 02 & 1.09e 01 & & & + 262,144 & 9 & 5.76e 02 & 4.87e 01 & & & + [ [ experiment - i . ] ] experiment i. + + + + + + + + + + + + + in the first set of numerical experiments , we perform simulations with a fixed scattering coefficient and total absorption coefficient ( which means the physical absorption is ) .the source function we used is a ring source illustrated in the left plot of fig .[ fig : sources ] . in tab .[ tab : con coeff ] we show comparisons in three groups with increasing number of chebyshev interpolation points : , and .we first note that , with reasonable relative approximation accuracy ( on the order of with ) , the ffm - gmres algorithm outperforms the regular gmres algorithm dramatically .this trend is kept when we increase the accuracy of the fmm approximation by increasing , , the number of chebyshev interpolation points .when the spatial discretization is too fine , it takes the regular gmres algorithm too much time to finish the calculations .however , the ffm - accelerated gmres can still solve the system in relatively short time .l*6cl & n & & & & & relative error ' '' '' ' '' '' + ' '' '' 1,024 & 4 & 2.01e 01 & 3.07e 02 & 8.46e 01 & 1.94e 00 & 2.16e 04 + 4,096 & 4 & 1.12e 00 & 9.55e 02 & 2.88e 01 & 3.26e 01 & 3.23e 04 + 16,384 & 4 & 5.26e 00 & 3.52e 01 & 7.78e 02 & 5.20e 02 & 3.69e 04 + 65,536 & 4 & 2.35e 01 & 1.62e 00 & & & + 262,144 & 4 & 1.01e 02 & 6.64e 00 & & & + + 1,024 & 6 & 3.19e 01 & 2.15e 02 & 8.46e 01 & 1.94e 00 & 1.73e 05 + 4,096 & 6 & 2.37e 00 & 1.27e 01 & 2.88e 01 & 3.26e 01 & 1.36e 05 + 16,384 & 6 & 1.30e 01 & 5.23e 01 & 7.78e 02 & 5.20e 02 & 6.83e 06 + 65,536 & 6 & 6.24e 01 & 2.22e 00 & & & + 262,144 & 6 & 2.89e 02 & 9.79e 00 & & & + + 1024 & 9 & 7.40e 01 & 8.72e 02 & 8.46e 01 & 1.94e 00 & 2.71e 15 + 4096 & 9 & 7.01e 00 & 4.44e 01 & 2.88e 01 & 3.26e 01 & 4.84e 06 + 16,384 & 9 & 4.18e 01 & 2.32e 00 & 7.78e 02 & 5.20e 02 & 3.09e 06 + 65,536 & 9 & 2.10e 02 & 1.05e 01 & & & + 262,144 & 9 & 1.02e 03 & 4.85e 01 & & & + [ [ experiment - ii . 
] ] experiment ii .+ + + + + + + + + + + + + + in the second set of numerical experiments , we repeat the simulations in experiment i for an inhomogeneous medium . the coefficients are given as we again use the a ring source illustrated in the left plot of fig .[ fig : sources ] . in tab .[ tab : var coeff ] we show comparison in three groups with increasing number of chebyshev interpolation points .the first noticeable difference between tab .[ tab : var coeff ] and tab .[ tab : con coeff ] is that the time it takes to evaluate the matrix - vector multiplication is now considerably more expensive .this is mainly due to the fact that for variable coefficient , we need to evaluate the integrals by numerical quadrature rules , while in the constant coefficient case the kernels are given analytically for any pair . in our implementation , we cached allthe line integrals so that they can be used repeatedly during gmres iterations .this is the reason why the solution costs for variable coefficient cases in tab .[ tab : var coeff ] is very similar to the corresponding constant coefficient cases in tab [ tab : con coeff ] .the overall computational costs again scale linearly with respect to the spatial discretization .l*6cl & n & & & relative error ' '' '' ' '' '' + 1024 & 4 & 2.2 & 2.0 & 8.39e 05 + 1024 & 4 & 5.2 & 5.0 & 1.93e 04 + 1024 & 4 & 10.2 & 10.0 & 4.13e 04 + + 1024 & 6 & 2.2 & 2.0 & 8.85e + 1024 & 6 & 5.2 & 5.0 & 1.82e + 1024 & 6 & 10.2 & 10.0 & 5.15e + + 1024 & 9 & 2.2 & 2.0 & 5.02e + 1024 & 9 & 5.2 & 5.0 & 7.91e + 1024 & 9 & 10.2 & 10.0 & 1.17e + l*6cl & n & & & relative error ' '' '' ' '' '' + 4096 & 4 & 2.2 & 2.0 & 1.15e 04 + 4096 & 4 & 5.2 & 5.0 & 3.07e 04 + 4096 & 4 & 10.2 & 10.0 & 9.08e 04 + + 4096 & 6 & 2.2 & 2.0 & 1.27e 06 + 4096 & 6 & 5.2 & 5.0 & 2.79e 06 + 4096 & 6 & 10.2 & 10.0 & 9.62e 06 + + 4096 & 9 & 2.2 & 2.0 & 1.39e 08 + 4096 & 9 & 5.2 & 5.0 & 2.46e 08 + 4096 & 9 & 10.2 & 10.0 & 6.86e 08+ l*6cl & n & & & relative error ' '' '' ' '' '' + 16384 & 4 & 2.2 & 2.0 & 1.26e 04 + 16384 & 4 & 5.2 & 5.0 & 3.60e 04 + 16384 & 4 & 10.2 & 10.0 & 1.20e 03 + + 16384 & 6 & 2.2 & 2.0 & 1.30e 06 + 16384 & 6 & 5.2 & 5.0 & 2.92e 06 + 16384 & 6 & 10.2 & 10.0 & 1.04e 05 + + 16384 & 9 & 2.2 & 2.0 & 1.82e 08 + 16384 & 9 & 5.2 & 5.0 & 3.20e 08 + 16384 & 9 & 10.2 & 10.0 & 9.71e 08 + . from top to bottom : , and . shownare ( from left to right ) : the solution and the error with , , and . ] except that . ] except that . ][ [ experiment - iii . ] ] experiment iii .+ + + + + + + + + + + + + + + in the third set of numerical experiments , we study the dependence of the computational cost of the algorithm on the scattering coefficient of the ert . 
we perform simulations using the source function that is illustrated in the right plot of fig .[ fig : sources ] .the results are summarized in tab .[ tab : periodic conv para-1024 ] , tab .[ tab : periodic conv para-4096 ] and tab .[ tab : periodic conv para-16384 ] for different scattering coefficients , , , and , with different levels of spatial discretizations .the solution by the gmres algorithm with direct summation as well as the error ( with being the solution with a fmm - accelerated gmres algorithm ) are shown in fig .[ fig : periodic 1024 ] , fig .[ fig : periodic 4096 ] , and fig .[ fig : periodic 16384 ] respectively for the domain with , and cells , using different numbers of chebyshev interpolation points .the results show that the error of the fmm approximation does not change dramatically with respect to the change of the scattering coefficient .that is , the algorithm we developed works in both diffusive regimes and transport regimes , as long as the medium is isotropic .l*6cl & n & & & relative error ' '' '' ' '' '' + 1024 & 4 & 2.2 & 2.0 & 9.53e 05 + 1024 & 4 & 5.2 & 5.0 & 2.13e 04 + 1024 & 4 & 10.2 & 10.0 & 4.50e 04 + + 1024 & 6 & 2.2 & 2.0 & 1.19e + 1024 & 6 & 5.2 & 5.0 & 2.09e + 1024 & 6 & 10.2 & 10.0 & 5.53e + + 1024 & 9 & 2.2 & 2.0 & 7.71e + 1024 & 9 & 5.2 & 5.0 & 8.47e + 1024 & 9 & 10.2 & 10.0 & 1.10e + l*6cl & n & & & relative error ' '' '' ' '' '' + 4096 & 4 & 2.2 & 2.0 & 1.24e 04 + 4096 & 4 & 5.2 & 5.0 & 3.28e 04 + 4096 & 4 & 10.2 & 10.0 & 9.58e 04 + + 4096 & 6 & 2.2 & 2.0 & 1.48e 06 + 4096 & 6 & 5.2 & 5.0 & 2.99e 06 + 4096 & 6 & 10.2 & 10.0 & 1.01e 05 + + 4096 & 9 & 2.2 & 2.0 & 1.85e 08 + 4096 & 9 & 5.2 & 5.0 & 2.94e 08 + 4096 & 9 & 10.2 & 10.0 & 7.68e 08 + [ [ experiment - iv . ] ] experiment iv .+ + + + + + + + + + + + + + we repeat here the numerical simulations in experiment iii with a different source function , the source function in the left plot of fig . [fig : sources ] .the relative error of the fmm - accelerated solutions are summarized in tab .[ tab : ring conv para-1024 ] , tab .[ tab : ring conv para-4096 ] , and tab .[ tab : ring conv para-16384 ] .the results are very similar respectively to those showed in tab .[ tab : periodic conv para-1024 ] , tab .[ tab : periodic conv para-4096 ] , and tab .[ tab : periodic conv para-16384 ] .this shows again that the performance of the algorithm does not depend on the strength of the scattering of the underlying medium .overall , in either diffusive or transport regime , we can achieve very good accuracy with only a few chebyshev interpolation points in each direction .the solution by the gmres algorithm with the direct summation , and the error are show in fig .[ fig : ring 1024 ] and fig .[ fig : ring 4096 ] . l*6cl & n & & & relative error ' '' '' ' '' '' + 16384 & 16 & 2.2 & 2.0 & 1.35e + 16384 & 16 & 5.2 & 5.0 & 3.82e + 16384 & 16 & 10.2 & 10.0 & 1.20e + + 16384 & 36 & 2.2 & 2.0 & 1.51e + 16384 & 36 & 5.2 & 5.0 & 3.14e + 16384 & 36 & 10.2 & 10.0 & 1.09e + + 16384 & 64 & 2.2 & 2.0 & 2.19e + 16384 & 64 & 5.2 & 5.0 & 3.60e + 16384 & 64 & 10.2 & 10.0 & 1.05e + with the source in the left plot of fig .[ fig : sources ] . from top to bottom : , and . shownare ( from left to right ) : the solution and the error with , , and . ] except that . 
to summarize , we presented in this work a fast numerical method for solving the equation of radiative transfer in isotropic media . the main idea of the method is to reformulate the ert as an integral equation of the second kind and then use the fast multipole technique to accelerate the solution of this integral equation . our numerical tests show that the algorithmic cost indeed scales linearly with respect to the size of the spatial component of the problem .

the method we proposed here has a few notable features . first , with the integral formulation , we avoid angular discretization of the ert in the most expensive part of the solution process . this in principle allows us to handle , with limited ram , large problems that would be hard to handle in , for instance , the discrete ordinates formulation . second , the kernel in our integral formulation of the ert takes the same form for homogeneous and inhomogeneous media . therefore , the algorithm we developed does not need to be modified when going from homogeneous media to inhomogeneous media . this is quite different from existing fast multipole based methods . that said , in homogeneous media the setup of our algorithm is relatively inexpensive , since the kernel in the corresponding integral equation is given explicitly . in inhomogeneous media , the setup requires evaluating the kernel for different pairs , which involves line integrals of the total absorption coefficient between and . this evaluation is more expensive than in the homogeneous case , but its cost is still relatively low . in many practically relevant problems , the coefficients can be treated as periodic functions , and fast fourier transform type techniques can be used to accelerate the setup process of the algorithm .
in our implementation of the fmm algorithm , we cached all the calculations that involve the evaluation of the line integrals . this does not cause a major storage problem , since the number of chebyshev interpolation nodes used in the implementation is always relatively small . let us also emphasize that , even though our formulation requires the underlying medium to be isotropic , the internal and boundary source functions need not be isotropic at all . in fact , the only thing that would change for the algorithm with an anisotropic source is the evaluation of . in addition , as we have seen from our numerical tests , the fmm approximation with a very small number of chebyshev interpolation nodes already gives relatively accurate approximations to the true numerical solutions . this suggests that we can probably use the algorithm with a small number of chebyshev interpolation points as a preconditioning strategy for a general transport solver for more complicated problems . we are currently exploring this direction .

to the best of our knowledge , what we proposed is the first algorithm for solving the ert within the framework of the fast multipole method . our contribution is mainly the introduction of the idea , not the implementation of fast multipole methods . indeed , our implementation is rather primitive and we believe it can be greatly improved , either by refining the current strategy or by exploring other approaches . the study in this short paper is by no means enough to draw conclusions on every aspect of the algorithm , for instance how the algorithm benchmarks against existing methods . however , the numerical simulations we have performed show that this is a promising method that is worth careful further investigation . we hope that this work can motivate more studies in this direction .

this work is partially supported by the national science foundation through grants dms-1321018 and dms-1620473 .
we propose in this work a fast numerical algorithm for solving the equation of radiative transfer ( ert ) in isotropic media . the algorithm has two steps . in the first step , we derive an integral equation for the angularly averaged ert solution by taking advantage of the isotropy of the scattering kernel , and solve the integral equation with a fast multipole method ( fmm ) . in the second step , we solve a scattering - free transport equation to recover the original ert solution . numerical simulations are presented to demonstrate the performance of the algorithm for both homogeneous and inhomogeneous media .

key words : equation of radiative transfer , integral equation , fast algorithm , fast multipole method , preconditioning .

ams subject classifications : 65f08 , 65n22 , 65n99 , 65r20 , 45k05